Disruptive Methods
Participatory Democracy
Participatory processes are increasingly employed by governments to explore new opportunities for democratic citizen engagement. These processes go beyond voting: citizens and other stakeholders (e.g. civil society organizations, governments, academia, and businesses) are invited to take part in ideating, debating, and implementing initiatives in the public sphere, contributing to drafting proposals, debating them, and carrying them out in collaboration with local governments.
Participatory budgeting has become a common form of participatory democracy in recent years: citizens vote to determine how particular funds (set aside by local authorities) are allocated towards community projects. The projects being voted upon are often co-created from citizens’ ideas, encouraging deliberation amongst the community and a greater sense of ownership over shared public resources. First introduced in Porto Alegre, Brazil, in 1989, participatory budgeting has since been adopted by countries on every continent, and has become widespread in Europe following its initial popularization in the early 2000s.
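The core mechanics of a participatory budgeting vote can be sketched in a few lines of code. The sketch below is a deliberately simplified illustration (project names, vote counts, and costs are invented): it funds the most-voted projects that still fit within a fixed budget. Real processes use far more varied rules, such as district quotas, multiple rounds, or deliberative stages.

```python
def allocate_budget(projects, total_budget):
    """Greedy allocation: fund the most-voted projects that still fit the budget.

    `projects` is a list of (name, votes, cost) tuples. This is a simplified
    illustration only; real participatory budgeting rules vary widely.
    """
    remaining = total_budget
    funded = []
    for name, votes, cost in sorted(projects, key=lambda p: p[1], reverse=True):
        if cost <= remaining:
            funded.append(name)
            remaining -= cost
    return funded

# Hypothetical proposals: (name, votes received, estimated cost in euros)
proposals = [
    ("Playground renovation", 1200, 80_000),
    ("Bike lane extension", 950, 150_000),
    ("Community garden", 700, 30_000),
    ("Street lighting", 400, 60_000),
]
print(allocate_budget(proposals, 200_000))
# → ['Playground renovation', 'Community garden', 'Street lighting']
```

Even this toy version surfaces a real design question noted in the criticisms below: who decides the total budget and the eligible projects before citizens ever vote?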
Other forms of participatory democracy are similarly adapting to the changing attitudes towards governance, technical developments, and drastic changes in the public sphere (due to societal disruptions like social media proliferation and Covid-19). Participatory democracy, often aided through the use of online voting and deliberation platforms, is currently being applied to various fields in the public domain including environmental monitoring, urban planning, education, and transportation and mobility.
Criticisms of participatory democracy note:
- Limitations and difficulties in including a diverse, representative portion of the population;
- A tendency towards over-reliance on technology (which in turn may exclude certain people from the process);
- A scope of participation (for example, budget, topic, or area) that is often limited and pre-determined by authorities;
- The difficulties, uncertainties, and long-term efforts inherent in participation.
The field of participatory democracy shows signs of addressing these criticisms in practice. Practitioners interviewed for the case studies below indicated an awareness of the challenges faced in the field. Current projects can and are taking steps to improve participatory practices by:
- Dedicating resources towards identifying people and groups who may traditionally be excluded from participatory practices, and working to include them (e.g. by hosting live events in a given neighbourhood, collaborating with established community groups, or communicating through media outlets already familiar to a given group);
- When possible, hosting live meetings rather than working only digitally;
- Allowing the boundaries of participation themselves to be a subject of participatory governance (e.g. allowing voters to decide how much budget will be dedicated to a participatory budgeting process);
- Tailoring budgets and project plans to account for experimentation and uncertainty;
- Being critical and realistic: asking questions like “To what extent is this process actually participatory?” and “What can we realistically achieve given our budget and mandate?”
Participatory budgeting in the city of Helsinki
Mobility Urban Values in Amsterdam
Participatory Budgeting in Madrid: Decide Madrid
Emerging Data Governance Models
There is a need to organize the growing amount of data about people and their environments. On the one hand, this organization can be considered in practical terms of interoperability: How can various datasets come together to complement one another and be useful for a wide range of people? On the other hand, the organization and governance of data also creates new dilemmas and opportunities around the ownership, control, and value of that data. Emerging data governance models – data commons, data collaboratives, and data cooperatives among them – are experimenting with disruptive new ways (and sometimes rediscovering old ways) of organizing, sharing, and governing data. One of the fundamental questions explored through emerging data governance models is: How can data be valuable for and controlled by those who actually create that data (such as citizens moving around their own city)?
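The interoperability question above can be made concrete with a small sketch. The datasets, field names, and segment identifiers below are invented for illustration: the point is simply that two civic datasets keyed on a shared identifier (here, a street segment ID) can be joined so that each complements the other.

```python
# Hypothetical civic datasets, each keyed on a shared street-segment ID.
# All field names and values are invented for this sketch.
air_quality = {
    "SEG-001": {"no2_ug_m3": 38},
    "SEG-002": {"no2_ug_m3": 52},
}
traffic = {
    "SEG-001": {"vehicles_per_hour": 420},
    "SEG-002": {"vehicles_per_hour": 910},
}

def join_on_segment(*datasets):
    """Merge records that share a segment ID across any number of datasets."""
    merged = {}
    for dataset in datasets:
        for segment_id, fields in dataset.items():
            merged.setdefault(segment_id, {}).update(fields)
    return merged

combined = join_on_segment(air_quality, traffic)
print(combined["SEG-002"])
# → {'no2_ug_m3': 52, 'vehicles_per_hour': 910}
```

In practice, agreeing on that shared key (and on who controls the combined dataset) is exactly where the governance questions discussed in this section arise.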
The case studies below provide insight into the problems and opportunities encountered when challenging established (and often exploitative) modes of centralised data ownership and surveillance:
- Working with sensitive data, such as healthcare records or personal location, requires thoughtfully translating security and personal privacy concerns into secure, operable technology.
- Functioning, complete data trusts are difficult to develop. Currently, many exist as prototypes, or as initiatives which provide one (but not all) of a data trust’s many necessary aspects. Those aiming to implement such a model should not do so alone: the path towards development is long and complex, and requires specialisation and coordination amongst a group of dedicated actors.
- In practice, novel approaches to data governance may encounter blurry legal boundaries. Corporate ownership may conflict with GDPR and other laws and rights related to privacy and personal data ownership. Many specific legal questions are yet to be fully resolved.
- The terminology surrounding data governance models can be unhelpful. There are various uses, understandings, and intentions behind terms like data commons, data cooperatives, and data collaboratives, which often overlap, confuse, or conflict with one another. It is thus crucial to consider the ownership, options, and organization around the data in order to assess the actual workings of any particular data commons, collaborative, or cooperative. Mulgan and Straub of NESTA helpfully address this issue by opting for the term ‘data trust’ to ‘broadly denote institutions that work within the law to provide governance support for processing data and creating value in a trustworthy manner’.
The field of data governance appears to be in a moment of flux, ripe for experimentation as grassroots organisations, companies, cities, and others adopt novel practices in data governance which, in turn, provide specific new insights and questions to be explored. Examples of data trusts demonstrate their potential to give people more power over their own lives and information, and to give citizens more agency in their cities.
Useful resources to learn more about emerging data governance models include:
- Mozilla’s Data Futures: Research to shift power through data governance
- The new ecosystem of trust: How data trusts, collaboratives and coops can help govern data for the maximum public benefit, by Vincent Straub and Geoff Mulgan of NESTA
- NYU GovLab’s DataCollaboratives.org
- Waag’s Commons Lab
References
Mozilla (2020). Data Futures: Research to shift power through data governance. Accessed at https://foundation.mozilla.org/en/initiatives/data-futures/
Mulgan, Geoff & Straub, Vincent. (2019). The New Ecosystem of Trust: How data trusts, collaboratives and coops can help govern data for the maximum public benefit. Nesta.org. Accessed at https://www.nesta.org.uk/blog/new-ecosystem-trust/
NYU GovLab. Data Collaboratives: Creating public value by exchanging data. Accessed at https://datacollaboratives.org/
Health Data Commons
Driver’s Seat
Mobility Umbrellas
Mobility is a multidisciplinary subject. It is intertwined with urban planning, economics, geography, physics, democracy, and more. Because of this multidisciplinary nature, mobility projects are often situated between or within separate departments. In response to this internal fragmentation, many cities have taken (and are taking) steps to bring their mobility initiatives together under a unified group. These ‘mobility umbrellas’ bring together people, projects, and datasets, giving mobility initiatives a more centralised home within municipalities where they can share findings and knowledge and increase interoperability, collaboration, and uptake.
When developing mobility umbrellas, governments tend to face challenges regarding interoperability; a large technical and developmental workload; ‘wicked’ problems involving a wide range of factors requiring diverse expertise; maintenance of services built during technical development; and positioning the mobility umbrella within existing organisational structures.
Mobility umbrellas face a general challenge: the problems they address are often immediate, tangible, and difficult, while their benefits may be more general and long term. Potential benefits include better services (for example, maps with integrated datasets); less repetition, overlap, and duplication of efforts; and the opportunity to realign (mobility) decision-making processes, ideally including citizens further by making projects participatory, data open and accessible, and decisions transparent. Such benefits are more likely to occur when a department leverages the development of a mobility umbrella to create cultural changes which bring in new talent and simplify workflows.
Note that the term ‘mobility umbrella’ was developed by the writing team in response to this apparent development. While not used outside the context of this report, the term refers to a trend which likely applies to many similar initiatives in municipalities not identified here. In addition to the case studies below, related efforts that fit the description of this budding phenomenon include those listed in the references.
References
Gemeente Amsterdam. Slimme mobiliteit met MobiLab. Accessed at https://www.amsterdam.nl/wonen-leefomgeving/innovatie/smart-mobility/slimme-mobiliteit-mobilab/
Jätkäsaari Mobility Lab. https://mobilitylab.hel.fi/
The Sustainable Mobility Forum of Bilbao. https://pmus.bilbao.eus/
Smart CitySDK (Amsterdam)
Messina Municipality Data Collection/Exposure
Active Cities
Active cities aim to motivate citizens to move and interact in community-driven ways in public spaces. Sometimes this takes the form of mobility initiatives; in other cases active cities may promote green, environmentally friendly living, or serve to spur face-to-face interaction among citizens. Whatever the goal may be, active cities tend to take a playful approach that imagines new active uses for existing public space.
Active cities hold potential to be places where people come together as communities to drive meaningful change that is owned and directed by the citizens themselves. Achieving this, however, is neither easy, nor straightforward, nor quick, nor cheap, and can easily fall into the traps and dilemmas encountered by smart city initiatives more generally. Questions to help understand this dynamic include:
- To what extent does an initiative answer community needs from the bottom up, rather than prescribe solutions from the top down?
- Who owns the data gathered by these initiatives? How is it managed?
- To what extent does the initiative involve surveillance, or produce other outcomes that run counter to human rights and shared values?
- In the end, who really benefits? Companies? Governments? Citizens? Which ones?
Co-creative approaches, shared ownership, and flexible, transparent project plans can all help to keep active city initiatives on track – placing citizens at the center of priorities and decision-making in their own cities.
Boston Beta Blocks
Lisbon E-Bike Initiative and Pop-Up Lanes
Disruptive Technology
AI/Algorithms in the Public Sector
Artificial Intelligence (AI) and algorithms are gaining in complexity. As their complexity rises, they become more difficult to assess and understand, becoming ‘black boxes’ in which it may be impossible for a human to know how the technology works, or how it came to a certain conclusion.
These ‘black boxes’ may be especially problematic when they are part of decision-making processes (in democratic governments, no less). Increasingly, AI and algorithms are making decisions that affect who has access to what, under which circumstances. Even in the most banal applications, this can have unanticipated consequences regarding power, citizens, and state.
The two case studies below indicate the wide range of ways in which AI is applied by governments. Charging stations for electric vehicles may seem straightforward, but even here issues of priority, access, and discrimination arise as public decision-making processes (in this case, determining who has access to electric vehicle charging stations, in which order, at which cost, and under which conditions) are increasingly automated. Gender violence, in contrast, presents a more complex and pressing problem: the stakes are high, and errors in prediction models can carry a high cost in terms of violence, human suffering, and injustice. Both cases indicate the need for openness, transparency, and human oversight in the design and deployment of AI.
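One alternative to a black-box model is a fully inspectable decision rule. The sketch below is not drawn from any actual charging system; the criteria and weights are invented for illustration. Its point is that when every factor is explicit and every decision carries its reasons, the rules themselves can be published, audited, and debated in the public sphere.

```python
def charging_priority(request):
    """Score a charging request with explicit, inspectable rules.

    A transparent alternative to an opaque model: every criterion and
    weight is visible, and each decision returns its reasons. All rules
    here are hypothetical, for illustration only.
    """
    score = 0
    reasons = []
    if request.get("shared_vehicle"):
        score += 2
        reasons.append("shared vehicle (+2)")
    if request.get("battery_percent", 100) < 20:
        score += 3
        reasons.append("battery below 20% (+3)")
    if request.get("no_home_charger"):
        score += 1
        reasons.append("no home charging available (+1)")
    return score, reasons

score, reasons = charging_priority(
    {"shared_vehicle": True, "battery_percent": 15, "no_home_charger": False}
)
print(score, reasons)
# → 5 ['shared vehicle (+2)', 'battery below 20% (+3)']
```

Rule-based scoring like this trades predictive power for legibility; whether that trade-off is acceptable is itself a question for public deliberation, as noted below.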
Generally, it seems that the power of a technology is proportional to its potential to both help and harm. Obvious questions surrounding AI are: Is it worth the risk? Under what circumstances can it (and can it not) be used? What measures keep AI in check? These questions should be explored further, not only by those who encounter and work with specific instances of AI, but also by citizens and leaders via deliberation in the public sphere.
VioGén 5.0
Public Stack for charging infrastructure
Ethical Guidelines for the use of AI and Algorithms
Contributors: Max Kortlander and Petra Biro
There are many existing guidelines for the use of AI and algorithms. For Urbanite partners and others working with AI in Europe, the EC Artificial Intelligence Strategy’s ‘Assessment List for Trustworthy AI’ (discussed in our case study) is essential reading. The EC’s guidelines aim to ensure that the European Convention on Human Rights is applied to AI. In addition to GDPR, there are other national, regional, and local guidelines and requirements which vary between governments but may nonetheless be necessary to consult when working with AI in Europe.
Other (often independent) guidelines can aid the ethical implementation of AI and algorithms. These guidelines vary in all sorts of ways: the level to which they are legally enforced, their technical specificity, their aims and goals, and their applicability to various sectors and scenarios. As part of Waag’s Transparency Lab, Petra Biro contributed a summary and analysis of existing AI guidelines, almost all of which agree upon the principles of explainability, fairness, and accountability, but vary in their approach to ensuring these principles: ‘For example, AI4People focuses on more abstract principles, while the Center for Democracy and Technology (CDT) offers concrete technical considerations’ (Biro). The organization Algorithm Watch has compiled a comprehensive AI Ethics Guidelines Global Inventory, containing a large set of AI ethics guidelines from various sources and sectors.
In addition to the EC’s Assessment List, other useful starting points for ethical AI development include: FATML’s Principles for Accountable Algorithms and Social Impact Statement for Algorithms; and AINow’s Algorithmic Impact Assessment.
The existence of so many non-binding guidelines and assessment tools is indicative of the problem that, for now, the burden of choosing and ensuring an ethical approach to AI lies on developers. This problem could be addressed through enforceable, accountable shared rules for the development and use of AI, and by programs which fund (human) resources to help public administrations, companies, and others reduce the burden of compliance. There is movement in this direction within the European Union and other democratic countries. A comprehensive approach should offer concrete technical options that adhere to fundamental human rights by design in order to uphold civil rights in light of AI’s further use and development.
References
Algorithm Watch. AI Ethics Guidelines Global Inventory. https://inventory.algorithmwatch.org/
Biro, Petra (2019). Algorithm Says No: Ethical guidelines for AI Systems. Waag. https://waag.org/en/article/algorithm-says-no-ethical-guidelines-ai-systems
Biro, Petra (2019). Analysis of ethical guidelines for AI systems. Waag. PDF accessed at https://waag.org/sites/waag/files/2019-12/Analysis%20of%20ethical%20guidelines%20for%20AI%20systems_Waag.pdf
Center for Democracy and Technology. https://cdt.org/
FATML. Principles for Accountable Algorithms and a Social Impact Statement for Algorithms. https://www.fatml.org/resources/principles-for-accountable-algorithms
Floridi, L., Cowls, J., Beltrametti, M. et al. AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds & Machines 28, 689–707 (2018). https://doi.org/10.1007/s11023-018-9482-5
High-Level Expert Group on Artificial Intelligence set up by the European Commission (2020). The Assessment List for Trustworthy Artificial Intelligence (ALTAI). PDF accessed at https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=68342
Meijer, A.J., Schaefer, M.T. & Branderhorst, Martiene (2019). Principes voor goed lokaal bestuur in de digitale samenleving - Een aanzet tot een normatief kader. Bestuurswetenschappen, 73 (4), (pp. 8-23) (16 p.). Accessed at http://dspace.library.uu.nl/handle/1874/389163
TADA. https://tada.city/