Branching Out: A Third Legislative Chamber for the AI Age

The proliferation and collection of large amounts of citizens’ data has led to the rise of "political machines" – AI systems used in government to make decisions around resource allocation. Political anthropologist Eduardo Albrecht argues that establishing a "Third House" – a new legislative body specifically designed to oversee the deployment and operation of political machines – could enable citizens to meaningfully engage with and oversee the AI tools used by their government.

Eduardo Albrecht

In an era where artificial intelligence increasingly mediates our interactions with government, the relationship between citizens and the state is being fundamentally rewired. The analogy is straightforward: just as self-driving cars rely on environmental data to navigate safely, "driverless government" depends on vast repositories of citizen data to function efficiently. The more data we produce about ourselves (knowingly or unknowingly, willingly or unwillingly), the more government functions can be automated.

The proliferation and collection of large amounts of citizens’ data has led to the rise of “political machines” – AI systems used in government to make decisions around resource allocation. Political machines are replacing ever larger, and better paid, groups of human bureaucrats. We see this happening, for example, with the US Department of Homeland Security's Automated Targeting System, which analyzes travelers' data to determine their risk level, a task previously done by human agents. Similar systems are emerging across the globe, from Estonia's e-governance infrastructure to Singapore's Smart Nation initiative.

This shift parallels an accelerating fusion of our physical and digital identities. Government decisions based on the digital data traces we leave behind impact our actual physical selves. In turn, our physical selves produce more and more data points for the political machines to navigate. In the eyes of the state, the physical and the digital citizen are becoming melded together.

As I argue in this piece, this circumstance is creating a new kind of state power that increasingly bypasses citizens' individual judgment. To counter this, I propose creating a "Third House" of government – a virtual, omnipresent vehicle through which citizens can meaningfully engage with and oversee the AI tools used by their government. This Third House would establish consensus around the ethical parameters guiding political machines and ensure citizens maintain control over their digital representation in governance systems.

What is at Stake?

The proliferation of political machines allows state authorities to enter into hitherto private spaces. The digital version of the citizen has opened a door into the mind of the physical citizen. It is now possible to use the data traces left behind by real people to purportedly surmise what they are thinking – and what is more, to know how best to influence that thinking in socially desired directions. 

Recent developments confirm these trends. Government-funded initiatives have been exploring the use of AI for cognitive security, focusing on how automated systems might defend against "cognitive attacks" – essentially, attempts to manipulate human thinking – thereby allowing decision makers to nudge cognition back onto more favorable paths.

For example, the Defense Advanced Research Projects Agency’s (DARPA) INCAS (Influence Campaign Awareness and Sensemaking) program, launched in 2020, specifically aimed to develop AI tools to identify and counter foreign influence operations that could manipulate public opinion. The program focused on creating automated systems that can detect cognitive attacks in the form of coordinated disinformation campaigns. DARPA describes it as developing "shareable and understandable evidence that facilitates decision making" about influence campaigns.

While INCAS had legitimate goals of protecting democratic processes from foreign interference, it naturally raises questions about who determines what constitutes "manipulation" versus legitimate persuasion or political discourse. This particular type of political machine must make judgments about which patterns of cognition indicate hostile foreign influence and which indicate normal political reflection.

Of course, there is the risk that these systems could be used to classify certain domestic viewpoints as illegitimate, but there is also a deeper danger. The most profound aspect of this transformation lies in the challenge posed to individual judgment. Throughout history, social control operated through abstract surveillance – whether through religious concepts of an omniscient God or nationalist ideologies that invoked the watchful eyes of "the people." In both cases, individual judgment served as the mediating force between official narratives and personal behavior.

Today's political machines fundamentally alter this arrangement. They replace abstract surveillance with actual monitoring of private cognition and behavior, reducing the role of individual judgment as a means of self-regulation. As I argue in my new book, Political Automation: An Introduction to AI in Government and Its Impact on Citizens, we therefore risk the emergence of a new kind of state power that relies less and less on citizens’ individual judgment to mediate behavior. In the long run, this will make us increasingly marginalized and excluded from decisions that determine how wealth, prestige, and power are distributed in a community.

A Third House of Government

To counter the rise of this new form of state power, I propose the creation of a "Third House" – a new legislative body specifically designed to oversee the deployment and operation of political machines. The Third House would not be a chamber like the Senate or House of Representatives, but a virtual, omnipresent vehicle active in the cloud and around the clock through which citizens can personally engage with the AI tools used by their government.

This Third House would focus on establishing consensus around the ethical parameters guiding political machines – particularly the crucial "if-then thresholds" that determine how people are categorized and treated by automated systems. For example, if certain conditions are present in a person's life, then that person belongs to category A. What constitutes these conditions, and what happens if you are, or are not, in category A? These are not technical questions to be decided by experts alone, but fundamental ethical questions that require broad democratic participation.
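To make the idea concrete, consider a minimal sketch of such an if-then threshold. Everything in it – the conditions, the cutoff values, the consequence attached to category A – is a hypothetical illustration, not drawn from any real government system:

```python
from dataclasses import dataclass

# Purely hypothetical example of an "if-then threshold" in a political machine.
# Conditions, cutoffs, and consequences are illustrative assumptions only.

@dataclass
class CitizenRecord:
    missed_payments: int      # e.g., count of missed utility or loan payments
    address_changes: int      # residential moves in the last two years
    flagged_travel: bool      # travel pattern flagged by another system

def categorize(record: CitizenRecord) -> str:
    # The "if-then threshold": if these conditions are present,
    # the person belongs to category A; otherwise category B.
    if record.missed_payments >= 3 or (record.address_changes > 4 and record.flagged_travel):
        return "A"
    return "B"

def decide_benefit_review(category: str) -> str:
    # What happens if you are, or are not, in category A:
    # category A triggers a manual review that delays the benefit.
    return "manual_review" if category == "A" else "auto_approve"

record = CitizenRecord(missed_payments=3, address_changes=1, flagged_travel=False)
category = categorize(record)
print(category, decide_benefit_review(category))  # -> A manual_review
```

The point of the sketch is that the cutoff values, and the choice of which conditions count at all, are not neutral engineering parameters; they are precisely the ethical settings a Third House would exist to deliberate.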

This proposal builds on existing initiatives like the Ada Lovelace Institute's Citizens' Biometrics Council in the UK, which brought together 60 citizens in 2020 to deliberate on the use of facial recognition and other biometric technologies. Similarly, the AI Civic Forum brings together citizens to deliberate on AI governance issues.

Taiwan's vTaiwan platform, which has been using digital tools to facilitate citizen deliberation on technology policy since 2015, offers another model. Under former Digital Minister Audrey Tang's leadership, Taiwan pioneered what they call "digital democracy," using technologies like Pol.is, a “real-time system for gathering, analyzing and understanding what large groups of people think,” to build consensus and enact legislation on complex regulatory issues.
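For readers unfamiliar with how such tools work, the general approach Pol.is describes is simple: participants vote agree, disagree, or pass on short statements; the platform groups participants by voting similarity and surfaces statements that enjoy support across all opinion groups. The sketch below illustrates that general idea with made-up data and parameters – it is not the actual Pol.is implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative sketch of a Pol.is-style analysis: cluster participants by their
# agree (+1) / disagree (-1) / pass (0) votes, then look for statements that
# win support in every cluster. Data and thresholds are invented for the example.

rng = np.random.default_rng(0)
votes = rng.choice([-1, 0, 1], size=(200, 12))   # 200 participants x 12 statements

groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(votes)

def support(block: np.ndarray) -> np.ndarray:
    # Fraction of agree votes per statement within one group of participants.
    return (block == 1).mean(axis=0)

# A statement is a consensus candidate if every opinion group supports it
# above a (hypothetical) 50% threshold.
per_group = np.stack([support(votes[groups == g]) for g in range(3)])
consensus_candidates = np.where(per_group.min(axis=0) > 0.5)[0]
print("statements with cross-group support:", consensus_candidates)
```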

Further progress toward a Third House will require a radical rethinking of government transparency. Transparency must become mandatory – citizens deserve to know when automated systems make decisions about them and how those systems work. The EU's AI Act exemplifies this approach by imposing stricter controls on high-risk AI used by governments. Innovative approaches like Finland's AI Registry, which documents government AI systems and welcomes public input, also point toward possible solutions.

Potential Counterarguments and Responses

Critics might argue that a Third House would add unnecessary bureaucracy, slowing down government operations and innovation. However, the stakes of algorithmic governance are too high to prioritize efficiency over democratic accountability. The long-term costs of unaccountable AI systems in government – in terms of civil liberties and public trust – far outweigh the short-term benefits of unchecked deployment.

Others might contend that technical complexity makes meaningful citizen participation impossible. Yet examples like Taiwan's vTaiwan platform demonstrate that with proper design, complex technical issues can be made accessible to broader public deliberation. The Third House would not require every citizen to understand the intricacies of machine learning algorithms, but rather to engage with the fundamental ethical questions these systems raise.

We can see early experiments with this approach in platforms like Decidim, developed in Barcelona and now used by cities like Helsinki, New York, and Mexico City to facilitate citizen participation in policymaking. Similarly, Madrid's Decide Madrid platform and Paris's Budget Participatif allow citizens to propose and vote on projects for their cities. At the regional level, the European Commission's Conference on the Future of Europe in 2021-2022 used digital platforms to gather input from citizens across the continent. While still limited in scope and concerned mainly with general public policy, these initiatives point toward the possibility of more direct forms of participation, even on complex technical matters.

Some may also argue that existing regulatory frameworks are sufficient. However, as historian Yuval Noah Harari warns in his October 2024 Financial Times article "Beware the AI Bureaucrats," traditional oversight mechanisms are increasingly inadequate for addressing the unique challenges posed by algorithmic governance systems. Harari emphasizes that the real danger of AI is not killer robots but the "automated plumbers of the information network" that could fundamentally reshape how societies function and potentially undermine democratic principles if left without proper oversight.

Finally, there's the question of feasibility – can such a radical institutional innovation actually be implemented? While challenging, history shows that democratic institutions evolve in response to technological change. Just as the industrial revolution spurred new forms of democratic representation, the AI revolution necessitates new democratic adaptations. It may not seem feasible now, but fast forward a few decades and it will seem inevitable in hindsight.

Digital Citizens as Our Emissaries

Despite the various citizen initiatives described above, the sheer speed and ubiquity of political machines make it impossible for citizens to engage with them directly. In Political Automation, I argue we will need digital counterparts – AI-augmented versions of ourselves – to bear the brunt of engaging with political machines, and with millions of other AI-augmented “digital citizens” doing the same.

This concept is already beginning to materialize through the rise of AI agents. These agents can be fine-tuned on personal data to accurately represent individual preferences and values. Zoom CEO Eric Yuan recently predicted that people will soon have their own "personal AI digital twin to attend meetings and write emails for them," suggesting how close we are to this reality.

Microsoft's recent introduction of Copilot personal agents across their product suite and Meta's development of AI assistants for their social platforms further demonstrate how digital representatives are becoming normalized. The Stanford Human-Centered Artificial Intelligence (HAI) institute's 2024 AI Index Report documents the rapid proliferation of these systems across all sectors of society. Politics will not remain immune. 

The humanitarian sector represents fertile ground for digital citizenship applications. Today, organizations typically operate on assumptions about aid recipients' needs, often missing crucial insights about what beneficiaries truly value. Digital citizens representing diverse populations could bridge this gap and help identify genuine beneficiary needs, creating mutual benefits for both aid recipients and the government, international, and nonprofit organizations serving them. This approach becomes especially valuable in contexts where AI-driven political machines already influence resource allocation and program design.

Of course, real citizens must maintain ownership and command over their digital counterparts. The relationship must be one where the real citizen "drives" their digital counterpart – much like the owner of a self-driving car can set the direction or take control when necessary.
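What that "driving" relationship could look like is easiest to see in a toy sketch. The code below is purely illustrative – the class, fields, thresholds, and decision rules are assumptions standing in for whatever fine-tuned model a real digital citizen would use – but it captures two features worth building in: the owner can always take the wheel, and the emissary refers decisions back to its owner rather than guessing when its confidence is low:

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

# Hypothetical sketch of a "digital citizen" emissary that responds to proposals
# from political machines on its owner's behalf, while letting the owner override
# or take control at any time. Names, fields, and rules are illustrative only.

@dataclass
class DigitalCitizen:
    owner: str
    # Preference weights the emissary was "fine-tuned" on (stand-in for a model).
    preferences: dict[str, float] = field(default_factory=dict)
    # Optional hook the owner can set to take direct control of a decision.
    owner_override: Optional[Callable[[str], Optional[str]]] = None

    def respond(self, proposal: str, tags: list[str]) -> str:
        # 1. The owner can always take control, like grabbing the wheel.
        if self.owner_override:
            decision = self.owner_override(proposal)
            if decision is not None:
                return decision
        # 2. Otherwise the emissary scores the proposal against stored preferences.
        score = sum(self.preferences.get(tag, 0.0) for tag in tags)
        if abs(score) < 0.5:          # low confidence: escalate to the human
            return "refer_to_owner"
        return "support" if score > 0 else "oppose"

me = DigitalCitizen(owner="citizen-42",
                    preferences={"privacy": 0.9, "surveillance": -0.8, "transit": 0.2})
print(me.respond("Expand facial recognition in train stations",
                 tags=["surveillance", "transit"]))   # -> oppose
```

The design choice that matters here is not the scoring rule but the escalation path: any scheme in which the digital citizen cannot be overridden, or never hands a decision back, would invert the ownership relationship described above.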

However, to ensure this ownership, two new fundamental rights are needed: the right of access and the right to freedom of thought. The right of access would guarantee citizens not only immediate and comprehensive knowledge of what data the government and its private partners have collected about them, but also of what data the digital citizen is using to animate its decision-making. This builds on, but goes significantly beyond, existing frameworks like the EU's General Data Protection Regulation or California's Consumer Privacy Act.

Some promising initiatives in this direction include the Finnish government's Aurora AI program, which aims to provide citizens with transparent access to and control over their data across government services. Similarly, Estonia's X-Road system provides citizens with visibility into which agencies have accessed their data and for what purpose.

Even more fundamental is the right to freedom of thought. As UN Special Rapporteur Ahmed Shaheed noted in his 2021 statement, freedom of thought is "yet a largely unexplored right" that is "foundational for many other rights and it can be neither restricted nor derogated from, even during public emergencies." The UN Human Rights Council's 2023 resolution on "Neurotechnology and Human Rights" further highlighted the urgent need to protect cognitive liberty as brain-computer interfaces advance.

Organizations like the Neurorights Foundation, established by Columbia University neuroscientist Rafael Yuste, are already advocating for legal protections against technologies that could manipulate or surveil neural activity. Chile became the first country to amend its constitution to protect "neurorights" in 2021, and Spain introduced a Digital Rights Charter that includes protections for cognitive liberty.

The challenge, of course, is ensuring this direct participation preserves the authenticity of individual thinking, especially once AI technologies are thrown in the mix to extend citizens’ reach through their digital emissaries. In the book I argue this is possible. As the Dutch designer Fabian Hijlkema explains, the more we trust machines to make everyday decisions, the more we will see a separation between the "act" and "art" of politics. With machines handling the heavy lifting around details, humans are left to ponder broader philosophical questions of a higher order.

Conclusion: The Path Forward

If we do nothing, the threat of what John Danaher calls "algocracy" – government by algorithms – is real. Yet there are reasons for cautious optimism. The UN Secretary-General's AI Advisory Body, established in 2023, represents a global effort to develop governance frameworks for AI. The Advisory Body's report emphasizes the importance of public participation in AI governance. Other important initiatives include the OECD's AI Policy Observatory, which tracks AI policies across 60 countries, and the Global Partnership on AI (GPAI), which brings together 29 countries and the EU to guide the responsible development of AI – though all of these initiatives stop short of proposing the kind of radical institutional innovation that may be necessary.

We cannot escape the role of political machines in our societies; they are far too convenient to give up. But we can choose to harness them as a community. If the future of the polity lies in the power of these machines to think well, then it stands to reason that everyone in that polity should participate in making sure that thinking well is indeed what they are doing.

Creating a Third House, or accustoming people to use their digital citizens, will not be easy. It may take decades, as did the emergence of other democratic institutions throughout history. But the alternative – allowing political machines to operate without meaningful public oversight – will undermine the very foundations of democratic citizenship by slowly eroding the role of our individual judgment in managing the public sphere.

The conversation about how to govern AI in public life is just beginning. My book seeks to provide a bold new framework for ensuring this conversation leads to institutions that enhance rather than diminish human judgment in the age of thinking machines.

Eduardo Albrecht is an Associate Professor at Mercy University, Adjunct Associate Professor at Columbia University and City University of New York, Senior Fellow at United Nations University Centre for Policy Research, and author of Political Automation: An Introduction to AI in Government and Its Impact on Citizens (Oxford University Press, 2025).

Photo Credit: Nicholas Albrecht

 

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.