Accountable Algorithms: Blending Individual Rights and Collective Oversight in Government AI

In Plainfield, New Jersey, a predictive policing system generated more than 23,000 crime forecasts over the course of a year. According to The Markup, fewer than 100 of those PredPol predictions matched crimes that were actually reported. That’s an accuracy rate under half a percent. In practice, it meant sending officers into neighborhoods, potentially with guns drawn, on the basis of guesses that were wrong more than 99 times out of 100.
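A quick check of the arithmetic behind the “under half a percent” figure, using the numbers reported above:

\[
\frac{100}{23{,}000} \approx 0.0043 = 0.43\% < 0.5\%
\]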

Rules on transparency and redress that blend individual and collective action complement earlier guidance on classifying risk.

Across the Atlantic, the United Kingdom’s welfare ministry deployed an automated system to flag housing benefit applications for possible fraud or error. Between 2021 and 2024, it marked more than 200,000 claims as high-risk. Subsequent reviews showed that roughly two-thirds of those claims were in fact legitimate, leaving tens of thousands of households subjected to unnecessary investigations and, in some cases, delays in receiving essential benefits while they proved their eligibility.

Some governments have tried to meet these risks by drawing bright lines around the riskiest uses. The EU’s data-protection law gives people the right not to be subject to fully automated decisions about their rights or benefits, and countries like Kenya, South Africa, and Italy also bar public agencies from relying on automation alone for consequential decisions. The EU’s AI Act adds another layer by classifying systems according to levels of risk. 

However, most uses of AI sit in a messy middle: benign and extraordinarily useful pattern recognition systems that make guesses based on past data. New Jersey and other states, for example, have been modernizing call centers with the help of AI. But in New Jersey, AI does not answer the phone; people do. People decide eligibility. Instead, call-center staff use AI to search for information faster. The result has been a dramatic drop in wait times, from over 30 minutes in some agencies to under two minutes.
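To make that division of labor concrete, here is a minimal sketch in Python of what “AI-assisted search, human decision” can look like. The scoring logic, function names, and sample knowledge base are illustrative assumptions, not a description of New Jersey’s actual system.

```python
# Hypothetical illustration only: AI-assisted search for a call-center agent.
# The tool ranks likely-relevant articles; the human agent reads them and makes
# the actual eligibility decision. Names and scoring are illustrative assumptions.

def relevance(query: str, document: str) -> int:
    """Crude relevance score: number of query words that also appear in the document."""
    return len(set(query.lower().split()) & set(document.lower().split()))

def suggest_articles(query: str, knowledge_base: dict[str, str], top_k: int = 3) -> list[str]:
    """Return the titles of the top_k most relevant articles for the agent to consult."""
    ranked = sorted(knowledge_base,
                    key=lambda title: relevance(query, knowledge_base[title]),
                    reverse=True)
    return ranked[:top_k]

if __name__ == "__main__":
    knowledge_base = {
        "Appealing a denied claim": "how to appeal a denied claim deadlines hearing request",
        "Weekly certification": "weekly certification wages base period eligibility",
        "Payment setup": "bank account routing number direct deposit payment",
    }
    question = "How do I appeal a denied claim?"
    # The system only surfaces candidate material; a person answers the caller
    # and decides what applies to their case.
    for title in suggest_articles(question, knowledge_base):
        print(title)
```

The point of the sketch is the boundary: the software never outputs a decision, only reading material for the person who does.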

From hurricane prediction to mail sorting, we are using AI in ways that improve governance. Yet we require mechanisms to ensure that these uses remain both fair and effective. 

Effective guidance requires practical rules that pair individual rights with collective oversight. 

Individuals must be able to know when automation affects them and correct mistakes. But, as with climate change or public health, we need to go beyond shifting responsibility for oversight and accountability to individuals. We also need mechanisms for collective scrutiny that allow journalists, researchers, civil society, and oversight bodies to evaluate whether automated tools are fair, accurate, and lawful. Blending individual and collective action may offer a way to ensure less burdensome, more flexible and durable oversight.

Transparency: Making Automated Systems Visible to All of Us

Automated systems, whether for decisionmaking or information processing, introduce new points of opacity. The government has long operated “behind closed doors,” leading to the introduction of sunshine requirements in the 1970s. Today, the challenge is different. Automated processes can be invisible not only to the people they affect, but also to oversight bodies, the wider public, and even to the government employees who use them without fully understanding how they work.

A new set of approaches aims to make these systems visible, if not fully explainable. 

But these transparency requirements fall along a spectrum. Some give individuals the right to know when an algorithm has influenced a decision about them. Others take a more collective approach, requiring agencies to catalog their automated systems in public registers.

France requires agencies to notify individuals when an algorithm has helped produce an administrative decision about them and, upon request, explain the system’s main rules and data. The 2016 Law for a Digital Republic states that “an individual decision made on the basis of algorithmic processing must include an explicit statement informing the person concerned.”

In 2025, Spain’s Supreme Court took a major step beyond individual notice by recognizing algorithmic transparency as a constitutional principle grounded in the public’s right of access to government information. In its landmark ruling on the BOSCO benefits-eligibility algorithm, the Court held that civil society organizations — not just the individuals affected — must be able to examine how government algorithms function. To make that oversight meaningful, the Court required the government to disclose the system’s core logic, technical documentation, test results, and, where necessary, the underlying source code under confidentiality safeguards. By framing algorithmic transparency as essential to democratic accountability and “digital democracy,” the Court shifted the focus from case-by-case notification to collective oversight, enabling journalists, researchers, and civic groups to scrutinize automated systems at a structural level.

The Netherlands and the City of Helsinki have taken a related path, inviting agencies to log their automated systems in public algorithm registers that describe what a tool does, what data it uses, and what risks it poses. Unlike the EU AI Act, which mandates documentation only for high-risk systems, these registers provide a more comprehensive public inventory of all algorithmic uses across government for the benefit of oversight bodies, journalists, and residents.

Canada and Chile are moving in a similar direction: Canada’s Directive on Automated Decision-Making requires agencies to publish algorithmic impact assessments and detailed system information on the Open Government Portal, while Chile’s new data-rights framework obliges agencies to disclose the automated tools they use as part of ordinary administrative-transparency requirements. Together, these approaches aim to make automated systems visible at scale, enabling oversight bodies, journalists, and residents to understand where algorithms are operating across government.

New York State recently moved toward an algorithm register. Legislation enacted in early 2025 requires state agencies to disclose their use of automated tools in employment-related decisions and directs the state’s Office of Information Technology Services to maintain a statewide inventory documenting those uses. It is one of the first binding requirements in the United States for agencies to catalogue their algorithmic systems in a centralized public resource.
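As a rough illustration of what a single register entry might contain, here is a minimal sketch in Python, assuming the kinds of fields described above (what a tool does, what data it uses, what risks it poses). The schema, field names, and example values are hypothetical, not drawn from the Dutch, Helsinki, Canadian, Chilean, or New York inventories.

```python
# Hypothetical sketch of one public algorithm-register entry.
# Field names and example values are illustrative assumptions, not the schema
# of any real register.

from dataclasses import dataclass

@dataclass
class RegisterEntry:
    agency: str                  # which public body operates the system
    system_name: str             # plain-language name of the tool
    purpose: str                 # what the tool does and why it is used
    decision_role: str           # e.g. "decision support" vs. "fully automated"
    data_sources: list[str]      # categories of data the tool draws on
    known_risks: list[str]       # documented risks, limitations, mitigations
    impact_assessment_url: str   # link to a published impact assessment, if any
    public_contact: str          # where residents can ask questions or complain

example = RegisterEntry(
    agency="Department of Labor (hypothetical)",
    system_name="Call-Center Search Assistant",
    purpose="Ranks internal policy articles so staff can answer callers faster.",
    decision_role="decision support; staff make all eligibility determinations",
    data_sources=["internal policy manuals", "anonymized call topics"],
    known_risks=["outdated guidance may rank highly", "uneven coverage across programs"],
    impact_assessment_url="https://example.gov/assessments/call-assistant",  # placeholder
    public_contact="algorithm-register@example.gov",  # placeholder
)
```

The format matters less than the coverage: what makes a register useful for oversight is that every system, not just the high-risk ones, appears with its purpose, data, and risks stated in plain language.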

Redress: Giving Individuals and Institutions Power to Challenge Automated Decisions

Transparency creates visibility, but redress determines power. In some jurisdictions, only the individual affected can challenge an automated decision, placing the burden on the person who experienced the error. In others, a broader set of actors — including civil society organizations and oversight bodies — can challenge the system itself. 

For example, the federal Transparent Automated Governance (TAG) Act, introduced in Congress in 2023, would require agencies to disclose when automated systems influence “critical decisions” and to provide individuals with “timely, effective, and easy-to-access” ways to appeal those decisions. This approach locates accountability squarely with the individual who has already been affected by an automated decision. 

Colorado’s new AI Act allows individuals to appeal decisions made by “high-risk AI systems,” defined as those that shape or determine consequential decisions in areas like housing, employment, credit, education, or access to public services. The law states that individuals must be given “an opportunity to appeal an adverse consequential decision concerning the consumer arising from the deployment of a high-risk artificial intelligence system, whose appeal must, if technically feasible, allow for human review.”

That individual-appeal model is important but incomplete. By contrast, New Jersey’s recent guidance on algorithmic discrimination explicitly allows organizations to bring complaints on behalf of affected groups, recognizing that discrimination is often systemic and not always visible one case at a time. Arkansas considered a similar approach in the healthcare sector, introducing a bill that would have required public reporting on algorithm performance, though it ultimately did not advance.

The question is not whether individuals should have rights of notice and appeal (they should), but how to complement those rights with mechanisms that allow for system-level oversight, collective challenge, and proactive review of automated systems.

Other countries are beginning to recognize the need for broader and more practical forms of oversight such as public algorithm registers, organizational standing, or proactive audit authority so that accountability does not depend solely on whether an individual happens to spot an error in their own file.

A Path Forward: Turning Principles into Practical Rules

The challenge ahead is not identifying the problems with automated decisionmaking. We already know them: tools that exaggerate past bias, systems that treat thousands of innocent people as suspects, and models that operate in ways even their users don’t fully understand. The real task is translating global lessons into guidance that agencies can actually follow, guidance that promotes positive outcomes while limiting risks.

Where rights, benefits, or legal status are at stake, people must remain accountable for the final decision. The EU, Kenya, South Africa, and Italy offer clear models for the language we need. Under GDPR, individuals in the EU “have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” Kenya and South Africa have incorporated the EU’s language into their own data protection acts. For public administrators, Italy’s law states that AI may be used “to support the administrative activity, respecting the autonomy and decision-making power of the individual, who remains solely responsible for the measures and procedures in which artificial intelligence has been used.”

But these prohibitions are only a starting point. Most automation in government occupies a vast middle space between the trivial and the high-stakes. To govern that space, policymakers must give agencies something more than aspirational statements about “trustworthy AI.” They need rules that translate values into operations: when to disclose, when to document, when to allow automation, when to forbid it, and when to invite outside scrutiny. Agencies need clear definitions and approaches that:

  1. Adopt Transparent Algorithmic Registers - Couple individual notices for case-level decisions with public algorithm registers for systems that shape decisions program-wide.

  2. Couple Individual with Collective Action - Expand redress and accountability so that errors can be corrected one case at a time, and systemic failures can be identified across cases. As Spain’s courts have recognized, the legitimacy of algorithmic decisionmaking depends not just on the rights of the individual, but on the public’s ability to understand, scrutinize, and contest the systems that govern their lives.

  3. Upskill Public Servants - Training is essential to create critical users and purchasers inside and outside of government who can evaluate the effectiveness as well as the risk of AI tools on an ongoing basis. What starts out as a benign tool with a “human in the loop” might become a de facto automated system when people stop paying attention. 

Governments everywhere are under pressure to adopt automated systems that promise efficiency, consistency, and better service delivery. But without practical rules for how these tools should be built, bought, and used, they also risk amplifying old biases, generating new forms of error, and eroding trust. As the global examples show, bright-line prohibitions on fully automated decisions are necessary, but far from sufficient. Most real-world uses of AI fall in the broad middle space where systems assist humans rather than replace them and where the risks are harder to define, but no less important to govern.

Effective oversight requires a blend of individual rights and collective mechanisms. People need to know when automation affects them and have simple ways to correct mistakes in their own cases. But accountability also needs to operate above the case level, through public algorithm registers, organizational standing, and proactive review that allow civil society, journalists, and oversight bodies as well as individuals to examine whole systems rather than isolated errors, offering a more systemic, democratic model of algorithmic accountability.
