
Research Radar: Co-Designing AI Systems 

Imagine Clara, a working mother in San Francisco, faced with deciphering her six-year-old daughter María's Individualized Education Program (IEP). It's fifty pages of educational jargon in English—a language she's not comfortable with. The document that should empower her to advocate for her daughter with dyslexia instead becomes an insurmountable barrier.

Now imagine a different approach: an AI tool designed with Clara and hundreds of families like hers, not merely for them. That's precisely what the Burnes Center's AIEP project is doing—engaging parent leaders to actively shape an AI platform that translates IEPs, explains jargon, and provides advocacy guidance.

This real-world initiative illustrates the kind of participatory approach that researchers Sachit Mahajan and Dirk Helbing from ETH Zurich theorize in their new framework.

The paper: Mahajan, Sachit & Helbing, Dirk. "Co-Designing AI Systems with Value-Sensitive Citizen Science." ETH Zurich, April 2025.

Key contribution: A systematic framework called "Value-Sensitive Citizen Science" (VSCS) that combines Value Sensitive Design principles with citizen science methods to foster meaningful public participation in AI development.

Core components: The framework proposes three interconnected phases:

  1. Value Discovery: Using participatory methods to identify and document community values
  2. Technical Translation: Converting these values into concrete AI system features
  3. Governance Integration: Establishing mechanisms for ongoing oversight and accountability

To structure participation, they introduce a six-stage process: Awareness → Reflection → Translation → Negotiation → Operationalization → Evolution.
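
The paper is a theoretical framework and does not prescribe an implementation, but the structure is concrete enough to sketch. Below is a minimal, hypothetical Python illustration of how a project team might record a community value surfaced during Value Discovery and track it through the six stages, attaching Technical Translation requirements and Governance Integration notes along the way. All class, field, and function names here are my own assumptions, not part of the paper or the AIEP project.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Stage(Enum):
    """The six participation stages proposed in the VSCS framework."""
    AWARENESS = auto()
    REFLECTION = auto()
    TRANSLATION = auto()
    NEGOTIATION = auto()
    OPERATIONALIZATION = auto()
    EVOLUTION = auto()


@dataclass
class CommunityValue:
    """A value surfaced during Value Discovery, tracked through the stages."""
    name: str                     # e.g. "language accessibility"
    source: str                   # where it was raised: workshop, survey, interview
    stage: Stage = Stage.AWARENESS
    design_requirements: list[str] = field(default_factory=list)  # Technical Translation
    oversight_notes: list[str] = field(default_factory=list)      # Governance Integration

    def advance(self, note: str = "") -> None:
        """Move the value to the next stage, optionally logging an oversight note."""
        stages = list(Stage)
        i = stages.index(self.stage)
        if i < len(stages) - 1:
            self.stage = stages[i + 1]
        if note:
            self.oversight_notes.append(note)


# Example: a value raised by parent leaders in an AIEP-style context (illustrative only).
value = CommunityValue(
    name="plain-language Spanish explanations of IEP terms",
    source="parent-leader workshop",
)
value.design_requirements.append("glossary of IEP jargon reviewed by parent advisors")
value.advance(note="reviewed at monthly community oversight meeting")
print(value.stage)  # Stage.REFLECTION
```

The point of the sketch is only that the framework's stages and governance artifacts can be represented explicitly, so that who decided what, and when, is auditable rather than implicit.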

Comment: What's refreshing about this paper is its attempt to move beyond identifying problems to proposing specific processes for addressing them. The authors explicitly acknowledge the gap between token participation and meaningful co-creation in AI development. The paper also tackles power dynamics head-on, proposing structural mechanisms like community veto rights that could shift authority toward communities.

However, as a theoretical framework, it intentionally lacks concrete implementation details and doesn't adequately resolve the tension between technical expertise and meaningful participation. There's a persistent "idealism vs. reality gap": the framework assumes an institutional willingness to share power that rarely exists in commercial contexts.

The economic viability of such approaches in profit-driven environments remains questionable; it is no accident that AIEP is philanthropically funded. Building community-centric AI requires developers committed to embedding citizen science principles from the start.

We need to advance the conversation about community participation and co-design within existing economic realities, where development cycles are rapid and profit incentives override community concerns.

That said, as AI increasingly shapes critical social systems, frameworks like VSCS become essential. The AIEP project demonstrates that community-centered AI development is possible, even at scale. The question remains: will mainstream AI developers adopt these approaches?


Who gets to decide how these technologies are designed and deployed? When we center the communities most affected by these decisions, we build not just better technology, but a more just society.
