News That Caught Our Eye #55

Published by Dane Gambrell & Angelique Casem on April 23, 2025

In the news this week: Anirudh Dinesh documents how India is quietly rewriting the playbook for public service by training millions of officials in AI, policy, and more, with the goal of delivering more citizen-centric and future-ready public services. A collection of blog posts from Reboot Democracy explores Brazil's pioneering democratic innovations and how AI could be used to further improve, expand, and deepen engagement with citizens. The United Arab Emirates plans to use AI to aid in writing, amending, and tracking the impact of legislation; officials say the technology could make the lawmaking process faster and more precise, while skeptics question whether existing tools can deliver on the UAE’s ambitions. Read more in this week's AI News That Caught Our Eye.


In the news this week

AI for Governance

UAE set to use AI to write laws in world first

Chloe Cornish on April 20, 2025 in Financial Times

“The United Arab Emirates aims to use artificial intelligence to help write new legislation and review and amend existing laws, in the Gulf state’s most radical attempt to harness a technology into which it has poured billions. Other governments are trying to use the technology to become more efficient, from summarising bills to improving public service delivery, but not to actively suggest changes to current laws by crunching government and legal data. ‘This new legislative system, powered by artificial intelligence, will change how we create laws, making the process faster and more precise,’ said Sheikh Mohammad bin Rashid Al Maktoum, the Dubai ruler and UAE vice-president, quoted by state media. The UAE plans to use AI to track how laws affect the country’s population and economy by creating a massive database of federal and local laws, together with public sector data such as court judgments and government services. The AI would ‘regularly suggest updates to our legislation’, Sheikh Mohammad said, according to state media. The government expects AI to speed up lawmaking by 70 per cent, according to the cabinet meeting readout. But researchers noted it could face many challenges and pitfalls. Those range from the AI becoming inscrutable to its users, to biases caused by its training data and questions over whether the technology even interprets laws in the same way humans do. It is unclear which AI system the government will use, and experts said it may need to combine more than one.”
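The FT piece describes an architecture rather than an implementation: a database of federal and local laws joined with court judgments and public-sector data, with AI proposing updates. As a purely illustrative, hypothetical sketch (not the UAE's system), the Python snippet below shows one small piece of such a pipeline: linking judgments to the statutes they appear to interpret via TF-IDF similarity, so that frequently litigated provisions can be flagged for human drafters to review. All statute texts, judgment summaries, and thresholds here are invented for illustration.

```python
# Hypothetical sketch: flag statutes that attract many related court judgments
# so human drafters can queue them for review. Data and threshold are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

statutes = {
    "Art. 12 Data Protection": "Controllers must obtain consent before processing personal data.",
    "Art. 45 Road Traffic": "Autonomous vehicles require a licensed safety operator on board.",
}
judgments = [
    "Court found the consent requirement for processing personal data ambiguous for minors.",
    "Dispute over whether a remote supervisor counts as a licensed safety operator on board.",
]

# Vectorize statutes and judgments in one shared TF-IDF space.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(list(statutes.values()) + judgments)
statute_vecs, judgment_vecs = matrix[: len(statutes)], matrix[len(statutes):]

# Count judgments that closely match each statute; many matches may signal a
# provision worth flagging for legislative review.
similarity = cosine_similarity(judgment_vecs, statute_vecs)
for i, title in enumerate(statutes):
    hits = int((similarity[:, i] > 0.2).sum())  # arbitrary illustrative threshold
    print(f"{title}: {hits} potentially related judgment(s)")
```

Triage of this kind is the modest end of what the article describes; actually drafting amendments is where the researchers' concerns about inscrutability, training-data bias, and machine interpretation of law become most acute.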

Read article

AI for Governance

Artificial Intelligence, the Right to a Fair Trial & the "AI-Equipped-Judges" of the Future

Vytautas Mizaras et al. on April 17, 2025 in Cambridge Handbook of AI and Technologies in Courts

“This article challenges the prevailing concern that AI might replace judges by presenting a more balanced vision of ‘AI-equipped judges.’ Rather than displacing the pivotal role of human adjudication, AI tools should be used to enhance and augment judicial decision-making by providing powerful tools that complement, rather than supersede, the expertise and discretion of the bench. By situating technology alongside human judgment, we introduce a new concept - ‘judge-in-the-loop,’ - wherein AI systems operate as indispensable allies, helping judges manage voluminous caseloads, navigate complex legal questions, and ensure consistent, evidence-based rulings. In this article, we explore procedural, behavioral and ethical implications of integrating AI into the courtroom. We examine potential transformations in evidence collection, case management, and procedural fairness, emphasizing the necessity of transparent algorithms and safeguards to protect due process rights. Equally important is maintaining public trust: even the most advanced AI systems require ongoing oversight and human interpretive skills to guarantee that litigants perceive both the technology and the judge’s role as fair and reliable. Our final section looks ahead, exploring how AI can help judges exceed their traditional capabilities and achieve performance standards previously regarded as impossible.”

Read article

AI for Governance

State Bar of California admits it used AI to develop exam questions, triggering new furor

Jenny Jarvie on April 23, 2025 in Los Angeles Times

“Nearly two months after hundreds of prospective California lawyers complained that their bar exams were plagued with technical problems and irregularities, the state’s legal licensing body has caused fresh outrage by admitting that some multiple-choice questions were developed with the aid of artificial intelligence. The State Bar of California said in a news release Monday that it will ask the California Supreme Court to adjust test scores for those who took its February bar exam. But it declined to acknowledge significant problems with its multiple-choice questions — even as it revealed that a subset of questions were recycled from a first-year law student exam, while others were developed with the assistance of AI by ACS Ventures, the State Bar’s independent psychometrician. ‘The debacle that was the February 2025 bar exam is worse than we imagined’ said Mary Basick, assistant dean of academic skills at UC Irvine Law School. ‘I’m almost speechless. Having the questions drafted by non-lawyers using artificial intelligence is just unbelievable.’”

Read article

AI for Governance

“If Everyone Understands AI, They’ll Find a Way to Use It”: How India Is Building a Future-Ready Civil Service

Anirudh Dinesh on April 20, 2025 in Reboot Democracy

The Government of India's learning platform, iGOT, has scaled up dramatically, reaching 9 million users, 32 million course enrollments, and a 70% completion rate. Adil Zainulbhai, chair of India's Capacity Building Commission, leads this initiative to transform the civil service from a rules-based British model into a citizen-centric workforce. The program offers more than 2,000 free courses on everything from AI to yoga, addressing competency gaps across all levels of government. Its success stems from short, hour-long modules based on employee-requested topics, strong support from Prime Minister Modi, and strategic buzz-building through targeted pilots. With 1.3 million civil servants already taking AI courses, the goal is universal AI literacy in the public service by 2025, enabling workers across sectors to apply these technologies to improve citizen services.

Read article

AI and Public Engagement

FROM CITIZEN TO SENATOR: Artificial Intelligence and the Reinvention of Citizen Lawmaking in Brazil

Beth Simone Noveck, Alisson Bruno Dias De Queiroz, José Luis Martí, Cristiano Ferri, Luis Kimaid, and Dane Gambrell on April 23, 2025 in Reboot Democracy

A recent Reboot Democracy series explored how Brazil's Federal Senate is using AI and citizen engagement to improve the lawmaking process, as well as the opportunities and limitations of these approaches. We've compiled these posts into a collection of essays exploring Brazil's pioneering democratic innovations and how AI could be used to further improve, expand, and deepen engagement with citizens. The collection features three essays: the first describes the Senate's citizen participation mechanisms and how AI could improve engagement; the second, by the former director of the Hacker Lab in the Chamber of Deputies, offers practical insights on AI's potential for deliberative democracy; and the third contextualizes Brazil's experiments within essential pillars of democratic innovation.

Read article

AI and Problem Solving

AI adoption in crowdsourcing

John Michael Maxel Okoche, Marcia Mkansi, Godfrey Mugurusi, and Wellington Chakuzira on April 11, 2025 in ACM Digital Library

“Despite significant technology advances especially in AI, crowdsourcing platforms still struggle with issues such as data overload and data quality problems, which hinder their full potential. This study addresses a critical gap in the literature on how the integration of AI technologies in crowdsourcing could help overcome some of these challenges. Using a systematic literature review of 77 journal papers, we identify the key limitations of current crowdsourcing platforms that include issues of quality control, scalability, bias, and privacy. Our research highlights how different forms of AI including machine learning (ML), deep learning (DL), natural language processing (NLP), automatic speech recognition (ASR), and natural language generation techniques (NLG) can address the challenges most crowdsourcing platforms face. This paper offers knowledge to support the integration of AI first by identifying types of crowdsourcing applications, their challenges and the solutions AI offers for improvement of crowdsourcing.”
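To make the "AI for quality control" idea concrete, here is a minimal, hypothetical sketch (not taken from the paper) of how ML and NLP are commonly applied to crowdsourced submissions: a small text classifier scores incoming items for low quality so human reviewers can focus on borderline cases. The training examples and labels are invented for illustration.

```python
# Hypothetical sketch: train a tiny text classifier to flag low-quality
# crowdsourced submissions. 1 = low quality / spam, 0 = acceptable.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

submissions = [
    ("The pothole on Elm Street is about 30 cm deep near the bus stop.", 0),
    ("asdf asdf asdf", 1),
    ("Streetlight at 5th and Main has been out for two weeks.", 0),
    ("BUY CHEAP WATCHES click here", 1),
]
texts, labels = zip(*submissions)

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(list(texts), list(labels))

# Score a new batch and route likely low-quality items to manual review.
new_batch = ["Broken bench in Riverside Park", "click here click here free offer"]
for text, prob in zip(new_batch, model.predict_proba(new_batch)[:, 1]):
    print(f"{prob:.2f} low-quality risk -> {text}")
```

In practice, a real platform would train on far more data and combine this kind of scoring with the other techniques the paper surveys (deep learning, speech recognition, and language generation), but the triage pattern is the same.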

Read article

AI and Problem Solving

New Commons Challenge

Open Data Policy Lab on April 23, 2025 in Open Data Policy Lab

The Open Data Policy Lab has launched The New Commons Challenge, inviting innovators worldwide to propose data commons for generative AI that serve the public interest by enhancing data quality and diversity. With submissions open from April 14 to June 2, 2025, the challenge seeks projects that improve localized decision-making or strengthen humanitarian disaster response capabilities. Two winning proposals will each receive $100,000 in funding plus mentorship, technical support, and access to expert networks.

Read article

AI and Labor

Artificial Intelligence and the Future of Work

National Academies of Sciences, Engineering, and Medicine et al. on April 23, 2025 in National Academies Press

“Advances in artificial intelligence promise to improve productivity significantly, but there are many questions about how AI could affect jobs and workers. Recent technical innovations have driven the rapid development of generative AI systems, which produce text, images, or other content based on user requests - advances which have the potential to complement or replace human labor in specific tasks, and to reshape demand for certain types of expertise in the labor market. Artificial Intelligence and the Future of Work evaluates recent advances in AI technology and their implications for economic productivity, the workforce, and education in the United States. The report notes that AI is a tool with the potential to enhance human labor and create new forms of valuable work - but this is not an inevitable outcome. Tracking progress in AI and its impacts on the workforce will be critical to helping inform and equip workers and policymakers to flexibly respond to AI developments.”

Read article

AI and Labor

Whispering Progress: Fear of Automation and Voluntary Disclosure

Jun Oh and Guoman She on April 22, 2025 in Hong Kong University Business School

“The paper provides evidence that firms tailor their disclosure policies to achieve the objectives of task automation and workforce stability. Using local cable news transcripts to measure the fear of job displacement due to automation, we find that firms reduce public disclosures about their automation strategies when automation fear intensifies. The diminished disclosure is more pronounced in industries with occupations more susceptible to displacement and when unfavorable employee reactions are more likely. For identification, we exploit two quasi-natural experiments—layoffs by local high-tech firms and the introduction of ChatGPT. We also find suggestive evidence that firms increase private communication with investors to compensate for the reduction in public information provision. Overall, the findings shed light on the trade-offs between maintaining transparency and mitigating adverse employee responses in the era of rapid advancement in automation technologies.”

Read article

AI and Public Safety

AI is Making Cybercrime Easier, Faster, and Scarier Than Ever.

Grant Harvey on April 22, 2025 in The Neuron

This article recaps three recent reports about how scammers are using AI tools to create more sophisticated and convincing fraud: AI-powered personalized phishing schemes, fake businesses with deepfake executives, and "package hallucinations" targeting developers with non-existent software packages that criminals later register with malicious code. The threat is growing because AI tools are increasingly accessible and criminals innovate quickly. For individuals, the best protection remains vigilance: verify everything, guard personal data, recognize red flags, enable security features, and report suspicious activity.
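On the "package hallucination" risk specifically, "verify everything" can be partly automated. The hypothetical sketch below (an illustration, not a complete defense) checks that a dependency suggested by an AI assistant actually exists on PyPI and reports how long ago its first release was uploaded, since a brand-new package with a plausible-sounding name is a red flag. It uses PyPI's public JSON API; the package names at the bottom are placeholders.

```python
# Hypothetical precaution against hallucinated dependencies: check PyPI before
# installing an AI-suggested package. Illustrative only, not a complete defense.
from datetime import datetime, timezone

import requests  # assumes the third-party 'requests' library is installed


def check_pypi_package(name: str) -> None:
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code == 404:
        print(f"{name}: not on PyPI -- possible hallucinated package name")
        return
    resp.raise_for_status()
    releases = resp.json().get("releases", {})
    upload_times = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in releases.values() for f in files
    ]
    if upload_times:
        age_days = (datetime.now(timezone.utc) - min(upload_times)).days
        print(f"{name}: exists, first release {age_days} days ago -- review before installing")
    else:
        print(f"{name}: exists but has no uploaded releases -- treat with caution")


check_pypi_package("requests")                 # long-established package
check_pypi_package("totally-made-up-pkg-xyz")  # placeholder for a hallucinated name
```

A check like this only screens for names that do not exist or were registered very recently; it does not vet the package's actual contents, so code review and pinned, audited dependencies still matter.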

Read article

AI and Education

Rethinking School in the Age of AI

Tristan Harris, Daniel Barcay, Rebecca Winthrop, and Maryanne Wolf on April 21, 2025 in Center for Humane Technology

In this "Your Undivided Attention" podcast episode, hosts Tristan Harris and Daniel Barcay speak with experts Maryanne Wolf and Rebecca Winthrop about AI's disruption of education. They explore how tools like ChatGPT undermine traditional assessment by instantly solving assignments, raising fundamental questions about education's purpose. The experts distinguish between harmful "cognitive offloading" and necessary effort-based learning that builds critical thinking. They identify four student engagement modes—passenger, achiever, resistor, and explorer—with explorer mode being ideal but rare. The discussion emphasizes that technology must be thoughtfully integrated rather than blindly adopted. While AI offers benefits like personalized tutoring, the experts warn against diminishing crucial developmental processes, especially in younger learners.

Read article

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.