AI and Elections
The Role of GenAI During the 'Super Election Year' 2024 - TUM Think Tank at the Munich School of Politics and Public Policy, Dr. Amélie Hennemann-Heldt, February 19, 2025
“A new report by Dr. Amélie Hennemann-Heldt examines the nuanced impact of generative AI in elections, highlighting both risks and opportunities. While concerns over AI-driven interference were high in 2024, real-world events suggest a more balanced reality. GenAI can amplify disinformation but also enhance political engagement and accessibility. Regulatory gaps remain, focusing more on risks than ethical applications…. Industry collaboration and clear standards can be key to addressing immediate challenges while moderation models can aid in curbing AI-driven deception while preserving (online) space for political discourse. In terms of social media platforms, aligning policies on genAI in political campaigns and implementing rapid-response mechanisms for AI-related incidents alongside existing measures can boost trust and safety.”
Governing AI
Clock is Ticking for Responses to UK Government Consultation on Copyright and Artificial Intelligence - The National Law Review, Carlton Daniel, Mike Llewellyn, Paul Jinks of Squire Patton Boggs, February 16, 2025
“The application of UK copyright law for the purpose of AI training is disputed, leading to inevitable high-profile tension between rights holders keen to control and be paid for use of their work, and developers who argue that this legal uncertainty is undermining investment in and development of AI in the UK. Whilst cases are making their way through the courts in the UK, there have been frequent calls for specific legislation. The UK government has launched a consultation open until 25 February 2025 inviting interested parties to submit feedback on potential changes to UK copyright legislation in light of AI. The options set out in the consultation, and on which feedback is sought, range from doing nothing, through to the introduction of broad data mining rights which would allow use of copyright works for AI training (including for commercial use), without rights holders’ permission and subject to few or no restrictions.”
Oregon AI Advisory Council Unveils Its AI Action Plan - Government Technology, News Staff, February 14, 2025
“The Oregon State Government Artificial Intelligence Advisory Council has released its final action plan with 74 recommendations to guide government use of AI. The plan recommends executive actions for the state to develop frameworks for AI governance and security as well as to address privacy and workforce needs. The principles include accountability, equity and representation, human oversight in AI governance, privacy and confidentiality, workforce preparedness and understanding, and more…. Recommendations in the action plan include executing an updated executive order that authorizes an AI governance body and appointing an AI leadership role within six months and developing metrics to be measured and publicly reported within 18 months. The report’s concluding summary notes that the state’s AI and privacy leadership ‘should plan to provide a progress report to support the 2027-29 budget development process.’”
Three Fallacies: Alondra Nelson's Remarks at the Elysée Palace on the Occasion of the AI Action Summit - Tech Policy, Alondra Nelson, February 14, 2025
At the Paris AI Action Summit, Alondra Nelson argued against the notion that AI's sole purpose is efficiency and scale, emphasizing its potential to benefit humanity and to improve lives. Nelson also challenged the false-tradeoff narrative that pits safety against progress, advocating for thoughtful governance and collaboration to drive responsible innovation. Finally, she rejected the idea that positive outcomes from AI are inevitable, stressing the need for active stewardship, public involvement, and human-centered leadership. Nelson's speech underscored that AI's development should prioritize human rights, expand opportunity, and strengthen democracy.
South Korea removes Deepseek from app stores over privacy concerns - BBC, João da Silva and Jean Mackenzie, February 16, 2025
“South Korea has banned new downloads of China's DeepSeek artificial intelligence (AI) chatbot, according to the country's personal data protection watchdog. The government agency said the AI model will become available again to South Korean users when ‘improvements and remedies’ are made to ensure it complies with the country's personal data protection laws.
In the week after it made global headlines, DeepSeek became hugely popular in South Korea, leaping to the top of app stores with over a million weekly users. But its rise in popularity also attracted scrutiny from countries around the world which have imposed restrictions on the app over privacy and national security concerns. It came after several South Korean government agencies banned their employees from downloading the chatbot to their work devices. South Korea's acting president, Choi Sang-mok, has described Deepseek as a ‘shock’ that could impact the country's industries, beyond AI.”
AI for Governance
Kerala HC rejects plea to review concept of brain death, uses ChatGPT for research - The News Minute, Jisha Surya, February 14, 2025
“The Kerala High Court – a court in the state of Kerala, India – dismissed a writ petition seeking to review the concept of brain death or brain stem death on February 10, a topic that put the medical community in the state at the centre of a raging controversy for nearly a decade. With the order, the bench of Justices A Muhamed Mustaque and P Krishna Kumar put an end to the legal battle on the medical and ethical questions raised in declaring a patient 'brain dead'. Before arriving at the final conclusion, the court chose to check the brain death policies of various countries by putting up a query on ChatGPT, the generative AI chatbot… To analyse the global policies, the court took the aid of ChatGPT. While the prompt given by the court is not clear in the order, it said that ChatGPT provided it with the prevalent policies followed regarding brain death in some of the countries....”
Global study: AI-driven government productivity efforts can’t underestimate culture - Business Review, Simona Hrincescu, February 17, 2025
“A new Economist Impact report examined the opportunities and obstacles of public sector productivity reform. The report found governments are realizing significant gains from investments in e-government, data-driven services, and AI, but these alone are not enough to deliver change. The survey reveals that adaptive organizational design and digital transformation are the most important strategies to boost productivity – and nearly equally so. Additionally, the survey indicates that agencies that have embarked on digital transformation were more likely to have successfully implemented organizational reform.” The report, funded by analytics firm SAS, is available for download here.
UK drops ‘safety’ from its AI body, now called AI Security Institute, inks MOU with Anthropic - TechCrunch, Ingrid Lunden, February 13, 2025
“The U.K. government is making a pivot into boosting its economy and industry with AI, and as part of that, it’s pivoting an institution that it founded a little over a year ago for a very different purpose. The Department of Science, Industry and Technology announced that it would be renaming the AI Safety Institute to the ‘AI Security Institute’. With that, the body will shift from primarily exploring areas like existential risk and bias in large language models, to a focus on cybersecurity, specifically ‘strengthening protections against the risks AI poses to national security and crime’. Alongside this, the government also announced a new partnership with Anthropic… the two will ‘explore’ using Anthropic’s AI assistant Claude in public services; and Anthropic will aim to contribute to work in scientific research and economic modeling. And at the AI Security Institute, it will provide tools to evaluate AI capabilities in the context of identifying security risks.”
As DeepSeek Expands, China’s Cities Roll Out ‘AI Public Servants’ - Sixth Tone, Ding Rui, February 18, 2025
“After tech firms moved quickly to adopt DeepSeek, local governments across China are now rolling out the open-source AI platform, making waves for rivaling OpenAI and Google, to automate public services. Cities across China are rolling out DeepSeek-powered AI within cloud platforms to automate governance, handling everything from administrative paperwork to public service requests. Trained for specialized tasks and tailored to individual departments, these AI systems now manage 240 administrative processes, including document processing, civil services, emergency response, and investment promotion. For instance, when drafting administrative penalties, the system generates a draft within seconds after uploading case discussion records.”
AI Infrastructure
AI crawler wars threaten to make the web more closed for everyone - MIT Technology Review, Shayne Longpre, February 11, 2025
Crawlers—automated programs that scan and index web content—now make up nearly half of internet traffic and are expected to surpass human users. These unseen systems constantly move through websites, collecting and sharing data, which companies like OpenAI use to train AI models such as ChatGPT. While they once helped websites by increasing traffic from search engines, they now power AI tools that may compete with the same sites for users. Worried about losing revenue and visibility, news organizations, artists, and developers are pushing back. In response, websites are blocking crawlers, leading to legal and technical conflicts that could reduce web diversity and change how online content is created and shared.
Musk's xAI unveils Grok-3 AI chatbot to rival ChatGPT, China's DeepSeek - Reuters, News Staff, February 18, 2025
“Elon Musk's artificial intelligence startup xAI has introduced Grok-3, the latest iteration of its chatbot, as it looks to compete with Chinese AI firm DeepSeek, Microsoft-backed OpenAI, and others. The Grok-3 debut comes at a critical moment in the AI arms race, just days after DeepSeek unveiled its powerful open-source model and as Musk moves aggressively to expand xAI's influence. The chatbot is being rolled out immediately to Premium+ subscribers on X, the social media platform owned by Musk. Musk on Monday reiterated xAI's commitment to open-source AI, saying earlier versions of Grok will be made publicly available once the latest model reaches full maturity. He expects Grok-3 to meet that benchmark in a few months.”
AI and Public Engagement
AI assistants risk misleading audiences by distorting BBC Journalism - BBC, Oli Elliott, February 2025
The BBC investigated the accuracy of AI assistants like ChatGPT, Copilot, Gemini, and Perplexity when it comes to answering questions about the news. BBC journalists reviewed AI responses to news questions about stories reported by BBC News, checking for accuracy, impartiality, and proper source attribution. According to the BBC’s report, “The answers produced by the AI assistants contained significant inaccuracies and distorted content from the BBC.” The investigation found that 51% of all AI answers to questions about the news had “significant issues,” 19% of responses citing BBC content contained factual errors, and 13% of quotes sourced from BBC articles were either altered or fabricated. The BBC urged AI companies and regulators to work together to ensure accuracy and protect the information ecosystem, emphasizing the need for control over content usage and transparency in AI processes.
Events
Hybrid Collective Intelligence: Perspectives and Challenges - HACID Project, February 25, 2025
“This webinar will explore the opportunities offered by hybrid collective intelligence, that is, the joint problem-solving abilities of humans and machines. The HACID Project will discuss perspectives and challenges with renowned experts in the field, opening the stage to participants in a final panel discussion session. Join on February 25, 2025 to learn how hybrid collective intelligence can shape future decision support systems. Speakers include Anita Woolley, Carnegie Mellon University, Taha Yasseri, Centre for Sociology of Humans and Machines, TU Dublin, and Mark Steyvers, University of California, Irvine.”
Registration information can be found here.
Power to the Public: Making Government Work Through People-First Innovation - InnovateUS, February 27, 2025
This workshop, led by Tara Dawson McGuinness, co-author of Power to the Public: The Promise of Public Interest Technology, dives into a groundbreaking framework for improving public systems through a blend of technology, design, and empathy. You will be able to understand how the three foundational principles (design, data, and delivery) can be combined to solve public problems, analyze real-world examples of how these principles have transformed public services (from ending homelessness to rat abatement to improving uptake of benefits), and address ethical challenges, including privacy concerns and equitable access to resources.
Ending Homelessness Together: Leveraging Data and Collaboration for Lasting Solutions - InnovateUS, March 11, 2025
In this 60-minute session, you’ll gain a clear understanding of how to leverage data and collaboration to build a sustainable solution to homelessness, along with insights into addressing ethical concerns around privacy and data sharing. Join us to understand the “Built for Zero” model and how real-time data and collaboration can help communities end homelessness, explore real-world examples, such as the successes in Rockford, Illinois, and Chattanooga, Tennessee, learn best practices for using data to improve efficiency and outcomes in homelessness response, and address ethical considerations, including privacy, data sharing, and equitable access to resources.
Reading Out Loud, Growing Strong: AI Tools for Fluency Development - InnovateUS, March 12, 2025
Through hands-on experience, participants will explore ASR-driven tools for literacy instruction to support diverse learners, improve reading fluency, and foster a love of reading. Join us as we focus on mastering reading fluency using AI-powered tools that leverage Automatic Speech Recognition (ASR) to guide read-aloud practice and provide personalized feedback.