
News That Caught Our Eye #15: May 8th, 2024
Published by Jay Kemp on May 8, 2024
While OpenAI and Microsoft dive deeper into election and government security work, Congress continues to take a tentative approach to drafting regulations. Furthermore, new grants fund AI research and the EU publishes the final text of its adopted AI Act. In our 15th edition, we continue to highlight the stories, research, and innovations that illuminate how AI is impacting governance and democracy.
In the news this week
Generative AI is already helping fact-checkers. But it’s proving less useful in small languages and outside the West
Journalism experts from Norway, Georgia, and Ghana are applying artificial intelligence tools in their fact-checking efforts – and encountering the “limitations of this new technology when applied to diverse geographical contexts, for example in non-Western countries or in countries with underrepresented languages in training models.” In Ghana, content moderators are also struggling to keep up with AI-generated content and troll farms that are driving election narratives. Be sure to read the whole article for interesting case studies and tool examples for AI-powered fact-checking.
Read article
Here's How Generative AI Depicts Queer People
Revisit this WIRED investigation from early April, which examines how many current AI tools and image generators portray queer people – think: white people with purple hair and lots of piercings. Harmful biases are difficult to train out of models, but one strategy is to ensure LLMs “focus on well-labeled data that includes additional representations of LGBTQ people from around the world.” Scroll to the bottom for some interesting outputs from Sora, OpenAI’s new video tool, which also seems to struggle with queer depiction.
Read article
New NSF grant targets large language models and generative AI, exploring how they work and implications for societal impacts
Northeastern University, home of the Burnes Center for Social Change, has received a $9 million grant “to investigate how large language models (LLMs) and generative AI operate, focusing on the computing process called deep inference and AI’s long-term societal impacts.” The research aims to establish a National Deep Inference Fabric – a transparent platform that lets researchers and academics observe the internal computations of large language models – and to address the gap between industry and academia.
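For readers curious what “observing the internal computations” of an LLM can look like in practice, here is a minimal sketch using the open-source Hugging Face transformers library to surface a model’s per-layer hidden states. The model choice is an illustrative assumption, not NDIF’s actual interface, which the article does not detail.

```python
# A minimal sketch of inspecting an LLM's internal activations.
# "gpt2" is a small stand-in model; NDIF targets much larger LLMs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Deep inference exposes each layer's activations.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# One activation tensor per layer (plus the embedding layer),
# each shaped (batch, sequence_length, hidden_size).
for i, layer in enumerate(outputs.hidden_states):
    print(f"layer {i}: {tuple(layer.shape)}")
```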
Read article
The EU AI Act: Final Text Published
“On 13 March 2024, the European Parliament adopted the AI Act at a first reading. Now, the corrigendum of the text has been released. This document corrected the language in the Act.” – Risto Uuk, Future of Life Institute
Read article
The limits of state AI legislation
As Congress remains largely stalled on passing federal regulations for AI, every state except Alabama and Wyoming is currently considering “some kind” of AI legislation. However, two consumer advocates warn Politico that most state laws “are overlooking crucial loopholes that could shield companies from liability when it comes to harm caused by AI decisions — or from simply being forced to disclose when it’s used in the first place.” Much of this is due to haziness around what constitutes a “trade secret,” as well as jargon around which systems are designed to be “controlling” or a “substantive” factor in decision-making.
Read article
National Cybersecurity, AI, and Congress: An Interview with Will Hurd
In an interview with former CIA officer and Texas Congressman Will Hurd, Harvard Political Review asked the politician a series of questions about security and privacy in the face of new technological threats. From a regulatory perspective, Hurd’s first concern is that AI must follow existing law on civil rights and civil liberties. He also believes privacy should be protected at all costs, including by strengthening encryption in federal IT systems and fostering cooperation among industry, the public, and the government to protect against foreign government threats. For the CIA specifically, Hurd sees a strong AI use case in analyzing large bodies of information – if the agency can maintain information security.
Read article
What’s Next for AI in States? An AI Sandbox
At the NASCIO Midyear Conference this week, Massachusetts CIO Jason Snyder pointed to The Burnes Center’s AI for Impact program as an example of a “sandbox” – the kind of working group states are turning to for concrete use cases that help government run and deliver public services more efficiently. Georgia is also working on a similar lab, which will allow agencies to experiment with AI tools and find uses and failures “fast.”
Read article
California and Other States Threaten to Derail the AI Revolution
California is leading the push for aggressive state legislation on AI, as lawmakers across the country consider creating the equivalent of 50 different computational control commissions as part of their individual regulatory agendas. The move is driven both by fear and by Congressional hesitancy to regulate too quickly, and could produce a confusing patchwork of state AI regulations that undermines innovation, investment, and competition.
Read article
Microsoft and OpenAI launch $2M fund to counter election deepfakes
To help stave off the risk of AI-generated deepfakes ahead of the upcoming election, Microsoft and OpenAI have jointly announced the Societal Resilience Fund, which will issue grants to a handful of organizations to further AI education and literacy among voters and vulnerable communities. One example from the article: grant recipient Older Adults Technology Services says it will use its grant for training programs on foundational AI understanding for those aged 50 and over.
Read article
Microsoft deploys air-gapped AI for classified defense, intelligence customers
Microsoft has for the first time deployed a generative AI model entirely isolated from the internet, cleared for classified U.S. government workloads, so intelligence agencies can now harness the technology to analyze top-secret information. End users on DOD’s classified network will be able to use the generative AI toolkit, but they will not be able to train the model on new information and data, because the model is air-gapped – meaning the secure computer network is physically isolated from unsecured networks.
Read article
The House’s AI Task Force leader says regulation can’t be rushed
In an interview with California Congressman Jay Obernolte (R), Fast Company asked the co-chair of the bipartisan Task Force on Artificial Intelligence about the task force’s mission and his perspective on “going slow” in crafting federal regulatory legislation. Obernolte touches on a range of concerns: engaging all stakeholders, avoiding industry capture, the complexity of IP law, and whether to follow the lead of entities like the European Union or to empower existing regulators. Read the interview for his full comments.
Read article
OpenAI Releases ‘Deepfake’ Detector to Disinformation Researchers
Today, OpenAI announced it will share its new tool for detecting AI deepfakes – including its own – with a small group of disinformation researchers, in hopes of real-world testing and pinpointing ways it could be improved. The company said the new detector correctly identified 98.8 percent of images created by its latest image generator, DALL-E 3. This comes alongside a series of other efforts by OpenAI to disrupt deepfake-driven misinformation, including recently joining the steering committee of the Coalition for Content Provenance and Authenticity.
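For context on what a figure like 98.8 percent means, here is a small illustrative sketch of how such a detection rate – the share of known AI-generated images the detector flags correctly, i.e. the true-positive rate – is typically computed. The detector and the counts here are hypothetical stand-ins, not OpenAI’s tool or data.

```python
# A hedged sketch of computing a detection (true-positive) rate
# over a labeled set of images. Labels mark truly AI-generated images.

def true_positive_rate(predictions: list[bool], labels: list[bool]) -> float:
    """Fraction of AI-generated images (label=True) the detector flagged."""
    flags_on_positives = [p for p, y in zip(predictions, labels) if y]
    return sum(flags_on_positives) / len(flags_on_positives) if flags_on_positives else 0.0

# Hypothetical run: 1,000 DALL-E 3 images, 988 flagged correctly.
labels = [True] * 1000
predictions = [True] * 988 + [False] * 12
print(f"detection rate: {true_positive_rate(predictions, labels):.1%}")  # 98.8%
```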
Read article

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.