Written Testimony of Dr. Beth Simone Noveck
Chief Innovation Officer and Chief AI Strategist, The State of New Jersey
Professor of Experiential AI, Northeastern University
Director, The Burnes Center for Social Change and the Governance Lab
before
The United States Senate Committee on Homeland Security and Governmental Affairs
Chairman Peters, Ranking Member Paul, honorable committee Members, thank you for this opportunity to appear before you in the company of such a distinguished panel to discuss how artificial intelligence technologies can be used to improve how governments at every level make policies, deliver benefits and services, and solve problems for and with the American people.[1]
I have the great honor to serve as the first Chief Innovation Officer for the State of New Jersey. Governor Phil Murphy appointed me to his cabinet in 2018. Prior to that, I served as the founding Deputy Chief Technology Officer of the United States and head of the White House Open Government Initiative under President Obama. I also served as a Senior Advisor to Prime Minister Cameron at 10 Downing Street in the United Kingdom and as a Member of Chancellor Angela Merkel’s Digital Council in Germany. I am also a professor of Experiential AI at Northeastern University, where I direct the Burnes Center for Social Change and its partner project, the Governance Lab, and lead our AI for Impact Coop Program, where we train the next generation of leaders and problem solvers to use AI for social good. For the last twenty years, I have designed and built public interest technology to improve governance and strengthen democracy, and I am the author of three books about governance innovation.[2]
De-Hyping AI: Tools and Methods for Data Processing
If we are to realize the benefits of artificial intelligence for improving how governments serve their residents, we need to have a common understanding of what this technology is. Despite doomerist hype and headlines, artificial intelligence is not sentient. AI comprises a set of data processing tools and methods. IBM puts it simply: “artificial intelligence is a field, which combines computer science and robust datasets, to enable problem-solving.”[3]
Using historical information as input, a computer running machine learning software learns to spot patterns and make predictions based on past examples. For example, supplied with 200,000 images of known cancerous tumors, MIT’s Mirai software can analyze new mammogram images and “predict nearly half of all incidences of breast cancer up to five years before they happen.” The tool is equally accurate for both white and Black women.[4]
Such pattern-recognition techniques have powerful relevance for a wide variety of public purposes. The NSF-funded Traffic Jam tool helps law enforcement agencies speed up the identification of human trafficking and find missing persons. Traffic Jam uses machine learning to scour online ads selling sexual services to spot those that mask modern-day forms of slavery.[5]
New generative AI platforms such as ChatGPT (made by OpenAI), Gemini (Google), and Claude (Anthropic) are examples of machine learning tools trained on large datasets of human language. Also known as large language models, they can generate fluent text. This is what makes these new generative AI applications appear so human rather than machine-like. However, generative AI is not intelligent in any meaningful sense. Having ingested trillions of words as training data, these models can replicate the patterns of human language. They do this, in essence, by predicting the most likely word to come next in a sentence in response to plain-language directions, known as prompts.
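To make the “predict the next word” idea concrete, here is a minimal, illustrative sketch in Python. It is not how production models work: real large language models use neural networks trained on trillions of words, and the tiny corpus below is invented. But the core mechanism, estimating the most likely next word from past examples, is the same.

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in a tiny corpus.
corpus = (
    "residents can apply for benefits online . "
    "residents can ask questions online . "
    "residents can apply for permits online ."
).split()

next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word observed after `word`."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("can"))        # 'apply' (seen twice, versus 'ask' once)
print(predict_next("residents"))  # 'can'
```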
Many generative AI systems are “multimodal”: they work with both spoken and written language. The chatbot Pi, short for Personal Intelligence, not only types but also talks, replicating the affect, tone, and rhythm of human conversation.[6]
Now organizations can also customize their own generative AI, training a model with specific texts and data. Thus, for example, my students in the AI for Impact Coop program at Northeastern are working with the Massachusetts Department of Transportation to create an AI tool to train new engineers.[7] The chatbot they are building with the agency this spring will be trained exclusively on existing agency documentation and procedures so that new hires can get faster answers to questions about how the agency works.
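In practice, this “train a model on your own documents” pattern is often implemented by retrieving the relevant agency text and supplying it as context, rather than by retraining the model. The sketch below illustrates the idea under stated assumptions: the documents, the `retrieve` logic, and the `ask_model` function are all hypothetical placeholders, not the MassDOT project’s actual code.

```python
# Minimal retrieval-augmented sketch: answer questions using only the
# agency's own documentation. Names and contents are invented examples.
AGENCY_DOCS = {
    "bridge_inspection.txt": "Bridges must be inspected every 24 months...",
    "new_hire_onboarding.txt": "New engineers complete safety training in week one...",
}

def retrieve(question: str) -> str:
    """Naive keyword retrieval: return the document sharing the most words
    with the question. Production systems typically use vector embeddings."""
    q_words = set(question.lower().split())
    return max(AGENCY_DOCS.values(),
               key=lambda text: len(q_words & set(text.lower().split())))

def ask_model(prompt: str) -> str:
    # Placeholder for whatever approved generative AI service an agency uses.
    # Here it simply echoes the prompt so the sketch runs end to end.
    return prompt

def answer(question: str) -> str:
    context = retrieve(question)
    prompt = (f"Answer using ONLY the agency documentation below.\n"
              f"Documentation: {context}\n"
              f"Question: {question}")
    return ask_model(prompt)

print(answer("How often must bridges be inspected?"))
```

The design choice that matters is visible in the prompt: the model is instructed to rely on the agency’s own documentation, not the open web, which is what keeps answers anchored to existing procedures.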
Individuals, too, can take advantage of these build-your-own capabilities without any prior technical knowledge. My son uploaded copies of Shakespeare’s comedies to Google’s free NotebookLM product, and ahead of a school English test the tool quizzed him exclusively on those texts, peppering him with customized study questions (and, yes, he got an A).
The Power to Create and Analyze
Generative AI acts as a next-generation word processor and offers a powerful way to create more accessible and intelligible first drafts of speeches, policy documents and government websites in multiple languages. Jurisdictions like the City of Boston, which encouraged early adoption of and experimentation with generative AI, are now using such tools to write simpler, more intelligible government websites and more compelling job descriptions.[8]
Image-generation platforms such as Midjourney, Stable Diffusion, and DALL-E create original images, helping with the design of arresting website materials to promote tourism and economic development.
Yet the most thrilling aspect of generative and older types of AI lies in their capability to analyze content, even more than to create it.
ChatGPT, Bard, Claude, Pi, and other commercially available generative AI platforms analyze and summarize as well as create text and software code (the language of computers). This capability enables governments to effectively scrutinize and modernize legacy computer code, such as COBOL, a programming language dating back to 1959, which still supports many critical public systems and for which there is a dwindling supply of knowledgeable programmers.
In New Jersey’s Office of Innovation, our talented team of engineers, designers, and policy professionals, who use technology, data, and community engagement to build better citizen services, leverages generative AI to scrutinize and test software code, aiding in the modernization of complex and dated government systems.[9] Our engineers have integrated “copilot” tools, which suggest code as they work, yielding up to 55% faster code creation.[10] Like the autocomplete that suggests the next line for your email, such aids speed up software development. Across a wide variety of benefits and services, our team also uses ChatGPT to write software tests, adding more resilience to critical digital applications.
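To illustrate the kind of output such tools produce, here is a hedged example: a small, entirely hypothetical eligibility function and the sort of pytest unit tests a generative AI assistant might draft for it. The function, thresholds, and cases are invented for illustration and are not taken from any New Jersey system.

```python
import pytest

def weekly_benefit(base_weekly_wage: float, dependents: int) -> float:
    """Hypothetical benefit calculation, used only to illustrate test generation."""
    if base_weekly_wage <= 0:
        raise ValueError("wage must be positive")
    benefit = 0.6 * base_weekly_wage + 25 * min(dependents, 3)
    return round(min(benefit, 830.0), 2)  # illustrative weekly cap

# Tests of the kind a generative AI assistant can draft from the function above.
def test_typical_claim():
    assert weekly_benefit(1000, 1) == 625.0

def test_dependent_allowance_is_capped_at_three():
    assert weekly_benefit(1000, 5) == weekly_benefit(1000, 3)

def test_benefit_never_exceeds_weekly_cap():
    assert weekly_benefit(5000, 3) == 830.0

def test_rejects_non_positive_wage():
    with pytest.raises(ValueError):
        weekly_benefit(0, 1)
```

A human engineer still reviews and runs such generated tests; the gain is that the tedious first draft of the test suite arrives in seconds rather than hours.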
Multimodal AI tools can interpret and explain images in addition to creating them. For example, try snapping a picture of the contents of your refrigerator and asking a multimodal chatbot such as ChatGPT what to have for dinner. The technology can analyze what’s in the image and make suggestions.
For public professionals, the ability to analyze images with older forms of computer vision and newer kinds of generative AI translates into the ability to assess environmental transformations, including deforestation, urban expansion, and climate change impacts, using satellite imagery and photographs. For example, trained on images of past wildfires, ALERTCalifornia’s AI system made Time Magazine’s list of inventions of 2023 because it can scan new images from 1,050 cameras to provide early warnings and reduce the risk of devastating fires. In two months alone, the UC San Diego system spotted 77 wildfires before anyone called them in.[11]
Figure 1 – ALERTCalifornia - https://ops.alertcalifornia.org/
Creating Listening and Learning Institutions
Despite having funded the creation of the Internet, the public sector lagged in its adoption. Now the public sector has the opportunity to lead on the use of artificial intelligence for public good.
AI has the potential to make government more conversational and transform how the public and government interact because of its power to sort, organize, and summarize vast amounts of data.
Most government information is trapped behind hard-to-navigate websites and inscrutable PDFs that are hard for the public and public servants to find, let alone understand.
At the same time, public knowledge, which is widely distributed, is equally hard to make sense of. Ten years ago, it was estimated that 1.8 million scientific journal articles were published each year, and that number has only increased.[12] In addition, over the last thirty years, the World Wide Web has democratized the ability to communicate. Ironically, by making it easier to speak, the Internet has also made it that much harder for us or our institutions to hear one another.
This paradox was evident when President Obama’s transition team in 2008, on which I served, invited public ideas for the first 100-day agenda.[13] The overwhelming response of over 125,000 people contributing 44,000 ideas demonstrated the challenge of too much input and not enough capacity to process it – a real-life example of the adage: “dog chases ambulance, dog catches ambulance.”
By enabling the public and institutions to be heard and understood, we create new possibilities for enhancing service delivery, supporting public sector workers, strengthening data utilization, bolstering resident engagement, and amplifying problem-solving capabilities.
I explore each of these impacts of AI briefly, in turn, below, outlining the opportunities for responsibly using AI to support better customer experience.
1. Making Government More Intelligible: Creating Conversational Government
During the COVID-19 pandemic, New Jersey’s Office of Innovation developed Covid19.nj.gov, a one-stop website providing clear, bilingual answers to public queries. Covid19.nj.gov served 110 million people during the pandemic because we provided plain language responses to questions like “where do I get a vaccine” or “how do I find COVID financial assistance programs for businesses.”
The COVID Information Hub was by no means the only such “one stop” website. However, such centralized information hubs remain the exception, not the norm, in government information dissemination. The complexity of government information can be as perplexing as determining which agency regulates which type of pizza: frozen pepperoni pizza falls to the USDA; frozen cheese pizza to the FDA.
Today, AI is making it easier for residents to get answers to their questions without needing to know where to look or even what to look for ahead of time. Now, instead of a one-way, 9-5 broadcast of information, governments could have a 24/7 conversation with residents at the public’s convenience.
In New Jersey, for example, we are using generative artificial intelligence to make it easier for residents to get answers to their questions. We have moved many call centers to platforms that support AI-based text-to-speech. This means call center staff can write and publish menu options and messages in multiple languages very quickly, giving the public access to more self-help options.
For example, in just the first three months, 15% of those calling in to ask questions about the State’s property tax relief program are resolving their issues successfully through self-serve tools, including web- and phone-based chatbots.
The small initial decrease in call volume translates into a 50% increase in the resolution of calls by human operators. Our results are consistent with recent research about private sector call centers. According to Stanford professor Erik Brynjolfsson, giving call center workers access to generative AI that offers real-time recommendations on how to respond leads to “more productive workers, happier customers, and higher employee retention.”[14] We are also implementing AI-enabled web-based chatbots that answer questions via the Web and text messaging as well as phone.
New Jersey is not alone in turning to AI to answer residents’ questions and make government more accessible. Back in 2016, the Australian government set up a virtual assistant called Alex to answer questions. In the first 18 months, Alex had over 2 million conversations with an 88% first-contact resolution rate, leading to an initial 10% reduction in calls and almost $10 million in taxpayer dollars saved through digital self-service.
Closer to home, the US Citizenship and Immigration Services has a chatbot called EMMA. Named for poet Emma Lazarus, Emma answers questions about immigration services in both English and Spanish, directing a user to the right place on the website to get an answer to an immigration question. When I worked for the Obama Administration, we set in motion the dream of being able to answer questions about where an immigrant’s application is in the queue, a dream that has rapidly become real. Four years ago, USCIS was already serving over a million users a month with this chat assistant.[15]
Northeastern students in the AI for Impact Coop program are working with the California-based nonprofit Innovate Public Schools to help families of schoolchildren with disabilities translate and summarize the Individualized Education Program (IEP) their child receives from the school district. According to the National Center for Education Statistics, 15 percent of public school students are entitled to services for individuals with disabilities.[16] Yet the Individualized Education Programs that describe the accommodations and benefits a student receives are often 50-to-100-page PDFs written in complicated legalese.[17]
Figure 2 - A-IEP - https://a-iep.com
Students in the “A-IEP” project team, working under the guidance of Tufts Professor Fahad Dogar and Northeastern Professor Sofia Bosch Gomez, worked with Innovate Public Schools and a network of parents to design a tool that enables families to “have a conversation” with the IEP. Families upload their IEP. They can then securely and privately ask questions in English or Spanish about the document, such as “what are the accommodations to which my child is entitled?” Because generative AI makes creating software so much cheaper and easier, the students built a fully functioning tool in one semester.[18]
Finally, governments typically organize the delivery of benefits and services in ways that make sense to government bureaucracies, but not necessarily to residents. For example, in many states, if I want to start a business, I must know to visit the website for a state’s secretary of state. Then I might need to visit the treasury website to pay my taxes, while the permits I need to obtain are scattered across multiple websites from environmental protection to community affairs.
In New Jersey, the Office of Innovation created a “one stop” website called business.nj.gov, where over a million people have logged on just in the last year to get answers to their business questions and 26,000 new companies have been created. An AI chatbot helps entrepreneurs get answers to questions based on information across multiple agencies.
Figure 3 - Business.NJ.Gov - https://business.nj.gov
Instead of having to know which websites to visit, residents should be able to get answers to questions from one place. Similarly, the City of Boston is integrating generative AI with the City’s open source blogging platform where information is published. Previously, someone wanting to know all the things they need to do if they move to the city (e.g. getting a parking permit for their moving van, obtaining a rental inspection, registering to vote) would have to hunt for this information in dozens of places. Or the City would have had to prepare a guidebook, pulling this information together—and risk missing something or the information becoming out of date.[19] With generative AI, this information can be extracted automatically across multiple websites, and then checked by government officials.
When residents get their questions answered faster, they do not need to sit on hold, go to a government office or visit dozens of different websites. Creating a more conversational government decreases the time, aggravation, and cost to taxpayers.
Over the next two years, every government agency at every level of government should leverage AI to: 1) offer the ability to ask and get answers to questions about policies, benefits and services 24/7; 2) consistent with longstanding federal requirements, make information available in plain English; and 3) make it easier to get answers, benefits and services from one place and across agencies in ways that are intuitive to residents and those who serve them.
2. Making Government More Accessible
Generative AI and the large-language models that underlie these tools are trained on text and speech from a variety of languages, enabling them to aid with transcription and translation and creating the opportunity to make government institutions more accessible to more diverse populations.
We have already experienced the productivity benefits of AI-enabled transcription when third-party tools like Otter, Woodpecker, and Fireflies (for some reason, AI transcription tools all have cute animal names) transcribe, summarize, and extract action items from our online meetings. Zoom and Teams both have built-in AI transcription as well. When we can find the key topics, conclusions, action items, and insights from a meeting, we can work that much more efficiently. The City of Boston is using this summarization capability in an exciting way. The City’s CIO, Santiago Garces, is supporting the city council in using generative AI to automate the creation of summaries of city council meeting minutes and votes. When the public can read a ten-word summary, instead of just a docket number, deliberations become more accessible, transparent, and accountable.[20]
Many government entities are using the ability of AI to transcribe and summarize speech to improve the quality of service delivery. In Singapore, for example, the government uses voice transcription to turn emergency calls into text. Their specially trained AI recognizes English, Mandarin and Malay as well as the local “Singlish” inflection. The faster, more accurate logging of calls is designed to improve response time and ensure that first responders have the right information when they respond to a call.[21]
Transcription services are also helping legislatures improve how they work. In India, the parliament’s Digital Sansad software takes advantage of AI to provide members and the public with real-time translation, capturing word-for-word what is said in parliament and translating into one of India's twenty-two regional languages and dialects.[22] In the Netherlands, Speech2Write not only turns speech into text but it turns spoken text into edited, written reports. In the European Union, too, the parliament transcribes plenary sessions and committee hearings with artificial intelligence. Human translators proof the translations prior to publication.[23]
AI works well in multiple languages. For example, researchers at Meta mined ten thousand hours of spoken texts for the most common languages (e.g., English, Chinese, Russian, Spanish, French, Japanese and German) and a thousand hours for other languages to create SeamlessM4T (Massively Multilingual & Multimodal Machine Translation). SeamlessM4T offers text and speech translation in an impressive array of languages. Type in any one of 101 languages and the tool will translate or voice what you said in any one of 36 languages, from Chinese to Tagalog to Western Persian. The AI model can handle several languages at the same time and combine speech and text translation. The tool is open source and therefore can be freely incorporated into other applications.[24]
With machine translation platforms such as SeamlessM4T or Google Translate and other specialty platforms for minoritized and indigenous languages, it is becoming faster and more cost effective to do translation. We can now translate government policies, procedures, forms, services, and education into multiple languages. Imagine how much better a public hospital can serve its populations, especially its veterans, when services are available in their language. Imagine how much better a resident can navigate city hall, the courthouse or the DMV when they can understand the instructions. Imagine how much more economically competitive our children will be when they can enjoy robust bilingual education enabled by AI.
The breakthroughs that paved the way for generative AI have enabled us to make radical progress in the fidelity, accuracy and speed of machine translation, enhancing accessibility and inclusivity. Because machine translation is so fast, it will be especially useful in emergencies when there is a need to disseminate time-sensitive information, as we saw during COVID. Of course, human translators are still essential for capturing nuance.
Over the next two years, government agencies should use machine translation, even if imperfect, to translate all resources into multiple languages for easier and more efficient access and understanding by residents.
3. Making Public Professionals More Effective: Supporting Public Workers with AI
To make it easier for public professionals to do their jobs well, many jurisdictions are also building employee chatbots. The City of San Francisco, for example, developed a chatbot called PAIGE (Procurement Answers and Information Guided Experience) to answer worker questions about doing business with the city so that public professionals can respond better to the public.[25] The Clerk of the Superior Court in Maricopa County, Arizona has an AI assistant known as YODA (Your Online Digital Assistant) to help its employees obtain information.[26]
Similarly, our AI for Impact students are working with MassHealth, the agency that administers Medicaid and the Children’s Health Insurance Program, to create an internal knowledge management tool that gives workers rapid answers to fast-changing policy and procedural guidelines to improve their ability to serve the public.
Generative AI also helps the New Jersey Office of Innovation and our colleagues at the State’s Department of Labor and Workforce Development draft email responses to claimants in plain and accessible language. By supporting workers, we improve how we deliver unemployment insurance. It is one of the reasons New Jersey has been able to bring down the time it takes a resident to apply for unemployment benefits by 48 minutes per application.
And on business.nj.gov, we are not offloading citizens to a frustrating phone tree. Rather, AI supports a team of expert professionals from the State’s business action center. When someone writes in with a question, a human oversees the drafting of the response. Answers are then stored in a database so that the next person to ask the same question via the chatbot benefits from the answer. The AI augments the efficiency of our public sector workers and makes that knowledge more widely available.
Finally, AI is also helping to support public sector workers in the detection of fraud and financial oversight, which, in turn, is bringing down the cost to taxpayers of delivering government benefits and services. A 2020 Administrative Conference of the United States report found that 45% of the 142 agencies surveyed were already using AI to combat fraud.[27] The U.S. government uses AI across various agencies to enhance fraud detection and streamline analysis. The SEC, for example, uses AI to assess risks in corporate filings while the IRS automates the process of spotting fraudulent tax returns. Medicare and Medicaid claim savings of nearly $1.5 billion since 2011 from spotting improper payments.
Of course, it is essential to use AI responsibly, with human oversight, to ensure that automation of financial oversight does not inadvertently lead to dangerous mistakes that deprive eligible individuals of their benefits or worse.[28] In Michigan, for example, a machine learning-based fraud detection system implemented a decade ago (and since scrapped) wrongly accused tens of thousands of individuals of fraud. A later review found that 93% of the fraud determinations were wrong, leading to garnished wages and ruined lives.
We must ensure, as the UN Special Rapporteur on Extreme Poverty and Human Rights put it, that we do not stumble “zombie-like into a digital welfare dystopia.”[29] But the same kinds of tools being used to identify fraudsters can also be used to identify public program participants who are entitled to benefits but who are not accessing services. We can do more to proactively supply people with the benefits to which they are entitled.[30]
Over the next two years, all agencies should be taking steps to support their workers with the implementation of responsible AI systems that incorporate human oversight, ensuring we take advantage of the longstanding know-how of public professionals and get the benefit of what humans and AI each have to offer. By supporting government workers with AI, we will improve state effectiveness, increase employee retention, and strengthen the talent pipeline of new hires into better functioning government.
4. Improving Government Capacity through Better Use of Data
AI, above all, is a set of tools and methods for processing data. Even when data is in machine-readable formats, large quantities of numerical data can be unwieldy, requiring assistance from data scientists who are in short supply. Much government data, too, is not well organized or structured. If we are going to collect data in government, we should use it to make government work better. AI is making data processing easier.
Agencies are using AI to analyze data from past calls, website searches, and other citizen information to streamline how they deliver benefits and services. In New Jersey, where we design government operations with rather than for residents, to ensure that we are prioritizing information and services that residents tell us they want, we actively invite citizen comment. For example, in support of unemployment insurance claims, we receive thousands of comments from residents giving us feedback. We would need an army of personnel that we do not have to read through all the comments. Instead, AI parses the comments, removes the filler, and tells our team what information the public wants.
For instance, during tax season last year we saw lots of questions about where to find a previous year’s 1099 forms, so we were able to move the answer to that question front and center, reducing call volumes and satisfying demand.
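A minimal sketch of how comments can be grouped by theme automatically follows, here using TF-IDF features and k-means clustering from scikit-learn. The comments are invented examples, and a production pipeline would also strip filler, remove duplicates, and summarize each cluster before it reaches staff; this is an illustration of the technique, not our actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented examples standing in for the thousands of comments a benefits
# program receives each week.
comments = [
    "Where can I download last year's 1099 form?",
    "I cannot find my 1099-G for my taxes.",
    "The identity verification link keeps timing out.",
    "ID verification page gives an error every time.",
    "How do I reset the password on my claim account?",
    "Password reset email never arrives.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

# Group comments by cluster so staff can scan recurring themes at a glance.
for cluster in sorted(set(labels)):
    print(f"Topic {cluster}:")
    for comment, label in zip(comments, labels):
        if label == cluster:
            print("  -", comment)
```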
Amsterdam is doing something similar with the data from its version of 311 calls. The city uses AI to analyze data from calls and social media posts to identify areas of the city that are experiencing problems such as littering or noise pollution and target services to those areas. The city can then deploy scarce resources to address these issues more quickly and efficiently.[31]
Google's AI-powered Flood Hub provided crucial flood warnings in Chile in 2023, enabling tens of thousands to evacuate ahead of impending floods and averting potential disasters. Launched initially in India in 2018, it has since expanded to over 80 countries. Using thousands of satellite images to create digital land models and combining them with weather forecast data, Flood Hub can predict riverine flooding days in advance and send out alerts to residents, local leaders, and media broadcasts.[32]
AI-enabled smart infrastructure could be used more to intelligently prioritize needs, from road to bridge repair.[33] For instance, digital scanning equipment, combined with AI 3D mapping, can create precise digital replicas of urban areas, pinpointing issues like potholes and evaluating conditions. This predictive approach not only cuts costs but also allows for more frequent and accurate inspections. Regular scans enable continuous monitoring of deterioration, fostering efficient maintenance schedules and long-term savings.
The challenge of needing to do more with less is becoming increasingly urgent. With fewer than 7% of federal workers under the age of 30 (compared to 20% in the US workforce) and many federal workers over the age of sixty facing retirement, workforces will become depleted without reinforcements.[34]
At the Food and Drug Administration (FDA), for example, just over 1,200 people are responsible for overseeing the safety of the nation’s food supply. That’s 1,200 people who are supposed to inspect 300,000 restaurant chain establishments, 275,000 food processing facilities, and 35,000 produce farms![35] We bear the consequences of this limited staffing: the Centers for Disease Control and Prevention estimates that 48 million people get sick, 128,000 are hospitalized, and 3,000 die from foodborne diseases each year in the United States.[36] The dangers of the FDA's shortcomings were exemplified by its delayed response to safety violations at Abbott's baby formula factory, which resulted in babies becoming ill and some dying in 2022. The failures are hardly a surprise given that only nine people work in the department overseeing baby food inspections.
Imagine every investigative agency now augmenting its workforce using AI to analyze available data faster and better and improve its response rates.
Over the next two years, every agency, especially those with investigative responsibilities for health, safety, and welfare, should leverage artificial intelligence to improve its performance and strengthen its capacity to protect the American people.
5. Improving Public Engagement
The Web—and the social media we developed on it—has often left us drowning in too much information and misinformation. People might talk but it is very difficult for either the agency or the public to hear because of the volume of comments.
The Administrative Procedure Act of 1946 was a landmark piece of legislation that granted Americans the right to comment on pending federal regulations. In 2017, the Federal Communications Commission issued a draft regulation on Internet neutrality seeking to overturn an Obama-era rule banning Internet Service Providers from loading certain websites faster than others. The rulemaking drew a staggering 22 million public comments.
Not surprisingly, only six percent of the net neutrality comments were unique. The deluge of “astro-turfed” comments was both widespread and systematic. Ninety-four percent of comments were duplicates, some submitted hundreds of thousands of times, many under false names, including 7 million from a single account. Even with “only” 1.32 million non-duplicates, that is too many for the public or policymakers to read.[37]
Enter AI. There exists a wide array of tools to make citizen engagement more effective and enable both the public and the institution to make sense of what is being said. Agencies should be using these tools—and supporting the creation of new ones—to enable broad public participation in, and oversight over, how agencies use AI.[38]
Some federal agencies have used so-called de-duplication software to remove identical comments, enabling agencies to spend more time reading unique comments. Such de-duplication software has been around for over a decade but has been used inconsistently. Generative AI would make it easy to remove duplicates and extract unique comments to inform the crafting of regulations.[39]
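The underlying technique can be as simple as hashing a normalized version of each comment so that trivially altered copies of a form letter collapse into one submission. The sketch below illustrates that idea with invented comments; real systems also catch near-duplicates with fuzzier matching or text embeddings.

```python
import hashlib
import re

def fingerprint(comment: str) -> str:
    """Normalize case, punctuation, and whitespace, then hash, so that
    lightly edited copies of a form letter share one fingerprint."""
    normalized = re.sub(r"[^a-z0-9 ]", "", comment.lower())
    normalized = " ".join(normalized.split())
    return hashlib.sha256(normalized.encode()).hexdigest()

comments = [
    "I OPPOSE this rule. It harms consumers.",
    "I oppose this rule.  It harms consumers!",   # duplicate with minor edits
    "Please keep the existing protections in place.",
]

seen, unique = set(), []
for c in comments:
    fp = fingerprint(c)
    if fp not in seen:
        seen.add(fp)
        unique.append(c)

print(f"{len(unique)} unique of {len(comments)} submitted")  # 2 unique of 3
```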
Over 500 governments use the Citizen Lab platform, which incorporates AI to cluster, group and organize public comments.[40] AI is much faster than humans at making sense of large quantities of text and can automate the process of summarizing what people are saying, classifying submissions by topic, and sorting them into categories to make it easier for governments and the public to read.
The city of Hamburg in Germany uses natural language processing to sift through thousands of resident contributions about city planning on DIPAS (short for DIgital PARticipation System). Drop a digital pin on a map of the city and append your proposal about planned urban development projects. In response to a recent request for comments on a new cycling concept for the city’s westernmost borough, residents submitted over 3,000 contributions, too many to read. But the platform’s AI tools automate the analysis of the contributions, making it possible to cluster, visualize, and organize the steadily increasing number of resident contributions. The system is even sophisticated enough to understand the Plattdeutsch dialect, local government acronyms, and specialist planning jargon.[41]
Your Priorities, a free, open-source tool for public engagement from Citizens Foundation and used in thousands of online engagements globally, uses AI to scan incoming postings for toxicity. AI automates the process of spotting offensive speech that violates terms of service and community norms, bringing such language to the attention of moderators and reducing the cost of organizing engagements. Instead of wading through every submission, both the public and policymakers can ask Your Priorities’ AI (it incorporates ChatGPT “under the hood”) to summarize key ideas, spot weaknesses, and even suggest improvements to ideas submitted. Anyone can ask the website to: “show me all the comments on Topic X” or “summarize the best comments on Topic Y.”[42]
At MIT, the Cortico project uses machine learning to synthesize comments in live, face-to-face discussions. In Madison, Wisconsin, for example, small groups of citizens participated in community conversations about the city’s police, which had come under scrutiny for violence against Black residents. The conversations were recorded using Cortico. The machine learning tool highlighted topics of common concern and these became the basis for interview questions for the four finalists for the chief of police job.
For those who struggle with language barriers, tools like ChatGPT or Claude can translate poorly worded ideas into well-crafted contributions. Those not fluent in the dominant language or who are simply not good writers can turn a vague idea into clear, clean prose. Prompt one of these free genAI tools to write a fifty-word proposal for a plastic bag tax and the machine returns: “To combat plastic pollution, we propose implementing a bag tax. Data shows that after Ireland introduced a €0.15 tax in 2002, plastic bag usage dropped ninety percent within a year, significantly reducing litter and promoting reusable bags. This tax would encourage sustainable behavior, cut landfill waste, and generate revenue for environmental projects.”
Image generation tools are also enabling citizen engagement. UrbanistAI, a Finnish-Italian initiative, is using AI to turn the public’s ideas for how their city should be designed into hyper-realistic photographs that communities can discuss. UrbanistAI facilitates co-design workshops in urban planning around the world. In Helsinki, the technology is helping residents and city officials to design car-free streets together. Using AI prompts, participants can visualize changes like adding planters or converting roads into pedestrian zones. The technology even incorporates a voting feature, allowing community members to weigh in on each other’s designs. Now you don’t need a degree in urban planning or artistic skills to see how your ideas could transform your community.[43]
Figure 4 - UrbanistAI - https://urbanistai.com
Since 2022, Car Free America has been posting images on Instagram, Facebook, YouTube, and TikTok. Instead of telling the viewer about their dream for cities with fewer cars and more welcoming architecture, the urban planning activist behind the channel uses generative AI to show an alternative, human-centered vision for downtowns like those of Cincinnati, Fort Wayne, and Austin.[44]
Used well, these tools portend a powerful new era of citizen collaboration and codesign with more inclusive and diverse participants.
Over the next two years, CAIOs should adopt innovative public engagement through AI, ensuring that every voice is heard and accounted for in the policymaking process. AI-enabled public engagement should be used to advance public input into the governance and use of AI by government. To take advantage of AI’s unmatched potential to analyze public sentiment, manage feedback, and scale engagement across diverse demographics, federal agencies should implement AI solutions that facilitate and enhance public consultations.
6. Solving Complex Problems More Effectively
In addition to helping us make sense of too much information, artificial intelligence is also enabling new forms of complex problem solving that were not possible before.
We are also turning to artificial intelligence to unlock the experience and know-how of global experts. AI is making it faster and easier to identify innovative strategies to combat election-related violence and election subversion and strengthen our democracy.[45]
The first step in tackling any complex challenge is to break it down into smaller, more manageable problems. Election subversion, for example, comprises myriad issues from media-fueled doubt about election integrity to violence against election officials to vulnerabilities (real and perceived) with election technology.
But identifying those constituent problems typically involves weeks of research and interviews followed by additional months, if not years, of due diligence to figure out what’s been tried, whether what’s been tried has worked, and whether what has worked elsewhere is transferable and likely to work in additional communities.
To help speed up the process of defining the problems and coming up with solutions to election subversion, my team at the GovLab enlisted the expertise of the Icelandic civic tech entrepreneur Robert Bjarnason. Bjarnason has been designing platforms used in over ten thousand citizen engagements globally since 2008, including Your Priorities.
Together, we invented the free, open source toolkit Policy Synth to increase the speed, accuracy and scale of “smarter crowdsourcing” using a fine-tuned version of GPT-4, OpenAI’s multimodal large language model.
Policy Synth uses AI to improve complex policymaking. It automates the creation of over a thousand different search queries, from general to scientific to data-specific and news-related, to conduct a comprehensive search for problems and their root causes. This enabled us to break down the complex problem of “election subversion” into myriad smaller challenges automatically, identifying several dozen more tractable challenges.
Figure 5 - Policy Synth - https://policy-synth.ai
From among the longer list of problems, we selected which topics we wanted to focus on. For example, one specific topic was the misuse of the administrative and legal systems. Election deniers have knowingly filed multiple malicious lawsuits with the goal of overturning electoral outcomes or filed frivolous public records requests with no real purpose but to gum up the works of the election system.
In 2023, for example, we rapidly convened 35 specialists for a two-hour online conference via Zoom where they proposed 14 solutions to the legal abuse problem, such as investing in professional organizations with disciplinary authority to punish malicious lawyers and improving education about professional responsibility in law schools. AI helped us to summarize and extract the learnings from two hours of simultaneous talking and typing in minutes, rather than days. We repeated such online convenings for other topics.
In parallel to asking people, we also asked Policy Synth to generate its own list of solutions. GPT agents searched the Web to identify solutions that are responsive to the problem. After generating hundreds of solutions, we automated the process of removing duplicates and isolating only those solutions that are relevant for a philanthropy (as opposed to a government or company).
This filtering process, which Bjarnason calls “reaping,” produced a list of 60 solutions for each identified problem, each accompanied by a visual illustration from the image-generation tool StabilityAI, in a human-readable format with pros and cons for each solution.
Policy Synth yielded the same 14 solutions to legal abuses as those identified by the human experts but also introduced additional solutions, such as establishing a legal defense fund for administrative officials and mental health support for election workers.
Policy Synth does not just generate solutions, it also evolves the recommendations using a genetic algorithm. The software combines recommendations and then tests how well the new version of the solution fits the stated problem to see if the improvement should be adopted or rejected. With fifteen rounds of such mutation and ranking, Policy Synth produces a final list of approaches tailored to addressing the problem.
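A heavily simplified sketch of that evolve-and-test idea appears below. It is not Policy Synth’s actual code: the “features,” the target mix, and the fitness function are invented stand-ins for the LLM-based judgments the real system applies to full policy proposals. The sketch shows the mechanics: score candidate solutions, keep the best, and produce new candidates by combining and mutating them over repeated rounds.

```python
import random

random.seed(0)

# Candidate solutions are bags of policy "features"; the fitness function is a
# stand-in for the LLM-based judgment applied to real proposals.
FEATURES = ["legal defense fund", "bar discipline", "law school training",
            "mental health support", "records request triage", "public reporting"]
TARGET = {"legal defense fund", "bar discipline", "mental health support"}

def fitness(solution: set) -> float:
    # Reward overlap with the (hypothetical) ideal mix, penalize bloat.
    return len(solution & TARGET) - 0.25 * len(solution - TARGET)

def crossover(a: set, b: set) -> set:
    child = {f for f in a | b if random.random() < 0.6}
    if random.random() < 0.3:                      # mutation: toggle one feature
        child ^= {random.choice(FEATURES)}
    return child or {random.choice(FEATURES)}

population = [set(random.sample(FEATURES, 2)) for _ in range(8)]
for _ in range(15):                                # fifteen rounds, as in the text
    population.sort(key=fitness, reverse=True)
    parents = population[:4]                       # keep the best half
    children = [crossover(random.choice(parents), random.choice(parents))
                for _ in range(4)]
    population = parents + children

print(max(population, key=fitness))
```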
Policy Synth also employs Elo Scoring to rank the solutions. Named after chess master Arpad Elo, Elo Scoring shows how skilled a chess player is, not by counting wins alone, but by weighing whether each win came against a better or worse player. This pairwise comparison reveals how good a player is relative to the field.
Similarly, the Policy Synth AI compares each solution one to the other and scores them based on requested criteria such as implementation speed, cost, potential for political disagreement, or impact on women or African Americans.
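A minimal sketch of Elo-style pairwise ranking of solutions follows. Each “match” asks which of two solutions better satisfies a criterion, a judgment an LLM or a human expert can supply, and ratings are updated with the standard Elo formula. The `judge` function below is an arbitrary placeholder for that comparison, and the solutions listed are illustrative.

```python
import itertools

def expected_score(rating_a: float, rating_b: float) -> float:
    """Standard Elo expectation: probability that A 'beats' B."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a: float, rating_b: float, a_won: bool, k: float = 32):
    exp_a = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    return (rating_a + k * (score_a - exp_a),
            rating_b + k * ((1 - score_a) - (1 - exp_a)))

solutions = ["legal defense fund", "bar discipline", "mental health support"]
ratings = {s: 1000.0 for s in solutions}

def judge(a: str, b: str) -> bool:
    # Placeholder for the pairwise judgment ("which better meets the criterion,
    # e.g. implementation speed or cost?") that an LLM or expert would supply.
    return len(a) < len(b)   # arbitrary stand-in rule

for a, b in itertools.combinations(solutions, 2):
    ratings[a], ratings[b] = update(ratings[a], ratings[b], judge(a, b))

print(sorted(ratings.items(), key=lambda kv: kv[1], reverse=True))
```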
Thus, we were able to take recommendations generated by AI and by human experts and use one to rate and rank the other’s proposals. To be clear, the decision is not left to an AI algorithm. Rather, organizers working with human experts are leveraging AI as a research aide.
Now we are working with the Burnes Center for Social Change, the Museum of Science, New England’s largest cultural institution, and Boston Public Schools, to ask people nationwide about the crisis of literacy in America. According to the Nation’s Report Card, only 33% of fourth graders were proficient readers in 2022, and we want to understand why this problem persists.[46] Policy Synth has helped us to conduct the research to identify 150 possible root causes of the problem of low literacy so we can ask parents, students, and educators to say which problems are the most important prior to combining human and machine intelligence to search for solutions.
Over the next two years, agencies should experiment with combining artificial intelligence and collective intelligence. When we can blend machine precision with human wisdom, this has the potential to accelerate how we solve problems and deepen democracy. As we navigate this new frontier, let's not forget: technology can inform, but people decide.
A Note on Responsible Use
In this testimony, I focus largely on the benefits of AI use for the public sector. However, that use must be ethical and responsible. In New Jersey, we tell our public professionals to abide by four core principles that define responsible use: Empowerment, Inclusion and Respect, Transparency and Accountability, and Innovation and Risk Management.[47]
Empowerment focuses on harnessing AI to enhance our services and products, ensuring they are delivered efficiently, safely, and equitably. This approach relies on the judgment and expertise of our professionals, leveraging AI as a tool to augment their capabilities.
Inclusion and Respect highlight the importance of using AI to uplift communities, particularly those historically marginalized. We aim to utilize these tools in a manner that embodies our values of equity and social justice, ensuring that every community has access to the necessary resources to thrive.
Transparency and Accountability are central to building trust and facilitating collective learning. When using AI, it is crucial to disclose its involvement openly, sharing workflows with other public servants and the public to foster a transparent environment.
Innovation and Risk Management encourage responsible experimentation while maintaining control over privacy and security. We understand that the risks associated with AI may not always be apparent, and thus commit to ongoing risk assessment.
To help mitigate risk, we suggest four key tactics that individuals must adopt when using new generative AI tools:
1. Ask - Early and Often - The more you experiment with different ways of steering the tools, the faster you will learn how to instruct them to yield the best results and avoid mistakes and problems.
2. Fact Check - Verify all AI-generated content, especially for public use, watching out for incorrect facts, events, links, or references, biased, or harmful information and getting information reviewed before posting.
3. Disclose - Label content created with generative AI as such.
4. Sensitive Information - When prompting the AI or using AI models, never input sensitive or private information.
Of course, the risks arising from the use of new technologies are myriad and go far beyond individual user error. They range from malevolent attacks, described at length in the most recent report from the National Institute of Standards and Technology, to badly designed tools that simply do not accomplish their stated purpose, to tools that are designed to earn a profit at the expense of the public interest or that rob humans of decision-making autonomy.[48]
We cannot weigh what is acceptable risk, however, without also understanding the benefits. Because the risks arising from the use of technology are covered in depth elsewhere, I have intentionally focused this testimony on ensuring we understand the potential benefits.
Building an AI-Ready Public Sector
If we want to realize the benefits of AI for serving residents, there are two immediate priorities: opening more data and training public professionals.
Congress should redouble its commitment to opening government data to power the AI revolution. We need large quantities of data to train AI models, especially generative artificial intelligence. Government data, which is already required to be open and publicly accessible in machine-readable formats without legal or technical restrictions under the Open Government Data Act, has helped to train large language and other machine learning models. SEC data, patent data, and other federal agency data that we opened up as part of the White House Open Government initiative in the Obama administration has been instrumental in enabling the creation of better AI.[49] Now, to improve the robustness of our AI tools and avoid the need for an AI company to take advantage of the copyrighted content developed by another company, Congress should ensure that agencies have the resources to create and publish data in machine-readable formats, going from promises to practice and opening up the data that taxpayers own and have already paid to collect.[50]
Now to create a federal government capable of using AI to better serve the public, Congress should build on the historic strides of the AI Training Act spearheaded by this Committee, and expand training to the entire federal workforce, not just senior officials. Furthermore, training should focus on how to use AI to make government services better, rather than on awareness of AI. Since we cannot hire enough AI professionals fast enough, we must create them by mandating broad, free training in AI.
President Biden's 2023 Executive Order on Artificial Intelligence aptly calls upon agencies to “increase the availability and use of AI training and familiarization programs for employees, managers, and leadership.” While there is an abundance of free AI content available online, the vast majority is designed for private sector use.
There is urgent demand for training tailored to the unique needs and responsibilities of public professionals.[51]
InnovateUS, which I lead, delivers free, independent, online training in responsible AI use to public sector professionals. We are run by public servants for public servants and governed by a board of public sector professionals. Philanthropically funded, non-partisan and free to all learners, InnovateUS has committed to train at least 50,000 learners over the next three years.[52]
InnovateUS delivers free, weekly, skill-building workshops on practical AI topics, featuring luminaries like Santiago Garces, CIO of Boston, teaching generative AI policy writing, and Chris Rein, CTO of New Jersey, explaining how to be an AI evangelist. Jennifer Anastasoff and Cassandra Madison, leading figures from the Tech Talent Project, delve into the crucial subject of bringing AI talent into government service. Later this month, I’ll teach how to use AI text tools and how to use AI image tools, reprising workshops I delivered before Christmas, each with hundreds of participants.
We have videos online offering hands-on tutorials on how to use generative AI tools. A multi-part, at-your-own-pace course on responsible AI use will launch early in 2024, following consultation with federal and state public servants and experts from industry, academia and civil society, co-hosted by the Partnership for Public Service and the Beeck Center. Courses will include in-depth instruction in how to use AI in government as well as how to create AI-ready organizations.
In a world where AI is underpinning virtually every technological advancement, every government official, regardless of their role or background, must acquire a foundational understanding of these technologies, their potential, and their ethical implications. Governor Murphy has made the commitment to upskilling public professionals in New Jersey in collaboration with InnovateUS.
AI training for public servants should be free of charge, as it is in many other countries. Yet on the civilian side, the Office of Personnel Management is required to charge a fee for its training programs. Even if the individual applies to their agency for reimbursement, too often programs do not have budgets set aside for up-skilling. If we want public servants to understand AI, we cannot charge them for it.
AI training for public servants should also be easy to find. In Germany, the federal government’s Digital Academy offers a single site for digital up-skilling to ensure widespread participation.[53] By contrast, in the United States, every federal agency has its own (and sometimes more than one) website where employees look for training opportunities. While the Department of Defense has started building USALearning.gov so that all employees could eventually have access to the same content, this project needs to be accelerated.
Data on the outcomes of AI training should be collected and published. The current absence of data on federal employee training prevents managers, researchers, and taxpayers from properly evaluating these training initiatives. More comprehensive information about our public workforce, beyond just demographics and job titles, could be used to measure the impact of AI training on cost savings, innovation, and performance improvements in serving the American public.
Summary of Recommendations
Used responsibly, AI has the potential to transform how governments at every level deliver benefits and services, dramatically enhancing customer experience and, I hope, improving rates of trust in government.
To realize this vision of AI-enabled public administration, Congress should support:
- AI-Enabled 24/7 Information Services: Agencies to implement AI systems to provide round-the-clock information services, allowing residents to ask and receive answers about policies, benefits, and services anytime.
- Cross-Agency Information Integration: Agencies to prioritize projects that consolidate information across various websites and agencies, presenting it in ways that are intuitive to both residents and service providers.
- Plain English Information Accessibility: Agencies to ensure that information is available in plain English, adhering to federal requirements for clarity and comprehensibility.
- Multilingual Translation of Information: Agencies to use machine translation, despite its imperfections, to translate all resources into the major languages spoken within their communities for more accessible and efficient understanding by residents.
- Supporting Workers with Responsible AI: Agencies to integrate responsible AI systems with human oversight to support their workforce. This will enhance government effectiveness, improve employee retention, and attract new talent, while leveraging the combined strengths of human expertise and AI.
- AI for Improved Agency Performance and Investigations: Especially in agencies responsible for health, safety, and welfare, artificial intelligence should be leveraged to improve inspections, enhance performance, and strengthen the capacity to protect citizens and overcome capacity deficits.
- AI to Reduce Fraud: Agencies to use AI specifically designed to analyze large datasets, identifying patterns, anomalies, and potential areas of concern, particularly in spending and benefit distribution, integrating AI technologies with human oversight for real-time monitoring and swift responses to irregularities. An emphasis on public transparency and accountability is essential, with regular publication of the outcomes.
- Adopt AI for Public Engagement about AI: Chief Artificial Intelligence Officers (CAIOs) to employ AI for innovative public engagement, ensuring comprehensive representation in policy making. Such AI-enabled public engagement tools can help to support broad and diverse public participation, helping agencies improve how they use AI.
- AI Solutions for Rulemaking: Agencies to implement AI solutions that facilitate and enhance public participation in rulemaking, harnessing AI’s ability to analyze public sentiment, manage feedback across diverse demographics, and extract ideas to inform how rules are crafted.
- Combining AI with Collective Intelligence: Agencies to experiment with merging artificial intelligence and collective intelligence, aiming to accelerate problem-solving and deepen democratic processes. This approach blends machine precision with human insight.
- Opening Data to Grow the AI Economy and Improve Governance: Agencies to intensify efforts to open government data, ensuring it is in machine-readable formats to facilitate the training of AI models. This involves providing the necessary resources to federal agencies for creating and publishing data, thereby fulfilling the Open Government Data Act's mandate, and leveraging taxpayer-funded data to enhance AI robustness and innovation.
- AI Training for Public Servants: Congress should implement free, accessible AI training for all federal employees, beyond senior officials, focusing on practical applications of AI in government services. This training should be centralized for ease of access, like Germany's Digital Academy, and include comprehensive data collection on training outcomes to evaluate its impact on public service efficiency and innovation.
Conclusion
There is much hand wringing about possible devastating consequences of artificial intelligence. Many tech professionals have signed a manifesto that alarmingly declares: “Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”
But the doomsday prognosticating is only half the story. These remarkable technologies for organizing information could hold the key to designing better institutions, capable of listening and learning more efficiently, and responding more effectively to the challenges of our times. If we want to realize the benefits of these powerful technologies for improving governance, strengthening resident engagement, and deepening democracy, we must invest more in creating that future.
Notes
[1] Non-Endorsement: The technologies referenced in this document are discussed as examples of artificial intelligence tools in use in government to improve customer experience. Their mention does not constitute an endorsement of the technologies or the companies behind them. I have no relationship to and derive no financial benefit from these firms.
[2] For a complete bio, please see Beth Simone Noveck, https://thegovlab.org/beth-simone-noveck.html, accessed January 7, 2024.
[3] IBM, What is Artificial Intelligence?, available at https://www.ibm.com/topics/artificial-intelligence, accessed January 7, 2024.
[4] Steven Zeitchik, “Is artificial intelligence about to transform the mammogram?,” Washington Post, December 21, 2021, https://www.washingtonpost.com/technology/2021/12/21/mammogram-artificial-intelligence-cancer-prediction.
[5] Traffic Jam, https://www.marinusanalytics.com/traffic-jam, accessed January 7, 2024.
[6] Pi, https://pi.ai/talk, accessed January 7, 2024.
[7] For more about the AI for Impact Coop, where students work full-time for six months on paid civic AI projects accompanied by a course on product-based learning and ethical AI, see The Burnes Center for Social Change at https://burnes.northeastern.edu/ai-for-impact-coop/, accessed January 7, 2024.
[8] Beth Simone Noveck, “Boston Isn’t Afraid of Generative AI,” Wired, May 19, 2023, https://www.wired.com/story/boston-generative-ai-policy.
[9] Ben Weiss, “Can AI fix Wall Street’s ‘spaghetti code’ crisis? Microsoft and IBM are betting that it can,” Fortune, October 9, 2023, https://fortune.com/2023/10/09/generative-ai-cobol-code-wall-street-ibm-microsoft.
[10] Eirini Kalliamvakou, “Quantifying GitHub Copilot’s impact on developer productivity and happiness,” Github, September 7, 2022, https://github.blog/2022-09-07-research-quantifying-github-copilots-impact-on-developer-productivity-and-happiness.
[11] Press Release: “ALERTCalifornia and CAL FIRE’s fire detection AI program named one of TIME’s Best Inventions of 2023: The artificial intelligence tool improves firefighting capabilities through the use of actionable, real-time data,” October 24, 2023, https://alertcalifornia.org/alertcalifornia-and-cal-fires-fire-detection-ai-program-named-one-of-times-best-inventions-of-2023.
[12] Rose Eveleth, “Academics Write Papers Arguing Over How Many People Read (And Cite) Their Papers,” Smithsonian, March 25, 2014, https://www.smithsonianmag.com/smart-news/half-academic-studies-are-never-read-more-three-people-180950222.
[13] Citizens Briefing Book is described in Beth Simone Noveck, Smart Citizens, Smarter State, Harvard University Press, 2016.
[14] Katia Savchuk, “Generative AI Can Boost Productivity Without Replacing Workers,” Insights by Stanford Business, https://www.gsb.stanford.edu/insights/generative-ai-can-boost-productivity-without-replacing-workers, citing Erik Brynjolfsson, Danielle Li, and Lindsey R. Raymond, “Generative AI at Work,” NBER Working Paper No. 31161, National Bureau of Economic Research, Cambridge, MA, April 2023, revised November 2023, DOI: 10.3386/w31161.
[15] US Citizenship and Immigration Services, “Hello, I'm Emma. How may I help you?,” April 17, 2019, https://www.youtube.com/watch?v=9MQFewDeaCM.
[16] National Center for Education Statistics, “Students with Disabilities,” last updated May 2023, https://nces.ed.gov/programs/coe/indicator/cgg/students-with-disabilities.
[17] A-IEP, https://a-iep.com, accessed January 7, 2024.
[18] Ibid.
[19] Phone interview with Santiago Garces, January 4, 2024 (notes on file with author).
[20] Phone interview with Santiago Garces, January 4, 2024 (notes on file with author).
[21] Isabelle Lieuw, “SCDF turns to artificial intelligence to help emergency call dispatchers,” Straits Times, July 9, 2018, https://www.straitstimes.com/singapore/scdf-turns-to-artificial-intelligence-to-help-emergency-call-dispatchers.
[22] Parliament of India, https://sansad.in/, accessed January 7, 2024.
[23] “Artificial Intelligence: Innovation in parliaments,” Inter-Parliamentary Union Innovation tracker, Issue 4, Feb 12, 2020, https://www.ipu.org/innovation-tracker/story/artificial-intelligence-innovation-in-parliaments.
[24] “Bringing the world closer together with a foundational multimodal model for speech translation,” Meta AI Blog, August 22, 2023.
[25] Colin Wood, “Meet PAIGE, San Francisco’s promising young IT procurement chatbot,” StateScoop, March 6, 2018, https://statescoop.com/san-francisco-procurement-chatbot.
[26] “AI to Improve the Customer and Employee Experience, Clerk of the Superior Court of Maricopa County,” https://cocappagents.maricopa.gov/experience/index.html, accessed January 7, 2024.
[27] Report: “Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies,” Administrative Conference of the United States, February 19, 2020, https://www.acus.gov/document/government-algorithm-artificial-intelligence-federal-administrative-agencies.
[28] Darrell M. West, “Using AI and machine learning to reduce government fraud,” Brookings, September 10, 2021, https://www.brookings.edu/articles/using-ai-and-machine-learning-to-reduce-government-fraud/.
[29] Report of the Special Rapporteur on extreme poverty and human rights, October 11, 2019, https://www.ohchr.org/Documents/Issues/Poverty/A_74_48037_AdvanceUneditedVersion.docx.
[30] https://rebootdemocracy.ai/blog/ai-in-gov
[31] Caroline Sinders, “People think in problems: How Amsterdam is developing civic AI to address citizens' service requests,” Accelerate with Google Blog, https://accelerate.withgoogle.com/stories/people-think-in-problems-how-amsterdam-is-developing-civic-ai-to-address-citizens-service-requests, accessed January 7, 2024.
[32] Adele Peters, “Google launches Flood Hub in the U.S., which predicts when rivers will flood and warns people to evacuate,” Fast Company, October 10, 2023, https://www.fastcompany.com/90964575/google-launches-flood-hub-in-the-u-s-which-predicts-when-rivers-will-flood-and-warns-people-to-evacuate.
[33] Jinqin Gao, “AI-Based Video Analytics for Vehicle and Pedestrian Detection, Tracking, and Speed Estimation Using Traffic Cameras: Applications and Opportunities,” C2Smart, November 16, 2022, https://www.nyc.gov/assets/ddc/downloads/town-and-gown/VisionZeroPartV/Session%20C%20-%20Speeding%20-%20Jannie%20Gao.pdf.
[34] “Fed Figures: COVID-19 and the Federal Workforce,” Partnership for Public Service, https://ourpublicservice.org/fed-figures/fed-figures-covid-19-and-the-federal-workforce, accessed January 7, 2024.
[35] “FDA at a Glance,” https://www.fda.gov/about-fda/economics-staff/fda-glance, and “FDA Detail of Full-Time Equivalents,” Food and Drug Administration, https://www.fda.gov/media/132813/download?attachment, accessed January 7, 2024.
[36] “Burden of Foodborne Illness: Overview,” Centers for Disease Control, https://www.cdc.gov/foodborneburden/2011-foodborne-estimates.html, last updated November 5, 2018.
[37] Steve Balla, Reeve Bull, Bridget Dooling, Emily Hammond, Michael Herz, Michael Livermore, and Beth Simone Noveck, Mass, Computer-Generated, and Fraudulent Comments (June 1, 2021) (report to the Admin. Conf. of the U.S.), https://www.acus.gov/sites/default/files/documents/Final%20Report%20on%20Mass%2C%20Computer-Generated%2C%20and%20Fraudulent%20Comments%20%28Final%2006-01-2021%29_0.pdf.
[38] Beth Simone Noveck, “AI for the People: A Federal Mandate for Inclusive Engagement,” Reboot Democracy Blog, November 3, 2023, https://rebootdemocracy.ai/blog/ai-for-the-people-a-federal-mandate-for-inclusive-engagement.
[39] Ibid.
[40] I am an unpaid member of Citizen Lab’s advisory board. Citizen Lab, https://www.citizenlab.co, accessed January 7, 2024.
[41] “DIPAS: Digitale Bürgerbeteiligung weiter denken” (“DIPAS: Taking digital public participation further”), https://dipas.org, accessed January 7, 2024.
[42] See Citizens Foundation, https://www.citizens.is, accessed January 7, 2024.
[43] See Urbanist AI, https://urbanistai.com, accessed January 7, 2024.
[44] See Car Free America, Instagram, https://www.instagram.com/car_free_america/reel/CxtKn7Uu7MX, accessed January 7, 2024.
[45] Beth Simone Noveck, “How AI Could Restore Faith in Our Democracy,” Fast Company, January 9, 2024, https://fastcompany.com/91001497/ai-faith-in-democracy.
[46] See Unlocking Literacy, https://unlockingliteracy.ai, accessed January 7, 2024.
[47] State of New Jersey Interim Guidance on the Use of Generative AI, NO.: 23-OIT-007, September 19, 2023, https://www.nj.gov/circulars/23-oit-007.pdf.
[48] Apostol Vassilev, Alina Oprea, Alie Fordyce, and Hyrum Anderson, “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” NIST Trustworthy and Responsible AI, NIST AI 100-2e2023, January 4, 2024, https://doi.org/10.6028/NIST.AI.100-2e2023.
[49] Kevin Schaul, Szu Yu Chen, and Nitasha Tiku, “Inside the secret list of websites that make AI like ChatGPT sound smart,” Washington Post, April 19, 2023, https://www.washingtonpost.com/technology/interactive/2023/ai-chatbot-learning/.
[50] Michael M. Grynbaum and Ryan Mac, “The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work,” The New York Times, December 27, 2023, https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html.
[51] Beth Simone Noveck, “Better Government Tech Is Possible,” Wired, June 20, 2023, https://www.wired.com/story/government-technology-artificial-intelligence.
[52] InnovateUS, https://innovate-us.org, accessed January 7, 2024.
[53] Digitalakademie, https://www.digitalakademie.bund.de/, accessed January 7, 2024.