
New Paper: "Against the Uncritical Adoption of 'AI' Technologies in Academia," by Olivia Guest, Marcela Suarez, Iris van Rooij, and colleagues (2025)

Question Asked:

What happens to higher education, critical thinking, and academic integrity when universities adopt AI technologies without scrutiny? And is rejection a legitimate institutional response?

Why This Paper Matters

Universities are not passive observers of the AI moment. They are active sites of adoption, bundling chatbots into learning management systems, normalizing AI writing tools, and, under pressure from administrators, treating AI integration as a sign of institutional relevance. 

At a recent event hosted by the Santa Fe Institute, scholars presented AI-driven tools designed to support research: content safety systems built on commercial APIs, RAG-based knowledge databases to capture institutional memory, live-event augmentation tools, and software environments promising guardrails for high-stakes decision-making. The emphasis was on usability, transparency, and human–AI collaboration.

The discussion quickly shifted from tool design to procurement reality. Participants described decisions to adopt major commercial platforms from OpenAI, Microsoft, and Anthropic, often justified by integration convenience, existing vendor ecosystems, or perceived institutional “protection.” One administrator noted that because their university already ran on Microsoft infrastructure, choosing Microsoft’s AI tools felt inevitable.

The paper Against the Uncritical Adoption of ‘AI’ Technologies in Academia was raised in that context. Seventeen scholars across cognitive science, pedagogy, gender studies, computer science, and economics are pushing back, not against experimentation, but against unexamined dependency.

For governments and public institutions, not just universities, the argument is directly relevant. If higher education is struggling to maintain critical distance from commercial AI platforms, public agencies face the same pressure with less insulation.

The Argument

Rather than presenting original empirical research, Guest et al. build a normative case through several interlocking lines of reasoning.

First, they argue that the term "artificial intelligence" is itself a marketing phrase, deliberately imprecise, designed to attract funding and deflect scrutiny, and that uncritical adoption of the terminology imports the hype along with it. The hype cycles of the 1950s, 1960s, and 1980s followed the same pattern. Each time, institutions that restructured around the technology were left holding the costs when the bubble deflated.

Their Euler diagram of overlapping AI terminology (LLMs, ANNs, generative models, chatbots) is a useful illustration of just how much conceptual fog surrounds routine procurement decisions.

Second, they apply the Netherlands Code of Conduct for Research Integrity, with its five principles of honesty, scrupulousness, transparency, independence, and responsibility, directly to current AI products. Their conclusion is that most commercially deployed AI systems fail on all five counts. Closed-source models cannot satisfy transparency. Vendor entanglement undermines independence. Environmental and labor costs violate responsibility. This isn't a new regulatory framework they're proposing; it's an argument that existing standards already prohibit much of what universities are normalizing.

Third, they push back on several rhetorical moves that have become commonplace in institutional AI discourse: that students are all cheating anyway, that AI is just a calculator, and that teaching AI use prepares students for the job market. Each of these, they argue, serves industry interests more than educational ones.

Key Gaps

The paper is most convincing as a diagnosis. As a prescription, it is thin.

For a 17-author paper spanning pedagogy, cognitive science, and computer science, the constructive vision is surprisingly sparse. "Critical AI Literacy" is invoked as an alternative but never fully developed. 

Historical examples of successful technology rejection, such as Amsterdam curbing car dominance after rejecting the idea that cycling through the Dutch capital should be deadly, and the Montreal Protocol, under which governments banned chlorofluorocarbons in refrigerators to halt ozone depletion, gesture toward the possibility but do not add up to an institutional strategy.

The paper tells us what to stop doing with considerably more rigor than it tells us what to do instead. This is not a minor gap. 

Institutions facing real procurement decisions, real vendor contracts, and real political pressure to "embrace the future" need more than a principled refusal. They need alternative models, practical frameworks, and concrete examples of what democratic AI governance actually looks like in operation.

Reflection for Democracy

Guest et al. are asking the right question. When AI becomes infrastructure inside schools, hospitals, courts, and government agencies, the decisions made now about procurement, dependency, and oversight will shape public institutions for a generation. Treating those decisions as technical or administrative rather than political is itself a choice, and not a neutral one.

But the paper's prescription doesn't match the scale of its diagnosis. And for governments in particular, the option of principled refusal is simply not available in the way it might be for an individual academic choosing not to use a chatbot.

Public institutions are already behind. Citizens expect faster services, better responsiveness, and more accessible information. The efficiency gains from AI tools in consultation analysis, service delivery, document processing, and procurement review are real, documented, and increasingly difficult to ignore. A government that refuses to engage with these tools on principle is not taking a brave stand; it is abdicating its responsibility to the people it serves. 

The question was never whether to adopt AI. It was always who controls it, on whose terms, and with what accountability.

That is where the critique in Guest et al. lands hardest and is most useful for practitioners. The structural risks they identify, such as vendor dependency, opaque systems, eroded institutional capacity, and the slow transfer of public functions to private platforms, are not hypothetical. They are already visible in enterprise-wide state contracts with OpenAI and Anthropic, in procurement decisions driven by loss-leader pricing, and in agencies building workflows on top of closed systems they cannot audit, modify, or move away from.

That gap motivates a set of questions we think deserve serious, practical attention:

  • What does it actually mean to treat AI as public infrastructure rather than a vendor relationship, and what institutional arrangements make that more than a slogan? 
  • When Maryland signs a government-wide contract with Anthropic or Massachusetts deploys ChatGPT across its executive branch, what leverage did they leave on the table, and what risks are now baked in? 
  • Is sovereignty through ownership realistic, or is sovereignty through interoperability and commodification the more viable path for most governments? 
  • At which layer of the stack — compute, models, orchestration, public digital services, governance standards — does public investment generate the most durable value? 
  • What can U.S. states and cities actually learn from Spain's ALIA project, AI Sweden, or Switzerland's federated data governance approach? 
  • What does responsible pragmatism look like: aggressive enough to deliver real service improvements, disciplined enough to preserve long-term public control?

These are the questions we will be working through this spring. Not because the critics are wrong, but because critique alone doesn't build the institutional capacity governments need. The window to shape this infrastructure, rather than simply inherit it, is closing faster than most public leaders realize.
