At the European Cities conference in Vienna in November, the theme was “digital sovereignty,” shorthand for Europe’s growing anxiety about how to stave off American AI hegemony. The mood felt prescient. Only days later, the White House unveiled its sweeping new Genesis AI program whose explicit aim is to “win the race for global technology dominance.”
Against this backdrop, Italy has become the first country in Europe to enact a national AI law (Law no. 132/2025) that tries to go beyond the mere implementation of the EU AI Act (Regulation EU 2024/1689), with the declared ambition of introducing a suite of sovereignty-focused protections to keep Italy's data, public institutions, workers, and democratic processes under Italian control.
The law refers to the EU AI Act in Article 1. It establishes general principles that promote the human-centric, transparent, responsible, and safe use of AI, in line with fundamental rights. The new regulation also expressly recalls the principles of transparency, proportionality, security, protection of personal data, confidentiality, accuracy, non-discrimination, gender equality, and sustainability.
Unlike the U.S. approach, which focuses on corporate partnerships and rolling back safety and ethics regulations, Italy’s law seeks to establish new democratic guardrails for trust and for how AI may be developed and deployed in specific sectors.
At the same time, the Italian law lacks vision and ambition. There is no clear articulation of goals, such as solving real-world problems like literacy or public health, training workers and children, advancing scientific research, improving governance, or deepening democracy. There is no scheme to lead in the development of public AI models and no investment in compute or open data. The budget, relative to the U.S. or China, is paltry.
Italy may not be able to “win the race” against the U.S. or China with its strategy. Still, it is trying to change the rules of the race by asserting a European philosophy of digital humanism over the logic of technological supremacy.
Three provisions illustrate this shift:
- First, Italy requires that artificial intelligence may support administrative procedures but never replace the authority of the human official, who remains solely responsible for decision-making; in judicial contexts, decision-making remains reserved to judges.
- Second, it prohibits children under 14 from accessing AI systems without parental consent.
- Third, it creates a national Worker Observatory to guide a long-term strategy for worker-centered AI adoption.
If Genesis represents the American model—centralized, closed, and aligned with large corporate actors—Italy’s law, following the EU Act, articulates a distinctly European counter-model: distributed, deeply human-centered, and grounded in government oversight rather than technological dominance.
What follows are notable features of Italy’s approach that will be worth exploring as they are rolled out and implemented.
Features of Italy’s AI Law
1. Worker-Focused AI
The law imposes a specific obligation: every worker must be informed when AI tools are used to organize, evaluate, or manage their work.
Italy creates an Osservatorio sull'adozione dell'AI nel mondo del lavoro (Observatory on the Adoption of AI in the World of Work), a national body housed in the Ministry of Labor with authority to map the impact of AI on employment, identify sectors at risk of disruption, recommend regulatory and training interventions, and shape a nationwide strategy for worker-centered AI adoption.
This goes beyond the EU AI Act, which includes transparency requirements but lacks a national labor-oversight mechanism. Italy is institutionalizing continuous monitoring of AI's effect on jobs. If it works (even though the unions have declared themselves dissatisfied, citing the lack of protective measures against automation), it could become a model for how governments track, anticipate, and prepare for AI-driven economic change. If it doesn't, it risks becoming symbolic. But the ambition is unmistakable: workers are not an afterthought in Italy's AI strategy—they're the starting point.
2. A New Criminal Offense for Harmful Deepfakes
Italy introduces a standalone crime for the unlawful dissemination of AI-generated or AI-altered images, videos, or audio, punishable by one to five years in prison. The key is not the manipulation itself, but the act of publishing or distributing content that is likely to deceive and cause unjust harm.
Unlike most countries, which patch deepfake harms into existing harassment or fraud statutes, Italy treats the malicious use of generative AI as its own category of wrongdoing. This turns deepfake abuse from a regulatory problem into a criminal justice priority. It's unclear whether the offense will sweep in categories of speech and expression that should be protected, or how it will be enforced, but it sends a deterrent signal.
3. Criminal Penalties for Unlawful Data Scraping
The law criminalizes violations of text-and-data mining (TDM) rules when conducted through AI systems. This is a serious escalation: Italy is the first country to attach criminal liability to data extraction for AI training when it violates copyright owners’ rights or opt-out provisions.
In practice, this could be one of the law’s sharpest sovereignty tools. It aims squarely at large foreign AI companies whose business models depend on scraping enormous volumes of data—much of it without meaningful consent. Italy is attempting to close the training-data frontier that U.S. companies have freely exploited.
4. Mandatory Parental Consent for AI Use Under 14
Children under 14 cannot access AI systems without parental consent. Those 14–18 may consent independently only if information is provided in language that is “clear and comprehensible.”
No other European country has applied an age-specific restriction to AI systems (as opposed to "digital services").
This provision is both ambitious and ambiguous: AI is already embedded in phones, voice assistants, games, and homework apps. Enforcement will be challenging, but the legal intent is bold: protect minors first, figure out implementation second.
5. Lawyers in the Loop
Regulated professions—such as lawyers and accountants—must also disclose to clients when they use AI systems in the course of their work. The aim mirrors that of other sectors: to ensure transparency and to reinforce that final decisions and legal responsibility remain with the human professional.
The provision is fundamentally about protecting the fiduciary relationship; clients must be able to trust that the person advising them is accountable for the outcome. Although professional bodies like the American Bar Association have issued guidance encouraging such disclosure, no comparable requirement has been formally codified in U.S. law.
6. Humans Are Always Responsible in Public Administration Decision-Making
AI should be used only as a tool to support decisions, never to replace human judgment. The human official remains fully and legally responsible for every administrative decision. Furthermore, the law expressly establishes that, in judicial activity, decision-making remains the exclusive prerogative of judges and therefore cannot be delegated to AI.
This is a remarkable departure from prevailing practice in the U.S., the UK, and even much of Europe, where automated eligibility systems and algorithmic decision-support routinely drift into de facto automation. Italy’s law draws a bright line: the state must always remain accountable through a human face. Automation in government is not a technical choice; it is a democratic one.
7. Doctors Decide
Patients have the right to be informed about the use of artificial intelligence technologies. Artificial intelligence systems in healthcare can support prevention, diagnosis, treatment, and therapeutic decision-making, but the final call is always left to doctors.
8. Human Contribution Standard for Copyright Protection
Italy amends the 1941 Copyright Law to explicitly state that a work created with AI assistance is copyrightable only if it reflects a substantial human intellectual contribution. How the relevance of that human contribution will be assessed in practice, however, remains to be seen. This aligns with the human-authorship doctrine emerging in U.S. policy.
9. Sector-Specific AI Safeguards (Healthcare, Labor, Justice, Public Administration)
Where the EU AI Act regulates by risk, Italy regulates by sector. In areas deemed foundational to democratic life—healthcare, labor, justice, and public administration—Italy imposes additional rules regardless of risk classification.
This is a philosophical stance: some parts of public and civic life are too important to leave to generic risk frameworks or to the discretion of private-sector developers.
The law reads not as a technocratic regulation but as an affirmation that AI must be adapted to democratic institutions, not the other way around.
10. Two Government Regulators, Neither Independent
The Italian AI Law designates the Agenzia per l'Italia Digitale as the notifying authority, responsible for promoting AI development and for the assessment, accreditation, and monitoring of conformity assessment bodies, and the Agenzia per la Cybersicurezza Nazionale as the supervisory authority, responsible for oversight, including inspections and sanctions.
This choice has attracted intense criticism. A key concern is that the authorities responsible for AI oversight are under government control rather than being independent administrative authorities, as in neighboring Spain.
Why Italy Matters
Even as Italy itself may be undergoing democratic backsliding, its AI law breaks decisively with the American arms-race mentality and advances a more critical approach to AI. The statute does not treat AI as a force to be accelerated at all costs.
Whether Italy has struck the right balance between innovation and protection remains an open question.
Much will hinge on how the law is implemented, interpreted, and enforced in the months ahead. Without a significant budget (only €1 billion, to be sourced from existing funds) and without ambitious goals, the law doesn't go as far as it could.
But Italy's law tries to be, at its core, an argument that in a democratic society, sovereignty cannot be reduced to picking domestic corporate winners. It must mean advancing our shared public values above all else.