How Italy Applies the European AI Regulation and the Changes It Introduces for Citizens, Public Administration, and Businesses

Abstract: Law No. 132 of 29 September 2025 represents the implementation of the AI Act (EU Regulation 2024/1689), marking a decisive step in the regulation of artificial intelligence in Italy. The law defines the organizational and sanctioning framework needed to operationalize the European risk-based approach, identifying the competent authorities, establishing specific rules for sensitive sectors—such as public administration, public security, justice, and employment—and strengthening the protection of fundamental rights. The law also introduces a new criminal offence (“Unlawful dissemination of content generated or altered through artificial intelligence systems”), designed to counter the harmful spread of deepfake content. This contribution analyses the main provisions of the law, its relationship with the European framework, its operational implications for institutions and practitioners, and the emerging interpretative challenges, with particular focus on balancing technological innovation, security, and individual rights.
Keywords: #ArtificialIntelligence #AI #AILaw #AIAct #Law1322025 #AINorms #DeepFake #EuropeanAIRegulation #AIImplementation #AIRegulation #AIandLaw #AIEthics #RiskBasedApproach #ProhibitedAI #HighRiskAI #LimitedRiskAI #MinimalRiskAI #FundamentalRights #Privacy #HumanDignity #NonDiscrimination #DataProtection #AITransparency #HumanOversight #SergioBedessi #ethicasocietas #ethicasocietasreview #scientificjournal #ethicasocietasupli
Sergio Bedessi (b. 1958) holds degrees in architecture, political science, and methodology and empirical research in the social sciences. He worked for many years in public administration, initially as a public works technician and later as chief of local police in various departments. He is a registered journalist and the author of over 30 books and hundreds of articles; among other works, he published ‘Artificial Intelligence and Social Phenomena: Forecasting with Neural Networks’ for Maggioli. He currently serves as President of CEDUS – the Urban Security and Local Police Documentation Center, and teaches in numerous training programmes, including university courses.
THE NEW ITALIAN LAW ON ARTIFICIAL INTELLIGENCE AND ITS CONTENTS
Law No. 132 of 29 September 2025, “Provisions and Delegated Powers to the Government on Artificial Intelligence,” constitutes the Italian implementing legislation—and not transposition, as it is a self-executing regulation—of EU Regulation 2024/1689 of the European Parliament and of the Council, known as the AI Act.
Law 132/2025 represents a fundamental step toward making the European regulatory framework on artificial intelligence (AI) operational in Italy, thereby balancing technological innovation with the protection of citizens’ rights.
The new law introduces specific provisions for the implementation of the European AI rules within the Italian legal system, particularly with regard to:
- identifying the national authorities responsible for the implementation of the European regulation (ACN – National Cybersecurity Agency; AGID – Agency for Digital Italy);
- introducing a specific system of sanctions based on the rules of the European regulation, and identifying the authority responsible for supervision and enforcement (ACN).
Law 132/2025 pays particular attention to certain specific sectors considered sensitive in relation to the use of AI technologies:
- public administration, establishing rules for the use of AI in the provision of public services;
- security and public order, imposing limitations on the use of biometric systems;
- the justice sector, establishing a principle of transparency when AI technologies are used to support judicial decision-making;
- the labour sector, ensuring that workers are protected from automated hiring and personnel-management systems based on AI.
The Italian law also follows—consistent with the AI Act—the principles of protecting fundamental rights, with particular emphasis on safeguarding:
- human dignity;
- the right to non-discrimination;
- personal data and, more generally, privacy,
while requiring continuous human oversight of automated systems.
From the perspective of sanctions, the law introduces a system of administrative monetary penalties for violations of the European AI Act, whose amounts are already set by the Regulation itself:
- fines of up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations, i.e., the use of prohibited systems;
- fines of up to €15 million or 3% of turnover for violations of obligations concerning high-risk systems;
- fines of up to €7.5 million or 1% of turnover for other violations.
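As an illustration only, the mechanics of these ceilings can be sketched as a simple calculation. The function name and structure below are hypothetical, the figures are those of the tiers above, and the sketch applies the "higher of the two" rule for undertakings; the Regulation also contains special rules (for example for SMEs) that are not modelled here.

```python
def penalty_cap(tier: str, global_turnover_eur: float) -> float:
    """Illustrative ceiling on administrative fines by violation tier.

    For an undertaking, the applicable ceiling is the higher of the
    flat amount and the percentage of global annual turnover.
    """
    tiers = {
        "prohibited": (35_000_000, 0.07),  # prohibited AI practices
        "high_risk": (15_000_000, 0.03),   # high-risk obligations
        "other": (7_500_000, 0.01),        # other violations
    }
    flat, pct = tiers[tier]
    return max(flat, pct * global_turnover_eur)

# Example: for €1 billion in global turnover, 7% (€70 million)
# exceeds the €35 million flat amount, so the percentage applies.
print(penalty_cap("prohibited", 1_000_000_000))
```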
It should be noted that Law 132/2025 is based on the European risk-based approach, and therefore provides the following classification of AI systems:
- prohibited AI, for systems that manipulate human behaviour or exploit human vulnerability;
- high-risk AI, subject to stringent conformity and documentation requirements;
- limited-risk AI, subject to transparency obligations concerning system functioning;
- minimal-risk AI, whose use does not entail specific obligations.
WHAT IS ARTIFICIAL INTELLIGENCE IN SIMPLE TERMS?
It is useful to provide some definitions before addressing the specific topic.
In simple terms, “artificial intelligence” can be defined as the discipline that deals essentially with “intelligent systems”—systems that exhibit intelligent behaviour, meaning the ability to autonomously apply certain knowledge to solve problems and subsequently increase such knowledge through self-learning.
One can speak of “artificial intelligence” only when the system autonomously learns from its own experience and not when human intervention is still required—even though, in practice, the distinction is somewhat less clear-cut.
In this sense, those who refer simplistically to “algorithms” when discussing artificial intelligence are mistaken.
In computer science, an “algorithm” is a set of instructions to be applied in order to carry out a computation or solve a problem; this is the opposite of what an AI system does, since it learns from its own experience rather than producing results from a fixed set of instructions.
AI software, of any type, is based on artificial neural networks (ANNs), which can be considered an information-processing system modelled on the human or animal brain.
ANNs consist of many simple processors (artificial neurons), each with a small amount of local memory; the neuron is the basic computational element, and knowledge is distributed throughout the network.
Since no commercially available computers currently use hardware made of artificial neural networks, traditional computers (including smartphones) must be used, running software that simulates a neural network and, to do so, employs algorithmic processing techniques.
High-level processing is, however, as far removed as possible from traditional algorithmic computation, because artificial neural networks learn, like natural ones, from experience.
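The simulation described above can be illustrated with a minimal sketch: a single artificial neuron computed in ordinary sequential code, where knowledge resides in numeric weights that, in a real network, would be adjusted through training rather than written by the programmer. All names and values here are illustrative.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron, simulated with ordinary algorithmic code:
    a weighted sum of the inputs passed through a sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid squashes the output into (0, 1)

# Illustrative weights: in a trained network these values are not fixed
# in advance but learned from experience (e.g. via backpropagation).
out = neuron([0.5, 0.2], [0.8, -0.4], bias=0.1)
print(out)
```

The point of the sketch is precisely the one made in the text: the simulation runs as an algorithm on conventional hardware, but the behaviour of the network is determined by learned weights, not by the instructions themselves.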
THE ITALIAN AI LAW: NEW CRIMINAL OFFENCES
The Italian AI law affects the Local Police in several ways; one of the most relevant concerns the new criminal offences introduced.
Specifically, Article 26, paragraph 1, letter c) of Law 132/2025 introduces into the Italian Criminal Code the new Article 612-quater, establishing an autonomous criminal offence (“Unlawful dissemination of content generated or altered through artificial intelligence systems”) aimed at combating the unlawful spread of deepfake content:
“Anyone who causes unjust harm to another person by transferring, publishing, or otherwise disseminating, without their consent, images, videos, or voices falsified or altered through the use of artificial intelligence systems and capable of misleading as to their authenticity, shall be punished with imprisonment from one to five years. The offence is prosecutable upon complaint of the injured party. However, prosecution ex officio shall apply if the act is connected with another offence subject to mandatory prosecution, or if it is committed against a person who is incapable due to age or infirmity, or against a public authority because of the function exercised.”
Unlike the heading, which uses the terms “generated or altered”, the provision uses “falsified or altered”, raising doubts as to which behaviours are actually targeted.
While “generated”—requesting software to create an image, video, or any content—can be considered synonymous with “falsified”, “altered” presupposes modifying something that already existed, a behaviour that is potentially less severe but perhaps more insidious.
In any event, the offence lies not in the generation, falsification, or alteration itself, but in the transfer, publication, or dissemination without consent of such material.
The legislator’s intent to introduce a provision addressing behaviours otherwise not covered by the Criminal Code is understandable; the severity of the punishment—imprisonment from one to five years, which excludes the possibility of settlement—is less so.
It is noteworthy that imprisonment from one to five years is also imposed for offences of far greater social gravity, such as incitement or assistance to suicide (when the suicide does not occur but a serious or very serious injury results), solicitation of minors, and negligent homicide.
Close attention must therefore be paid to the constitutive elements of the new offence (conduct, object, mental element, specific requirements), particularly:
- the capacity of the content to deceive, and
- the existence of unjust harm.
Regarding unjust harm, it is legitimate to ask whether moral harm alone is sufficient or whether economic harm must also be demonstrated; similarly, assessing the capacity to deceive poses challenges, as establishing an evaluation standard based on an “average user” is not straightforward.
Issues also arise regarding the boundary between satire—where images or audio/video are often used—and criminal behaviour, especially considering the identity of the person engaging in the conduct (a private citizen, or satirical cartoonist for a newspaper, etc.).
The offence is prosecutable upon complaint by the injured party, but prosecution ex officio applies in three cases:
- if the act is connected with another offence requiring mandatory prosecution;
- if it is committed against a person incapable due to age or infirmity;
- if it is committed against a public authority because of the functions exercised, a broad concept likely to raise interpretative difficulties.
Scope of application
It is evident that the law aims to address:
- pornographic deepfakes;
- fake news produced through deepfakes of public figures;
- voice manipulation for fraud or extortion;
- fraudulent misuse of another person’s identity in digital contexts.
The offence is closely related to other criminal provisions (Articles 612-bis, 612-ter, 595, and 494 of the Criminal Code).
Investigative activities will be complex and highly specialized: beyond traditional investigative tools, technical-IT skills will be essential to enable rapid action in collecting digital evidence, which is by nature extremely volatile.
Lastly, urgent measures within police activities must not be overlooked (immediate removal of content, personal precautionary measures).

Ethica Societas is a free, non-profit review published by a non-profit social cooperative.
Copyright Ethica Societas, Human&Social Science Review © 2025 by Ethica Societas UPLI onlus.
ISSN 2785-602X. Licensed under CC BY-NC 4.0


