Artificial Intelligence Act
The Artificial Intelligence Act is a European Union regulation concerning artificial intelligence. It establishes a common regulatory and legal framework for AI within the European Union. The regulation entered into force on 1 August 2024, with provisions coming into operation gradually over the following 6 to 36 months.
It covers most AI systems across a wide range of sectors, with exemptions for AI systems used solely for military, national security, research, or non-professional purposes. As a form of product regulation, it does not create individual rights; instead, it places duties on AI providers and on organisations that use AI in a professional context.
The Act classifies non-exempt AI applications by their risk of causing harm. There are four levels – unacceptable, high, limited, minimal – plus an additional category for general-purpose AI.
- Applications with unacceptable risks are banned.
- High-risk applications must comply with security, transparency and quality obligations, and undergo conformity assessments.
- Limited-risk applications only have transparency obligations.
- Minimal-risk applications are not regulated.
The Act also creates a European Artificial Intelligence Board to promote national cooperation and ensure compliance with the regulation. Like the EU's General Data Protection Regulation, the Act can apply extraterritorially to providers from outside the EU if they have users within the EU.
Proposed by the European Commission on 21 April 2021, it passed the European Parliament on 13 March 2024, and was unanimously approved by the EU Council on 21 May 2024. The draft Act was revised to address the rise in popularity of generative artificial intelligence systems, such as ChatGPT, whose general-purpose capabilities did not fit the main framework.
Provisions
Risk categories
There are different risk categories depending on the type of application, with a specific category dedicated to general-purpose generative AI:
- Unacceptable risk – AI applications in this category are banned, except for specific exemptions. When no exemption applies, this includes AI applications that manipulate human behaviour, those that use real-time remote biometric identification in public spaces, and those used for social scoring.
- High-risk – AI applications that are expected to pose significant threats to health, safety, or the fundamental rights of persons, notably AI systems used in health, education, recruitment, critical infrastructure management, law enforcement or justice. They are subject to quality, transparency, human oversight and safety obligations, and in some cases require a Fundamental Rights Impact Assessment before deployment. A Fundamental Rights Impact Assessment is an ex ante review to identify and mitigate potential impacts on fundamental rights before an AI system is deployed. Earlier work on algorithmic impact assessments has suggested that such tools should identify which individuals and communities are affected by an automated system, describe possible harms, and provide a basis for public and institutional scrutiny of its use. High-risk systems must be evaluated both before they are placed on the market and throughout their life cycle. The list of high-risk applications can be expanded over time without the need to modify the AI Act itself. Citizens also have a right to submit complaints about AI systems and to receive explanations of decisions made by high-risk AI that affect their rights.
- Limited risk – AI systems in this category have transparency obligations, ensuring users are informed that they are interacting with an AI system and allowing them to make informed choices. This category includes, for example, AI applications that make it possible to generate or manipulate images, sound, or videos.
- Minimal risk – This category includes, for example, AI systems used for video games or spam filters. Most AI applications are expected to fall into this category. These systems are not regulated, and Member States cannot impose additional regulations due to maximum harmonisation rules. Existing national laws regarding the design or use of such systems are overridden. However, a voluntary code of conduct is suggested.
Added in 2023, the general-purpose AI category includes foundation models that can perform a wide range of tasks. If a model's weights and design are made open source, developers must publish a training data summary and a copyright policy; closed-source models must meet broader transparency requirements. High-impact models that pose systemic risks must undergo extra evaluation. A General-Purpose AI Code of Practice, published on 10 July 2025, outlines three main chapters on transparency, copyright, and safety and security to help providers demonstrate compliance with the AI Act. Participation in the code is voluntary.
Beyond these basic transparency duties, the Act sets a common list of obligations for providers of general-purpose AI models. They must publish a summary of the training data, adopt a policy to comply with copyright law, and provide technical documentation to downstream providers and supervisory authorities. Models that are designated as posing systemic risk must also carry out model evaluations and adversarial testing, assess and mitigate risks such as bias and security failures, report serious incidents, and ensure an adequate level of cybersecurity.
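The tiered structure described above amounts to a lookup from risk category to the headline duties a provider faces. The following Python sketch is purely illustrative and is not part of the Act: the RiskCategory names, the obligation strings, and the obligations_for helper are hypothetical simplifications of the categories and duties summarised in this section, of the kind a compliance team might use as a starting checklist.

```python
from enum import Enum

# Hypothetical, simplified model of the AI Act's risk tiers as summarised above.
# Category names and obligation lists are illustrative, not legal definitions.
class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"
    GENERAL_PURPOSE = "general-purpose"

# Rough mapping from each tier to the headline obligations described in the Act.
OBLIGATIONS = {
    RiskCategory.UNACCEPTABLE: [
        "prohibited (subject to narrow exemptions)",
    ],
    RiskCategory.HIGH: [
        "quality, transparency, human oversight and safety obligations",
        "conformity assessment before placing on the market",
        "evaluation throughout the life cycle",
        "fundamental rights impact assessment in some cases",
    ],
    RiskCategory.LIMITED: [
        "transparency obligations (users informed they interact with AI)",
    ],
    RiskCategory.MINIMAL: [
        "no obligations; voluntary code of conduct suggested",
    ],
    RiskCategory.GENERAL_PURPOSE: [
        "training data summary and copyright compliance policy",
        "technical documentation for downstream providers and authorities",
        "extra evaluation and risk mitigation if designated as systemic risk",
    ],
}

def obligations_for(category: RiskCategory) -> list[str]:
    """Return the illustrative obligation list for a given risk tier."""
    return OBLIGATIONS[category]

if __name__ == "__main__":
    for duty in obligations_for(RiskCategory.HIGH):
        print(duty)
```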
Exemptions
Articles 2.3 and 2.6 exempt AI systems used for military or national security purposes, or for pure scientific research and development, from the AI Act. In particular, the Regulation does not apply where AI systems are used exclusively for military, defence or national security purposes, or to systems developed and put into service solely for scientific research and development; if such systems are later used for other purposes, the Act applies. These activities remain governed by separate EU and national rules on defence, security, and intelligence rather than by the AI Act itself.
Article 5.2 bans algorithmic video surveillance of people only if it is conducted in real time. Exceptions allowing real-time algorithmic video surveillance include policing aims such as "a real and present or real and foreseeable threat of terrorist attack".
Recital 31 of the Act states that it aims to prohibit "AI systems providing social scoring of natural persons by public or private actors", but allows for "lawful evaluation practices of natural persons that are carried out for a specific purpose in accordance with Union and national law." La Quadrature du Net interprets this exemption as permitting sector-specific social scoring systems, such as the suspicion score used by the French family payments agency Caisse d'allocations familiales.
Governance
The AI Act establishes various new bodies in Article 64 and the following articles. These bodies are tasked with implementing and enforcing the Act. The approach combines EU-level coordination with national implementation, involving both public authorities and private sector participation.
The following new bodies will be established:
- AI Office: attached to the European Commission, this authority will coordinate the implementation of the AI Act in all Member States and oversee the compliance of general-purpose AI providers. It can also request information or open investigations when serious issues are suspected.
- European Artificial Intelligence Board: composed of one representative from each Member State, the Board will advise and assist the Commission and Member States to facilitate the consistent and effective application of the AI Act. Its tasks include gathering and sharing technical and regulatory expertise, providing recommendations, written opinions, and other advice.
- Advisory Forum: established to advise and provide technical expertise to the Board and the Commission, this forum will represent a balanced selection of stakeholders, including industry, start-ups, small and medium-sized enterprises, civil society, and academia, ensuring that a broad spectrum of opinions is represented during the implementation and application process.
- Scientific Panel of Independent Experts: this panel will provide technical advice and input to the AI Office and national authorities, enforce rules for general-purpose AI models, and ensure that the rules and implementations of the AI Act correspond to the latest scientific findings.
Once harmonised standards for the AI Act are published in the Official Journal, compliant products are presumed to conform with the regulation; these standards are being drafted by CEN/CENELEC Joint Technical Committee 21.