The EU AI Act establishes a far-reaching regulatory framework, defining who must comply and under what circumstances. Understanding Article 2 is crucial for entities both within and outside the European Union, from providers and deployers to affected individuals.
Formally adopted by the Council on 21 May 2024, the European Union Artificial Intelligence Act (EU AI Act) entered into force on 1 August 2024, initiating a phased implementation of its comprehensive regulatory framework. A cornerstone of this legislation is Article 2, which delineates its scope of application, addressing the fundamental question of who is subject to its provisions and in which specific situations. This article provides a structured explanation of the roles of the various actors and the conditions under which the Act applies or is explicitly excluded.
Defining Actors: Providers, Deployers, and Their Obligations
The EU AI Act identifies a diverse array of stakeholders within the AI ecosystem, each bearing distinct responsibilities. Understanding these roles is paramount for compliance, particularly given the Act's extraterritorial reach.
Providers of AI Systems and General-Purpose AI Models
Providers are defined as entities that develop AI systems or general-purpose AI models and place them on the EU market, or put them into service, under their own name or trademark. This category includes major technology companies such as OpenAI, Google, and Anthropic, alongside numerous European AI startups. Providers bear the heaviest regulatory burden under the Act.
Providers must ensure:
- AI risk classification in accordance with the Act's provisions.
- Compliance with stringent safety requirements.
- Fulfillment of transparency obligations to users.
- Maintenance of comprehensive documentation and technical records.
- Completion of necessary conformity assessments.
A critical legal principle established here is the extraterritorial application: the EU AI Act applies even if the provider company is situated outside the EU, provided its system is placed on the Union market or the output it produces is used in the EU. For instance, a US company selling an AI recruitment tool to a French enterprise must adhere to the EU AI Act.
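The extraterritoriality principle can be expressed as a simple decision rule. The sketch below is purely illustrative (it is not legal analysis, and the field names are hypothetical simplifications of Article 2's scope criteria): the Act can reach a non-EU provider whenever the system is placed on the Union market or its output is used in the EU.

```python
from dataclasses import dataclass


@dataclass
class AISystemContext:
    """Facts relevant to a rough Article 2 scope check (illustrative fields only)."""
    provider_in_eu: bool       # is the provider established in the EU?
    placed_on_eu_market: bool  # is the system marketed in the EU?
    output_used_in_eu: bool    # is the system's output used in the EU?


def act_applies(ctx: AISystemContext) -> bool:
    """Sketch of the extraterritoriality rule: EU-established providers are
    always in scope; non-EU providers are in scope if the system reaches the
    EU market or its output is used in the Union."""
    if ctx.provider_in_eu:
        return True
    return ctx.placed_on_eu_market or ctx.output_used_in_eu


# Example from the text: a US company selling an AI recruitment tool
# to a French enterprise is in scope despite its non-EU seat.
us_vendor = AISystemContext(provider_in_eu=False,
                            placed_on_eu_market=True,
                            output_used_in_eu=True)
print(act_applies(us_vendor))  # True
```

A real scope assessment involves far more nuance (roles, exemptions, product law overlap), but the core market-or-output trigger follows this shape.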
Deployers of AI Systems
Deployers are entities that utilize AI systems in their operational processes. This broad category includes diverse organizations such as law firms employing AI legal research tools, banks leveraging AI for credit scoring, and companies integrating AI into hiring decisions. Their obligations, while distinct from providers, are equally critical for ensuring responsible AI deployment.
Deployers are mandated to:
- Utilize AI systems strictly according to the provider's instructions.
- Ensure robust human oversight mechanisms are in place.
- Continuously monitor system performance for anomalies or biases.
- Report incidents promptly to relevant authorities.
- Actively protect fundamental rights of individuals affected by the AI system.
A law firm using AI for contract review, for example, assumes the role and responsibilities of a deployer under this framework.
Importers and Distributors
The Act also delineates roles for intermediaries in the supply chain. Importers are entities that bring AI systems from outside the EU into the EU market. Their responsibilities include verifying compliance, ensuring CE conformity marking, and checking all requisite documentation. Similarly, distributors are those who make AI systems available in the market without modifying them, such as software marketplaces or resellers. They must ensure compliance documentation is present, verify CE marking, and cooperate with authorities.
Product Manufacturers Integrating AI
Companies that embed AI into their own products, such as car manufacturers utilizing AI for autonomous driving or medical device companies incorporating diagnostic AI, are also brought under the Act's purview. These entities effectively become AI providers for the integrated AI system, bearing legal responsibility for its compliance when placing the product on the market under their brand.
Authorised Representatives and Affected Persons
For non-EU companies, the appointment of authorised representatives based in the EU is mandatory. These representatives serve as a crucial contact point with regulators, maintain technical documentation, and facilitate cooperation with market surveillance authorities, mirroring requirements found in GDPR Article 27. Furthermore, the Act explicitly recognizes affected persons — individuals in the EU whose rights may be impacted by AI systems, such as job applicants or those subject to biometric identification. These individuals gain rights to transparency, explanation, and redress mechanisms.
Specific Exemptions and Parallel Regulations
While the EU AI Act casts a wide net, Article 2 also specifies several important exemptions and clarifies its interaction with existing legal frameworks, preventing regulatory duplication and ensuring consistency.
National Security, Military, and Defence Exclusions
The Act explicitly does not apply to AI systems used exclusively for military purposes, defence systems, or national security operations. This exemption covers applications such as AI in military drones, intelligence analysis, and cyber defence, acknowledging that these areas remain under the sovereignty of individual Member States. Similarly, AI used by foreign public authorities or international organizations for law enforcement cooperation with the EU is excluded, provided adequate safeguards are in place.
Scientific Research and Development Exemption
AI systems developed solely for research purposes are exempt from the Act's provisions. This includes academic research models, experimental AI prototypes, and laboratory AI tools. However, this exemption is conditional: if such a system is subsequently commercialized or deployed for practical use, it then falls under the full scope of the regulation. This distinction encourages innovation while ensuring market-ready AI adheres to safety and ethical standards.
Interaction with Existing EU Law
The EU AI Act operates in concert with, rather than replacing, several established EU legal frameworks. For AI systems already regulated under existing EU product laws (e.g., medical devices, aviation systems), only specific parts of the AI Act apply, including Article 6 (risk classification) and enforcement provisions, thereby avoiding regulatory overlap. Furthermore, the Act does not override the liability rules of Regulation (EU) 2022/2065 — the Digital Services Act (DSA), ensuring intermediary liability rules and platform responsibility frameworks remain consistent.
Crucially, the Act also affirms that it does not replace existing privacy laws. This means AI systems processing personal data must comply with both the EU AI Act and the General Data Protection Regulation (GDPR 2016/679), along with the ePrivacy Directive and the Law Enforcement Data Protection Directive. This dual compliance underscores the EU's commitment to both AI safety and data protection. Similarly, the Act does not supersede existing consumer protection frameworks or product safety regulations, which continue to operate in parallel.
Limitations and Future Considerations
Beyond the primary scope, Article 2 also addresses several specific scenarios, clarifying where the Act's influence diminishes or where other regulations take precedence.
Personal Non-Professional Use and Worker Protections
Individuals using AI for purely personal, non-professional purposes are exempt. This allows for the use of tools like ChatGPT at home or AI image generation for personal projects without regulatory burden. However, any professional application of such tools remains subject to the Act. Furthermore, the Act acknowledges the potential for Member States to adopt stronger protections for workers against AI use in workplaces, particularly concerning AI monitoring, algorithmic management, or AI-based hiring tools. Collective agreements can also play a significant role in enhancing worker safeguards.
Open-Source AI Systems
Open-source AI models are generally exempt from the Act. This exemption aims to foster innovation and collaborative development within the open-source community. However, this general exemption has critical caveats: open-source models fall under the Act's purview if they are classified as high-risk (e.g., in biometric identification or employment decisions), or if they fall under Article 5 (prohibited AI practices) or Article 50 (transparency obligations). This nuanced approach balances innovation with necessary safeguards for critical applications.
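The caveats above amount to a conditional exemption, which can be summarized in a short sketch. This is an illustrative simplification (the parameter names are hypothetical, and a real analysis would turn on the precise definitions in Articles 5, 6, and 50): the open-source carve-out falls away as soon as any of the listed triggers applies.

```python
def open_source_exempt(is_open_source: bool,
                       high_risk: bool,
                       prohibited_practice: bool,
                       transparency_obligation: bool) -> bool:
    """Sketch of the open-source carve-out described above.

    high_risk              -- e.g. biometric identification, employment decisions
    prohibited_practice    -- caught by Article 5
    transparency_obligation -- caught by Article 50
    """
    if not is_open_source:
        return False  # no exemption via this route for closed models
    # The general exemption holds only if none of the caveats applies.
    return not (high_risk or prohibited_practice or transparency_obligation)


# A hobbyist open-source model with no high-risk use stays exempt;
# the same model repurposed for hiring decisions does not.
print(open_source_exempt(True, False, False, False))  # True
print(open_source_exempt(True, True, False, False))   # False
```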
Pre-Market Development Activities
Early-stage development activities, such as testing, research, and model training, are generally not subject to the regulation. This encourages experimentation and iterative development. Nevertheless, real-world testing of AI systems may still fall under regulatory scrutiny, especially if it involves high-risk applications or impacts individuals.
Key Takeaways
- The EU AI Act applies to a wide array of actors, including providers, deployers, importers, distributors, product manufacturers integrating AI, and authorised representatives, both within and outside the EU.
- Its extraterritorial scope means non-EU entities must comply if their AI systems are placed on or used in the EU market, or if they affect individuals within the EU.
- Specific exemptions exist for national security, military, defence, and purely scientific research, but these are narrowly defined.
- The Act complements, rather than replaces, existing EU legislation like the GDPR, DSA, and product safety laws, necessitating a multi-layered compliance approach.
- Open-source AI models are generally exempt, but critical exceptions apply for high-risk systems or those falling under prohibited practices.
What Comes Next
The phased implementation of the EU AI Act will progressively introduce its requirements, with obligations for prohibited AI systems taking effect first. Businesses globally must urgently assess their AI portfolios and operational practices against Article 2 to determine their specific roles and responsibilities. The Act establishes a new global standard for AI governance, compelling organizations to embed ethical and safety considerations into every stage of the AI lifecycle. Proactive engagement with these regulations will be critical for mitigating legal risks and maintaining market access in the European Union, shaping the future of responsible AI development and deployment worldwide.


