The EU AI Act establishes a comprehensive regulatory framework, delineating who must comply and under what conditions. Article 2 is pivotal, extending its reach extraterritorially to providers and deployers worldwide whose AI systems impact the European Union.
1 August 2024 marked a critical juncture for artificial intelligence governance, as the EU AI Act formally entered into force. This landmark legislation, particularly its Article 2, defines the scope of application, answering the fundamental question of who is obligated to comply and in which situations the law applies or does not. Understanding these provisions is essential for any entity developing, deploying, or interacting with AI systems that touch the European market.
Actors and Obligations: Navigating the AI Ecosystem
Article 2(1) of the EU AI Act identifies a diverse array of stakeholders within the AI ecosystem, each bearing distinct responsibilities. This broad classification ensures comprehensive oversight across the entire AI supply chain, from development to deployment and beyond.
Providers of AI Systems or General-Purpose AI Models
These are the entities that develop AI systems or general-purpose AI models and place them on the EU market. Prominent examples include OpenAI, Google, and Anthropic, alongside numerous European AI startups. Providers carry the primary regulatory responsibility.
Their obligations include:
- AI risk classification
- Ensuring compliance with safety requirements
- Fulfilling transparency obligations
- Maintaining documentation and technical records
- Conducting conformity assessments
A key legal insight is the extraterritorial reach: the Act applies even if the provider is located outside the EU, provided the system is marketed within the Union. For instance, a US company selling an AI recruitment tool to a firm in France must adhere to the EU AI Act.
Deployers of AI Systems
Deployers are entities that use AI systems in their operations. This category encompasses a wide range of organizations, such as law firms utilizing AI for legal research or contract review, banks employing AI for credit scoring, and companies using AI in hiring decisions.
Their responsibilities include:
- Using AI according to provider instructions
- Ensuring adequate human oversight
- Monitoring system performance
- Reporting incidents
- Protecting fundamental rights of affected individuals
Extraterritorial Scope for Non-EU Entities
One of the most significant aspects of the EU AI Act is its extraterritorial jurisdiction, mirroring the General Data Protection Regulation (GDPR). The regulation applies when the output of an AI system is used in the EU, irrespective of the provider's location. For example, if a Singaporean company develops an AI system evaluating loan applications, and its results are used by a German bank, the EU AI Act applies.
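The territorial triggers described above can be sketched as a simple decision rule. The following Python snippet is an illustrative simplification of the Article 2(1) scope test, not legal advice; the class and field names are hypothetical, invented here for clarity.

```python
from dataclasses import dataclass

# Hypothetical, simplified model of the Article 2(1) territorial-scope
# triggers discussed above. Field names are illustrative, not statutory.

@dataclass
class AISystemContext:
    operator_established_in_eu: bool  # provider/deployer located in the EU
    placed_on_eu_market: bool         # system or model marketed in the Union
    output_used_in_eu: bool           # the system's output is used inside the EU

def act_applies(ctx: AISystemContext) -> bool:
    """Return True if any of the simplified scope triggers is met."""
    return (
        ctx.operator_established_in_eu
        or ctx.placed_on_eu_market
        or ctx.output_used_in_eu
    )

# Example from the text: a Singaporean provider whose loan-scoring
# output is used by a German bank falls within scope.
singapore_provider = AISystemContext(
    operator_established_in_eu=False,
    placed_on_eu_market=False,
    output_used_in_eu=True,
)
print(act_applies(singapore_provider))  # → True
```

The point of the sketch is that any single trigger suffices: location outside the EU does not remove a provider from scope once its system or output reaches the Union.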
Importers and Distributors
Importers are entities that bring AI systems from outside the EU into the EU market. A European company importing an AI-powered medical tool from the US exemplifies this role. Their duties involve verifying compliance, ensuring CE conformity, and checking documentation.
Distributors are entities that make AI systems available on the EU market without modifying them. This includes software marketplaces, resellers, and SaaS platforms. They are responsible for ensuring compliance documentation, verifying CE marking, and cooperating with authorities.
Product Manufacturers Integrating AI
Companies that embed AI into their own products also fall under the Act. Examples include car manufacturers using AI in autonomous driving or medical device companies integrating diagnostic AI. These manufacturers become AI providers because they place the AI-integrated product on the market under their own brand, assuming legal responsibility for the AI system.
Authorised Representatives and Affected Persons
Authorised Representatives are EU-based legal representatives for non-EU companies, acting as a crucial contact point with regulators and maintaining technical documentation. This role is comparable to GDPR Article 27 representatives.
Affected Persons represent a rights-based category, encompassing individuals in the EU whose rights may be impacted by AI systems. This includes job applicants evaluated by AI or individuals subject to biometric identification. They gain rights such as transparency, explanation, and access to redress mechanisms.
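The actor taxonomy above can be summarized as a lookup structure. This is a purely illustrative data-structure sketch; the duty labels are paraphrases of the lists in this article, not statutory wording.

```python
# Illustrative mapping of the actor categories above to their headline
# duties, paraphrased from this article (not the Act's own wording).
OBLIGATIONS: dict[str, list[str]] = {
    "provider": [
        "risk classification",
        "safety compliance",
        "transparency",
        "technical documentation",
        "conformity assessment",
    ],
    "deployer": [
        "use per provider instructions",
        "human oversight",
        "performance monitoring",
        "incident reporting",
        "fundamental-rights protection",
    ],
    "importer": ["verify compliance", "ensure CE conformity", "check documentation"],
    "distributor": ["ensure compliance documentation", "verify CE marking", "cooperate with authorities"],
    "authorised_representative": ["regulator contact point", "maintain technical documentation"],
}

def duties_for(actor: str) -> list[str]:
    """Look up the headline duties for an actor category; empty if unknown."""
    return OBLIGATIONS.get(actor, [])
```

A structure like this makes the supply-chain logic visible at a glance: each role carries its own distinct, non-overlapping set of headline duties.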
Specific Exclusions and Overlapping Regulations
While broad in scope, the EU AI Act also outlines specific exclusions and clarifies its interaction with existing legal frameworks to prevent regulatory duplication and ensure coherence.
High-Risk Systems in Regulated Products
Article 2(2) addresses AI systems already regulated under existing EU product-safety laws, such as medical devices or aviation systems. For these, only certain parts of the AI Act apply, notably Article 6(1) (risk classification), enforcement provisions, and conformity procedures. This approach avoids redundant regulation.
National Security, Military, and Defence
Article 2(3) explicitly excludes AI used exclusively for military purposes, defence systems, or national security operations. This carve-out recognizes Member State sovereignty in these sensitive domains, meaning AI in military drones or intelligence analysis remains outside the Act's purview.
International Cooperation and Law Enforcement
Under Article 2(4), the regulation does not apply to AI used by foreign public authorities or international organizations for law enforcement cooperation with the EU, provided adequate safeguards are in place. This facilitates international collaboration while maintaining protective measures.
Interaction with Digital Services Act and Data Protection
Article 2(5) clarifies that the EU AI Act does not override the liability rules of Regulation (EU) 2022/2065 — the Digital Services Act (DSA), ensuring consistency in platform responsibility. Furthermore, Article 2(7) confirms that the AI Act does not replace privacy laws. Key legislation like the GDPR (2016/679) and the ePrivacy Directive continue to apply, meaning AI systems processing personal data must comply with both frameworks.
Research, Development, and Personal Use Exemptions
Article 2(6) provides an exemption for AI systems developed only for scientific research purposes, though commercialization nullifies this exclusion. Similarly, Article 2(8) exempts early-stage development activities like testing and model training, though real-world testing may still be regulated. Article 2(10) exempts individuals using AI for personal, non-professional purposes, such as using ChatGPT at home, while professional use remains regulated.
Worker Protection and Open-Source AI
Article 2(11) allows Member States to adopt stronger protections for workers against AI use in workplaces, addressing concerns around algorithmic management and AI-based hiring tools. Finally, Article 2(12) generally exempts open-source AI models, except when they fall into high-risk categories (e.g., biometric identification, employment decisions) or are subject to Article 5 (prohibited AI) or Article 50 (transparency obligations).
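The carve-outs in this section can likewise be sketched as a decision rule. The snippet below is a deliberately simplified illustration of the exclusions discussed above (Articles 2(3), 2(6), 2(10), and 2(12)); the parameter names and logic are assumptions made for readability, and real scope analysis requires the statutory text.

```python
# Hypothetical, simplified sketch of the main Article 2 carve-outs
# discussed above. Categories and rules are illustrative only.

EXCLUDED_PURPOSES = {"military", "defence", "national_security"}  # Art. 2(3)

def is_exempt(purpose: str,
              scientific_research_only: bool = False,   # Art. 2(6)
              personal_non_professional: bool = False,  # Art. 2(10)
              open_source: bool = False,                # Art. 2(12)
              high_risk: bool = False) -> bool:
    """Return True if one of the simplified exemptions applies."""
    if purpose in EXCLUDED_PURPOSES:
        return True
    if scientific_research_only or personal_non_professional:
        return True
    # Open-source models are generally exempt, but the exemption
    # falls away for high-risk uses (and prohibited/transparency cases).
    if open_source and not high_risk:
        return True
    return False
```

Note how the open-source branch mirrors the text: release under an open license is not by itself a safe harbor once the model is put to a high-risk use such as biometric identification or employment decisions.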
Key Takeaways
- The EU AI Act establishes a global regulatory model, applying to AI placed on the EU market, used in the EU, or affecting people in the EU, regardless of the provider's location.
- Providers bear primary responsibility for compliance, including risk classification, safety, and transparency, even if based outside the EU.
- Deployers must ensure human oversight, monitor performance, and protect fundamental rights when using AI systems.
- Specific exemptions exist for national security, military use, scientific research, and personal non-professional use, alongside clarifications on interactions with existing EU laws like GDPR and DSA.
- Importers, distributors, product manufacturers integrating AI, and authorised representatives all have defined roles and responsibilities within the supply chain.
What Comes Next
The comprehensive scope defined in Article 2 of the EU AI Act signals a new era of accountability for artificial intelligence. As the regulation's provisions become fully applicable, particularly for high-risk systems, businesses globally will need to meticulously review their AI development and deployment strategies. The extraterritorial reach will necessitate a harmonized approach to compliance, pushing international standards and potentially influencing AI governance beyond Europe's borders. Future developments will likely focus on the practical implementation of these rules, the emergence of best practices, and the potential for further clarification through guidance from EU authorities, shaping the future of responsible AI innovation.


