Feb 26, 2026
Legal AI Journal
AI Governance | February 23, 2026

EU AI Act's Global Reach: Malicious Use & Regulatory Gaps

AI Research Brief | 12 min read | 1 source
Illustration: Legal AI Journal. European Parliament building in Brussels, symbolizing the EU AI Act's global reach.

The EU AI Act, intended as a global blueprint for AI regulation, exhibits significant gaps in addressing malicious use risks. Its reliance on existing domestic and sectoral laws limits its exportability and effectiveness as a standalone framework. Policymakers must strengthen its coverage and adjust international engagement to reflect these limitations.

February 17, 2026, marks a critical juncture for global artificial intelligence governance, as the EU Artificial Intelligence Act (AI Act), one of the world's first binding AI regulations, faces scrutiny regarding its capacity to address the multifaceted threat of malicious AI use. While policymakers envisioned the Act as a global standard, leveraging the Brussels Effect, its uneven coverage of intentional harm-causing AI applications presents a significant challenge to this ambition. This analysis stress-tests the AI Act's provisions against nine identified malicious use sub-risks, revealing substantial regulatory gaps.

The Global Race for AI Dominance and Safety Efforts

The current geopolitical landscape prioritizes AI dominance over safety, with major powers focusing on technological leadership and capabilities. The US released America’s AI Action Plan in summer 2025, pursuing a largely hands-off regulatory approach to benefit its private sector, which led global private AI investment in 2024 with nearly US$110 billion. China, aiming for global AI leadership by 2030, employs coordinated industrial policy, with Chinese AI providers estimated to invest US$70 billion in data centers in 2026.

Europe's Competitive Stance and Regulatory Innovation

The EU responded with its AI Continent Action Plan in April 2025, mobilizing resources and announcing initiatives like 19 AI Factories and 13 AI Factory Antennas. Despite this competitive environment, the EU AI Act stands out as a legally innovative framework, regulating specific use cases based on risk intensity rather than the technology itself. This contrasts with other nations' broad, non-binding principles.

International Reception and the Brussels Effect

EU policymakers intended the AI Act to serve as a blueprint for global AI governance, relying on the Brussels Effect to encourage international adoption. Reactions have been mixed; while US public opinion shows support, China developed its own safety framework. India views the AI Act as inspiration, particularly its risk-based approach, but some Global South civil society organizations express concern about uncritically adopting European approaches. For the AI Act to succeed as a model, its regulatory quality in addressing critical risks is paramount.

Discrepancies in AI Risk Frameworks: Malicious Use Risks

The AI Act imposes obligations primarily on providers and deployers of AI systems, categorizing risks as unacceptable (prohibited), high-risk (subject to transparency, cybersecurity, and risk-management requirements), or limited-risk (subject only to transparency obligations). General-purpose AI (GPAI) models with systemic risks face even wider obligations, including stronger risk management and cybersecurity. However, this risk-intensity focus may not adequately cover the most critical risks, particularly those arising from malicious use.

Defining Malicious Use in AI

Malicious use of AI refers to intentional practices employing AI capabilities to compromise the security of individuals, groups, or society. Its defining element is the 'intent to cause harm,' distinguishing it from accidental misuse. This analysis identifies nine sub-risks, based on international AI safety organizations, policy reports, academia, and reported incidents:

  • Bioweapons and chemical threats: AI used to design novel pathogens or reproduce weapons.
  • Intentional rogue AIs: Autonomous systems with destructive goals, deployed without human oversight.
  • Disinformation and persuasive AIs: AI generating false content at scale or personalized persuasion.
  • Fake and abusive content: Generative AI creating non-consensual intimate imagery (NCII) or AI-generated child sexual-abuse materials (CSAM).
  • Fraud, scams and social engineering: AI systems like WormGPT or FraudGPT enhancing deception.
  • Cyber offence: AI automating malware generation, vulnerability discovery, and phishing.
  • Autonomous weapons and military use: AI-enabled weapon systems targeting without human oversight.
  • Concentration of power: Governments or corporations misusing AI to entrench authority.
  • State surveillance and oppression: AI enabling mass surveillance, predictive policing, and censorship.

Uneven Coverage of Malicious Use Risks in the AI Act

The AI Act's coverage of these malicious use sub-risks is highly uneven, weakening its internal coherence and global model potential. Four sub-risks receive minimal or no direct coverage, four are only partially addressed, and only one is extensively covered.

Significant Gaps in Direct Coverage

Four sub-risks receive no direct coverage or only incidental overlap:

  • Bioweapons and chemical threats: Only generic provisions for GPAI models with systemic risks apply. International conventions remain the primary safeguard.
  • Intentional rogue AIs: Mitigation is limited to risk management and incident reporting for GPAI models with systemic risks, with human oversight obligations on high-risk AI systems offering some indirect protection.
  • Autonomous weapons and military use: Explicitly excluded from the AI Act's scope, as defense and national security are Member State competences. Only dual-use AI systems (military and civilian) fall under the Act.
  • Concentration of power: Largely neglected, particularly corporate concentration. The Digital Markets Act (DMA) does not address AI-specific concentration dynamics, such as data advantages or cloud provider infrastructure.

Partial or Indirect Regulatory Engagement

Four other malicious use sub-risks are only partially addressed, often requiring complementary legislation:

  • Disinformation and persuasive AIs: Prohibits manipulative techniques and requires disclosure of deepfakes, but does not prevent personalized persuasion. The Digital Services Act (DSA) partially fills this gap.
  • Fake and abusive content: Indirectly addressed through prohibitions on exploiting vulnerabilities. However, serious forms like NCII and CSAM are not explicitly covered, with labeling obligations for deepfakes offering weak protection.
  • Fraud, scams and social engineering: Not explicitly regulated. Transparency requirements may make AI-enabled deception less effective but do not prevent these practices.
  • Cyber offence: Addressed mainly through existing EU legislation criminalizing cyberattacks. The AI Act focuses more on malicious 'abuse' (protecting AI systems themselves from attack) than on malicious 'use' (AI-enabled attacks on others).

Extensive Coverage for State Surveillance

Only state surveillance and oppression is extensively covered. The AI Act bans social scoring, predictive policing, and certain biometric identification and categorization practices. This reflects the political salience of the issue in EU debates, influenced by the novelty of the risk and precedents from authoritarian regimes.

Design Limitations and Exportability Challenges

The AI Act's imbalanced risk coverage is partly intentional, designed to avoid redundancy with existing EU and national legislation. For instance, the development of bioweapons and the conduct of scams were already criminalized. Legislators aimed to address AI's role in augmenting the accessibility and impact of malicious activities, embedding safeguards to increase friction for illegal acts.

Gaps in Personal Use and Foreseeable Misuse

However, this design creates limitations. Personal, non-professional uses of AI are not covered, leaving malicious individuals beyond the Act's reach until criminal law catches up with them. While avoiding duplication, this choice is problematic because AI amplifies the scale and speed of malicious activity, rendering after-the-fact criminal prosecution a weak deterrent. Additionally, the Act's reliance on 'reasonably foreseeable misuse' to trigger risk-management obligations introduces vagueness, weakening enforcement consistency. Companies like OpenAI have leveraged this ambiguity, arguing that self-harm facilitated via ChatGPT falls outside their responsibility as 'unforeseeable use'.

Impact on Global Influence

These design choices and limitations undermine the AI Act's value as a global model. The assumption that other countries would import the EU's entire regulatory ecosystem, including its domestic and sectoral laws, is unrealistic. The Act's safeguards, in some cases, are also weak relative to the threat; for example, persuasive chatbots require labeling, but recent incidents demonstrate that this does not effectively mitigate their persuasive potential. These enforcement gaps allow malicious use to flourish, compromising both the Act's protective function within the EU and its global credibility.

Key Takeaways

  • The EU AI Act exhibits significant, uneven gaps in covering malicious use risks, particularly for bioweapons, rogue AIs, autonomous weapons, and corporate power concentration.
  • Its reliance on existing domestic and sectoral laws, while internally coherent, limits its exportability as a standalone global regulatory model.
  • The exclusion of personal use and the ambiguous definition of 'reasonably foreseeable misuse' create critical enforcement gaps that malicious actors can exploit.
  • Recent initiatives like the Digital Omnibus, delaying safeguards and reducing coverage, further undermine the Act's effectiveness and global perception.
  • For the AI Act to achieve its intended global influence, the EU must strengthen its regulatory coverage and adopt a more nuanced, open approach in international AI governance dialogues.

What Comes Next

The EU AI Act represents a foundational step in AI governance, yet its global influence will be constrained by both implementation hurdles and inherent design limitations. Policymakers must proactively address these shortcomings. Three complementary policy options emerge for EU policymakers seeking to bolster the Act's global standing:

  • Re-examine the AI Act through alternative risk conceptualizations, beyond malicious use, to identify further gaps and avenues for improvement.
  • Leverage the periodic revisions foreseen under Article 112, particularly by modifying the list of high-risk AI systems in Annex III via less resource-intensive Delegated Acts, to swiftly cover unattended gaps such as personalized persuasion.
  • Engage in international dialogue with a renewed narrative, honestly acknowledging that the AI Act, in its current form, is not a plug-and-play global blueprint but a foundation for conversation. An open, learning posture would allow the EU to refine its own framework based on diverse international perspectives, ensuring its governance efforts are well placed to avert future harms rather than merely replicating an incomplete model abroad.

The proposed Digital Omnibus, delaying safeguards and reducing coverage, signals a concerning trend that risks reputational damage and undermines the Act's crucial litmus test as a global model before its full provisions even come into force. The ongoing negotiations over these amendments, lacking clear direction, introduce legal uncertainty and further delay the Act's ability to demonstrate its effectiveness. The ultimate success of the AI Act, both domestically and internationally, hinges on the EU's willingness to adapt and strengthen its framework in response to the evolving landscape of AI risks and global competitive pressures.

1. The EU AI Act, intended as a global blueprint, unevenly covers malicious AI use risks.
2. Significant gaps exist for bioweapons, rogue AIs, autonomous weapons, and corporate power concentration.
3. The Act's reliance on existing laws and exclusion of personal use create enforcement vulnerabilities.
4. Recent Digital Omnibus proposals threaten to delay safeguards and reduce the Act's coverage.
5. For global influence, the EU must strengthen the Act's provisions and revise its international engagement strategy.

Focus: EU AI Act malicious use