Singapore has introduced the world's first dedicated governance framework for agentic AI systems, setting a new benchmark for managing autonomous AI. This non-binding framework outlines critical best practices for addressing the unique risks of agentic AI, from unauthorized actions to systemic disruptions. It signals a proactive regulatory stance in the rapidly evolving AI landscape.
On 22 January 2026, Singapore unveiled the Model AI Governance Framework for Agentic AI (MGF) at the World Economic Forum, marking a pivotal moment in global AI governance. This initiative reinforces Singapore's commitment to proactive regulation, establishing the world's first dedicated governance model for agentic AI systems. These systems are defined by their capacity to reason, plan, and execute tasks independently on behalf of humans, presenting novel challenges for oversight and accountability.
While the MGF does not impose binding legal obligations, its release provides a clear indication of Singapore's regulatory trajectory. It establishes practical best practices for industry adoption, building upon previous initiatives such as the 2019 Model AI Governance Framework and the AI Verify testing framework. The MGF's distinct focus lies in mitigating the unique risks posed by increasingly autonomous AI tools, including unauthorized actions, data misuse, biased decision-making, and systemic disruptions.
Understanding Agentic AI and Its Autonomous Capabilities
Agentic AI systems distinguish themselves from other AI forms, such as generative AI, through their capacity to plan, reason, and act across multiple steps with minimal human intervention. Unlike generative models that primarily produce outputs in response to prompts, agentic AI can initiate actions, adapt to new information, and interact autonomously with other agents or systems to complete complex tasks.
At the core of many agentic systems are advanced language models that serve as the central processing unit. These models interpret natural language instructions, formulate strategies to achieve defined goals, and subsequently activate connected tools. Such tools may include calculators, calendars, or application programming interfaces (APIs), enabling agents to perform diverse tasks like updating records, processing payments, or controlling devices.
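To make this pattern concrete, the sketch below shows a minimal plan-act loop in Python: a model decides which registered tool to call next, the agent executes it, and the result feeds into the next step. Everything here is illustrative; the scripted plan_next_step stand-in, the calculator tool, and the TOOLS registry are hypothetical names of our own, not part of the MGF or any particular product.

```python
# Minimal sketch of a plan-act agent loop (all names are hypothetical placeholders).

def calculator(expression: str) -> str:
    """Toy tool: evaluate a simple arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}))  # toy only; never eval untrusted input

TOOLS = {"calculator": calculator}  # registry of tools the agent may call

def plan_next_step(goal: str, history: list) -> tuple | None:
    """Stand-in for the language-model call. Scripted here:
    call the calculator once, then declare the goal complete."""
    if not history:
        return ("calculator", "17 * 23")
    return None  # goal achieved

def run_agent(goal: str, max_steps: int = 10) -> list:
    history = []
    for _ in range(max_steps):            # hard step limit bounds the loop
        step = plan_next_step(goal, history)
        if step is None:
            break
        name, tool_input = step
        tool = TOOLS.get(name)
        if tool is None:                  # refuse tools outside the registry
            history.append(f"refused unknown tool: {name}")
            continue
        history.append(tool(tool_input))
    return history

print(run_agent("What is 17 * 23?"))      # -> ['391']
```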
Deterministic vs. Non-Deterministic Agentic Systems
Agentic systems can be broadly categorized as either deterministic or non-deterministic. Deterministic systems produce consistent outputs for identical inputs, offering a degree of predictability. Conversely, non-deterministic systems yield varied results even with the same input, introducing a higher level of unpredictability. This variability necessitates more robust oversight and governance mechanisms. In practice, agentic AI often involves multiple agents collaborating in parallel, each specializing in a particular task. While this enhances efficiency, it also compounds risks if errors cascade across the interconnected system.
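The distinction is easiest to see with a stubbed model call. In the hedged sketch below, a hypothetical model_reply function behaves deterministically at temperature zero and samples among plausible answers otherwise; the function and its candidate answers are invented purely for illustration.

```python
# Illustrative contrast between deterministic and non-deterministic behavior (stub model).
import random

def model_reply(prompt: str, temperature: float) -> str:
    """Stub model: temperature 0 always returns the top-ranked answer;
    higher temperatures sample among plausible answers."""
    candidates = ["approve the refund", "escalate to a human", "request more details"]
    if temperature == 0:
        return candidates[0]            # deterministic: same input, same output
    return random.choice(candidates)    # non-deterministic: same input, varied outputs

same = {model_reply("refund request #42", 0.0) for _ in range(5)}
varied = {model_reply("refund request #42", 0.9) for _ in range(5)}
print(len(same), len(varied))           # 1 distinct outcome vs. (usually) several
```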
Mitigating Unique Risks in Agentic AI Deployment
The autonomous nature of agentic AI introduces several unique risk categories that organizations must proactively address. While risks such as hallucination and bias are inherent to many AI systems, their impact can be significantly amplified in an agentic context, as errors may replicate across multiple outputs and processes. The MGF identifies five critical categories of risk:
- Erroneous actions: Agents may perform incorrect tasks, such as mismanaging inventory or generating flawed code, leading to substantial financial or operational consequences.
- Unauthorized actions: Agents might operate outside their permitted scope, executing transactions or making changes without necessary human approval, thereby bypassing established controls.
- Biased or unfair actions: Decisions made by agents can result in discriminatory outcomes, manifesting in areas like biased hiring practices or unfair vendor selection.
- Data breaches: Sensitive information, including personal data or trade secrets, faces heightened exposure or misuse if agents fail to recognize confidentiality or are exploited by malicious actors.
- Disruption to connected systems: Malfunctions or compromises within an agentic system can destabilize linked platforms, potentially leading to critical codebase deletions or overwhelming external systems with excessive requests.
The MGF's Four Dimensions for Responsible Governance
Singapore's MGF provides a structured approach to managing these risks through four key dimensions, emphasizing a comprehensive lifecycle perspective.
Dimension 1: Assess and Bound Risks Early
Given the adaptive and action-oriented nature of agentic AI systems, early risk assessment is paramount. Organizations must evaluate the appropriateness of any proposed use case before deployment, considering both the impact (severity of potential failure) and likelihood (probability of error). Key factors for consideration include the following (a simple scoring sketch follows the list):
- The tolerance for error within the specific domain; for instance, healthcare typically has a lower tolerance than marketing.
- Whether the agent will access sensitive or confidential data.
- The extent of its access to external systems and APIs.
- The agent's capabilities, differentiating between read-only and modification access.
- The reversibility of actions taken by the agent.
- The level of autonomy granted, as higher autonomy increases unpredictability.
- The complexity of the task, where multi-step reasoning elevates the chance of mistakes.
- Exposure to external systems, which heightens vulnerability to cyberattacks or malicious prompts.
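One lightweight way to operationalize this assessment is to score impact and likelihood against the factors above and triage accordingly. The rubric below is purely illustrative; the factor names, weights, and thresholds are our own assumptions, not values prescribed by the MGF.

```python
# Toy impact x likelihood triage (illustrative rubric, not from the MGF).

IMPACT_FACTORS = {                        # each flag adds to potential severity
    "handles_sensitive_data": 3,
    "can_modify_systems": 3,              # write access vs. read-only
    "actions_irreversible": 4,
    "low_error_tolerance_domain": 4,      # e.g. healthcare vs. marketing
}
LIKELIHOOD_FACTORS = {                    # each flag adds to probability of error
    "high_autonomy": 3,
    "multi_step_task": 2,
    "exposed_to_external_systems": 3,
}

def triage(profile: dict) -> str:
    impact = sum(w for f, w in IMPACT_FACTORS.items() if profile.get(f))
    likelihood = sum(w for f, w in LIKELIHOOD_FACTORS.items() if profile.get(f))
    score = impact * likelihood
    if score >= 40:
        return "redesign or reject: add human approval and narrow the scope"
    if score >= 12:
        return "deploy with tight controls and monitoring"
    return "acceptable for a gradual, low-risk rollout"

print(triage({"handles_sensitive_data": True, "high_autonomy": True,
              "actions_irreversible": True, "multi_step_task": True}))
```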
Once a suitable use case is identified, risks must be bounded through design. This involves limiting agents to the minimum necessary tools and data, enforcing standard operating procedures, and creating mechanisms to disable malfunctioning agents. Organizations should also implement agent identity management, assigning unique identities linked to accountable human supervisors, ensuring permissions do not exceed those of the human user.
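Several of these bounds translate directly into code. The sketch below illustrates three of them, with hypothetical names throughout: a tool allowlist limited to the minimum necessary, an agent identity linked to an accountable human supervisor whose permissions cap the agent's own, and an enabled flag that serves as a kill switch for a malfunctioning agent.

```python
# Illustrative design-time bounds (hypothetical names, not an MGF-mandated API).
from dataclasses import dataclass

@dataclass
class AgentIdentity:
    agent_id: str
    supervisor: str                      # accountable human owner
    permissions: frozenset               # must not exceed the supervisor's own

@dataclass
class BoundedAgent:
    identity: AgentIdentity
    allowed_tools: frozenset             # minimum necessary tools only
    enabled: bool = True                 # kill switch

    def invoke(self, tool: str, action: str) -> str:
        if not self.enabled:
            raise RuntimeError(f"{self.identity.agent_id} is disabled")
        if tool not in self.allowed_tools:
            raise PermissionError(f"tool not on allowlist: {tool}")
        if action not in self.identity.permissions:
            raise PermissionError(f"{self.identity.supervisor} lacks: {action}")
        return f"{self.identity.agent_id} ran {tool}:{action}"

agent = BoundedAgent(
    identity=AgentIdentity("invoice-bot-01", "jane.doe", frozenset({"read_invoices"})),
    allowed_tools=frozenset({"erp_api"}),
)
print(agent.invoke("erp_api", "read_invoices"))   # permitted
agent.enabled = False                              # disable a malfunctioning agent
```

Tying each agent's permissions to a named supervisor keeps the accountability chain intact: whatever the agent can do, some identifiable human could already do, and approved.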
Dimension 2: Human Accountability Across the Organization
The MGF unequivocally states that ultimate responsibility for agentic AI lies with the organizations and individuals overseeing its deployment. Accountability must be distributed across various teams:
- Leadership: Board members and department heads are responsible for defining goals, permitted use cases, and overarching governance approaches.
- Product teams: These teams manage design, rigorous testing, and user education for agentic systems.
- Cybersecurity teams: They protect systems against threats and manage incident response protocols.
- End-users of outputs: These individuals ensure ethical use and compliance with organizational policies.
Externally, accountability should be reinforced through robust contractual provisions with system providers, model developers, and tool vendors. To ensure meaningful oversight, organizations should establish checkpoints requiring human approval before sensitive or irreversible actions, such as payments or critical data deletions. Oversight mechanisms should be regularly audited, accompanied by training to help humans recognize common failure modes and avoid automation bias.
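A human approval checkpoint of this kind is often implemented as a thin gate in front of the agent's actions: anything tagged as sensitive or irreversible is held for sign-off rather than executed. The minimal sketch below shows one possible shape, with the SENSITIVE_ACTIONS set and the approve callback as hypothetical placeholders for an organization's own policy and escalation channel.

```python
# Minimal human-approval checkpoint (illustrative; names hypothetical).

SENSITIVE_ACTIONS = {"make_payment", "delete_records"}  # require human sign-off

def checkpoint(action: str, payload: dict, approve) -> str:
    """Run the action only if it is non-sensitive, or a human approves it.
    `approve` is a callable that asks the designated supervisor."""
    if action in SENSITIVE_ACTIONS and not approve(action, payload):
        return f"blocked pending approval: {action}"
    return f"executed: {action} {payload}"

# In production `approve` would page a human; here it is stubbed to deny.
print(checkpoint("make_payment", {"amount": 9_900}, approve=lambda a, p: False))
print(checkpoint("draft_email", {"to": "client"}, approve=lambda a, p: False))
```

Routing only the sensitive subset of actions through the gate keeps routine work fast while preserving human control exactly where failure would be costly or irreversible.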
Dimension 3: Technical Controls Across the Lifecycle
Technical safeguards are indispensable and must be embedded at every stage of the agentic AI lifecycle:
- Development: Agents should be prompted to clarify instructions, restricted to essential tools, and designed with standardized communication protocols. The use of sandbox environments and whitelisted servers is crucial to minimize risk during development.
- Pre-deployment: Agents must undergo rigorous testing for accuracy, compliance, tool usage, and resilience, particularly in edge cases. This testing should encompass both individual agents and multi-agent workflows within realistic environments.
- Deployment and beyond: Agents should be rolled out gradually, commencing with lower-risk contexts and trained users. Continuous monitoring, comprehensive logging, and failsafe mechanisms are essential to detect and respond to unexpected behavior in real time, as sketched below.
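In practice, these runtime controls frequently take the form of a wrapper around every agent action that logs the outcome and trips a failsafe once errors accumulate. The sketch below, built on Python's standard logging module, is one illustrative arrangement; the Failsafe class and its three-error threshold are invented for this example.

```python
# Illustrative runtime guard: logging plus a simple failsafe (names/thresholds invented).
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("agent-monitor")

class Failsafe:
    """Halt the agent after too many consecutive errors."""
    def __init__(self, max_consecutive_errors: int = 3):
        self.limit = max_consecutive_errors
        self.errors = 0
        self.tripped = False

    def record(self, ok: bool) -> None:
        self.errors = 0 if ok else self.errors + 1
        if self.errors >= self.limit:
            self.tripped = True

def guarded_call(name: str, fn, failsafe: Failsafe):
    if failsafe.tripped:
        raise RuntimeError("failsafe tripped: agent halted for review")
    try:
        result = fn()
        log.info("action=%s status=ok", name)        # comprehensive action log
        failsafe.record(ok=True)
        return result
    except Exception:
        log.exception("action=%s status=error", name)
        failsafe.record(ok=False)
        raise

fs = Failsafe()
print(guarded_call("update_inventory", lambda: "ok", fs))
```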
Dimension 4: End-User Responsibility and Transparency
Empowering end-users to interact responsibly with agentic AI is a cornerstone of the MGF. Transparency is critical; users must be informed when they are engaging with an agent, what actions it can perform, what data it can access, and how to escalate issues. For external users, such as customers, clear communication about limitations and human escalation points is essential.
For internal users, including employees integrating agents into workflows, comprehensive training should cover best practices, effective oversight techniques, and common pitfalls. As agents increasingly automate routine tasks, organizations must ensure employees continue to develop and maintain core skills through ongoing training and exposure to diverse challenges.
Key Takeaways
- The Model AI Governance Framework for Agentic AI (MGF) is the world's first dedicated governance model for highly autonomous AI systems.
- It addresses unique risks of agentic AI, such as unauthorized actions, data breaches, and systemic disruptions, which are amplified by their autonomous nature.
- The MGF outlines four dimensions: early risk assessment and bounding, human accountability, technical controls across the lifecycle, and end-user responsibility.
- Organizations are encouraged to implement agent identity management and establish human approval checkpoints for sensitive actions.
- While non-binding, the MGF provides critical best practices and signals Singapore's future regulatory direction for agentic AI.
What Comes Next
The MGF sets clear parameters for the responsible use of agentic AI, offering organizations practical guidance to build trust in the deployment of advanced AI technologies. Its non-binding nature allows for flexibility, yet its comprehensive scope establishes a significant benchmark. Businesses should begin reviewing their existing governance structures against the framework's four dimensions now, closing any identified gaps and strengthening oversight mechanisms. Because the MGF is designed to evolve alongside technological advancements, continuous engagement with future updates and broader industry developments will be crucial. This proactive approach not only keeps organizations aligned with emerging best practices but also fosters innovation within a secure and accountable framework, shaping the future of autonomous AI deployment globally.