The OWASP Top 10 for AI Agents Is Here. It's Not Enough.

The OWASP Top 10 for AI Agents Is Here. It's Not Enough.

In December 2025, OWASP released the Top 10 for Agentic Applications 2026 — the first security framework dedicated to autonomous AI agents. Over 100 researchers and practitioners contributed. NIST, the European Commission, and the Alan Turing Institute reviewed it. Palo Alto Networks, Microsoft, and AWS endorsed it.

It’s a solid taxonomy. It gives the industry a shared language for a new class of threats. And it is nowhere near mature enough for what’s already happening in production.

Let me explain.

What the Framework Gets Right

Credit where it’s due. The OWASP Agentic Top 10 correctly identifies the fundamental shift: a chatbot answers questions, an agent executes tasks. That distinction changes the entire threat model. When you give an AI system the ability to call APIs, access databases, send emails, and execute code, you’ve created something with real operational authority. A compromised chatbot hallucrinates. A compromised agent exfiltrates data, manipulates records, or sabotages infrastructure — at machine speed, with legitimate credentials.

The ten risk categories — from ASI01 (Agent Goal Hijack) through ASI10 (Rogue Agents) — capture real threats that are already showing up in the wild:

ID Risk Translation
ASI01Agent Goal HijackYour agent now works for the attacker
ASI02Tool Misuse & ExploitationLegitimate tools used destructively
ASI03Identity & Privilege AbuseAgent inherits god-mode credentials
ASI04Supply Chain VulnerabilitiesPoisoned MCP servers, plugins, models
ASI05Unexpected Code ExecutionYour agent just ran a reverse shell
ASI06Memory & Context PoisoningLong-term memory becomes a sleeper cell
ASI07Insecure Inter-Agent CommsAgent-in-the-middle attacks
ASI08Cascading FailuresOne bad tool call nukes everything
ASI09Human-Agent Trust ExploitationAgent social-engineers the human
ASI10Rogue AgentsAgent goes off-script autonomously

The framework also introduces two principles that should be tattooed on every architect’s forehead: Least-Agency (don’t give agents more autonomy than the task requires) and Strong Observability (log everything the agent does, decides, and touches).

Good principles. Now let’s talk about why principles aren’t enough.

The Maturity Problem

The OWASP Agentic Top 10 is a taxonomy, not a defence framework. It names threats. It describes mitigations at a high level. But it leaves the hard engineering problems unsolved — and in some cases, unacknowledged.

1. The Attacks Are Already Here. The Defences Are Not.

The framework dropped in December 2025. By then, every major risk category already had real-world incidents:

  • ASI01 (Goal Hijack): Koi Security found an npm package live for two years with embedded prompt injection strings designed to convince AI-based security scanners the code was legitimate. Attackers are already weaponising natural language as an attack vector against autonomous tools.
  • ASI02 (Tool Misuse): Amazon Q’s VS Code extension was compromised with destructive instructionsaws s3 rm, aws ec2 terminate-instances, aws iam delete-user — combined with flags that disabled confirmation prompts (--trust-all-tools --no-interactive). Nearly a million developers had the extension installed. The agent wasn’t escaping a sandbox. There was no sandbox.
  • ASI04 (Supply Chain): The first malicious MCP server was found on npm, impersonating Postmark’s email service and BCC’ing every message to an attacker. A month later, another MCP package shipped with dual reverse shells — 86,000 downloads, zero visible dependencies.
  • ASI05 (Code Execution): Anthropic’s own Claude Desktop extensions had three RCE vulnerabilities in the Chrome, iMessage, and Apple Notes connectors (CVSS 8.9). Ask Claude “Where can I play paddle in Brooklyn?” and an attacker-controlled web page in the search results could trigger arbitrary code execution with full system privileges.
  • ASI06 (Memory Poisoning): Researchers demonstrated how persistent instructions could be embedded in an agent’s context that influenced all subsequent interactions — even across sessions. The agent looked normal. It behaved normally most of the time. But it had been quietly reprogrammed weeks earlier.

The framework describes these threats. It does not provide testable, enforceable controls for any of them. “Implement input validation” is not a control when the input is natural language and the attack surface is every document, email, and web page the agent reads.

2. It Doesn’t Address the Governance Gap

Here’s the uncomfortable truth, stated clearly by Modulos: “The same enterprise that would never ship a customer-facing application without security review is deploying autonomous agents that can execute code, access sensitive data, and make decisions. No formal risk assessment. No mapped controls. No documented mitigations. No monitoring for anomalous behaviour.”

A risk taxonomy is only useful if it’s operationalised. The OWASP Agentic Top 10 gives security teams vocabulary but not workflow. There’s no:

  • Maturity model for agentic security posture
  • Reference architecture for secure agent deployment
  • Compliance mapping to existing frameworks (EU AI Act, ISO 42001, SOC 2)
  • Standardised scoring or severity rating for agent-specific risks
  • Testable benchmark to validate whether mitigations actually work

Security teams are left to figure out the implementation themselves, which is exactly how “deploy first, secure later” happens.

3. The LLM Top 10 Was Insufficient. This Is Still Catching Up.

NeuralTrust put it bluntly in their deep dive: “The existing OWASP Top 10 for LLM Applications is insufficient. An agent’s ability to chain actions and operate autonomously means a minor vulnerability, such as a simple prompt injection, can quickly cascade into a system-wide compromise, data exfiltration, or financial loss.”

The Agentic Top 10 was created because the LLM Top 10 didn’t cover agent-specific risks. But the Agentic list itself was created from survey data and expert input — not from a systematic threat modelling exercise against production agent architectures. As Entro Security noted: “Agents mostly amplify existing vulnerabilities — not creating entirely new ones.”

If agents amplify existing vulnerabilities, then a Top 10 list that doesn’t deeply integrate with existing identity management, secret management, and access control frameworks is leaving the most exploitable gaps unaddressed.

4. Non-Human Identity Is the Real Battleground

The OWASP NHI (Non-Human Identity) Top 10 maps directly to the Agentic Top 10. Every meaningful agent runs on API keys, OAuth tokens, service accounts, and PATs. When those identities are over-privileged, invisible, or exposed, the theoretical risks become real incidents.

Look at the list through an identity lens:

  • Goal Hijack (ASI01) matters because the agent already holds powerful credentials
  • Tool Misuse (ASI02) matters because tools are wired to cloud and SaaS permissions
  • Identity Abuse (ASI03) is literally about agent sessions, tokens, and roles
  • Memory Poisoning (ASI06) becomes critical when memory contains secrets and tokens
  • Cascading Failures (ASI08) amplify because the same NHI is reused across multiple agents

You cannot secure AI agents without securing the non-human identities that power them. The Agentic Top 10 acknowledges this. It does not solve it.

5. Where’s the Red Team Playbook?

NeuralTrust’s analysis makes a critical point: “Traditional penetration testing is insufficient. Security teams must conduct periodic tests that simulate complex, multi-step attacks.”

The framework mentions red teaming in passing. It doesn’t provide:

  • Attack scenarios mapped to each ASI category
  • Testing methodologies for multi-agent systems
  • Metrics for measuring resilience against agent-specific threats
  • A CTF-style reference application for practising agentic attacks (OWASP’s FinBot exists but is separate from the Top 10 itself)

For a framework targeting autonomous systems, the absence of a structured offensive testing methodology is a significant gap.

What Needs to Happen Next

The OWASP Agentic Top 10 is version 1.0. Like the original OWASP Web Top 10 in 2004, it’s a starting point, not a destination. Here’s what the next iteration needs:

  1. Enforceable controls, not just principles. Each ASI category needs prescriptive, testable controls with pass/fail criteria. “Implement least privilege” is not a control. “Agent credentials must be session-scoped with a maximum TTL of 1 hour and automatic revocation on task completion” is a control.
  2. Reference architectures. Show me what a secure agentic deployment looks like. Network topology. Identity flow. Tool sandboxing. Kill switch mechanism. Not theory — diagrams and code.
  3. Integration with existing compliance. Map ASI categories to ISO 42001, NIST AI RMF, EU AI Act Article 9, SOC 2 Trust Service Criteria. Security teams need to plug this into their existing GRC workflows, not run a parallel process.
  4. Offensive testing methodology. A structured red team playbook with attack trees for each ASI category, severity scoring, and reproducible test cases. The framework needs teeth.
  5. Incident data. Start collecting and publishing anonymised incident data. The web Top 10 evolved because we had breach data showing which vulnerabilities were actually exploited at scale. The agentic space needs the same feedback loop.

The Bottom Line

The OWASP Top 10 for Agentic Applications 2026 is a necessary first step. It gives us vocabulary. It draws attention to a real and growing threat surface. The 100+ contributors did meaningful work.

But let’s not confuse naming a problem with solving it. The agents are already in production. The attacks are already happening. And the governance, tooling, and testing infrastructure needed to secure these systems is lagging badly behind.

The original OWASP Top 10 took years and multiple iterations to become the authoritative reference it is today. The agentic equivalent doesn’t have years. The attack surface is expanding at the speed of npm install.

Name the risks. Good. Now build the defences.


Sources & Further Reading:

Popular posts from this blog

PHP Source Code Chunks of Insanity (Delete Post Pages) Part 4

The Hackers Guide To Dismantling IPhone (Part 3)

MSSQL Injection OPENROWSET Side Channel