The Assumptions Zero Trust Was Built On

Zero trust architecture, as codified by NIST SP 800-207, rests on a set of assumptions about who is requesting access. The entity has a device with a posture that can be evaluated. The entity connects from a network location that provides contextual signal. The entity authenticates through credentials tied to a human identity. The entity exhibits behavioral patterns that can be baselined and monitored.

These assumptions were reasonable when the only entities accessing enterprise systems were humans sitting at computers. They are catastrophically wrong when applied to AI agents.

An AI agent does not have a device. It runs as a process — potentially across multiple containers, regions, and cloud providers simultaneously. An AI agent does not have a location. It originates from wherever its compute is scheduled, which might change between requests. An AI agent does not exhibit predictable behavior, because its actions are generated by a language model that is, by definition, non-deterministic.

Zero trust said "never trust, always verify." For AI agents, we need something stronger: never trust, always verify, continuously constrain, and validate every output.

Why "Agent as Service Account" Is a Catastrophic Pattern

The most common pattern emerging in enterprise AI deployments is treating the agent like a service account. Give it a set of credentials. Assign it a role with the permissions it needs. Let it operate. This is exactly how you would provision a microservice or a batch job.

It is also the most dangerous security pattern in enterprise computing today.

A service account runs deterministic code. You can audit the code, verify it does what it claims, and predict its behavior under all input conditions. An AI agent runs non-deterministic inference. Its behavior depends on a prompt, a context window, and a model whose internals are opaque even to its creators. You cannot audit it the same way. You cannot predict what it will do with a novel input.

When you give a service account access to your HRMS, you know it will execute the three API calls in its code. When you give an AI agent access to your HRMS with the same credentials, it can do anything those credentials allow — including actions no one anticipated when the permissions were granted.

[Figure: Traditional zero trust vs. agent zero trust]
Human zero trust: User → Device Posture Check → Identity + MFA Verification → Network Location Context → Role-Based Access Grant → SESSION GRANTED
Agent zero trust: Agent → Ephemeral Credential Issuance → Per-Action Authorization → Context-Bound Token Scoping → Output Validation Gate → Real-Time Anomaly Detection → SINGLE ACTION GRANTED

Fig 1 — Human zero trust grants sessions. Agent zero trust authorizes individual actions.

Five Pillars of Agent Zero Trust

1. Ephemeral Credentials

No agent should possess a long-lived credential. Every agent session starts with a credential that expires in minutes, not hours or days. The credential is cryptographically bound to the specific task the agent was invoked to perform. When the task completes — or the credential expires — the agent has zero access. No residual permissions. No token to steal.

This is fundamentally different from how service accounts work. A service account credential might be rotated every 90 days. An agent credential should live for 90 seconds.
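As a minimal sketch of what task-bound, short-lived issuance could look like: the credential carries the task it was minted for and an expiry measured in seconds, and verification fails for any other task or after the TTL. All names here (`issue_credential`, `is_valid`, the demo signing key) are illustrative, not a real API; production issuance would use a KMS-managed key and a standard token format.

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key"  # illustrative only; use a KMS-managed key in practice

def issue_credential(agent_id: str, task_id: str, ttl_seconds: int = 90) -> dict:
    """Mint a short-lived credential cryptographically bound to one task."""
    claims = {
        "agent": agent_id,
        "task": task_id,                   # useless for any other task
        "exp": time.time() + ttl_seconds,  # lifetime in seconds, not days
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def is_valid(cred: dict, task_id: str) -> bool:
    """A credential is valid only if untampered, unexpired, and for this task."""
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(cred["sig"], expected)
        and time.time() < cred["claims"]["exp"]
        and cred["claims"]["task"] == task_id
    )
```

When the task completes or the 90 seconds elapse, there is nothing left to steal: the same check that enforces the task binding also enforces the expiry.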

2. Per-Action Authorization

Role-based access control grants a set of permissions for a session. The agent gets "HR Manager" role and can do everything an HR Manager can do. Per-action authorization evaluates every individual action against policy before execution. The agent wants to read an employee record? Authorized. The agent wants to modify salary data? That requires a separate authorization decision, evaluated in real time against the current policy and context.

This is computationally expensive. It adds latency to every agent action. It is also the only model that is safe for non-deterministic actors.
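The difference from RBAC can be shown in a few lines: instead of granting a role up front, every individual action is checked against policy at the moment of execution, with the current context as input. The policy table and its rules below are invented for illustration; a real deployment would use a policy engine, not a dictionary.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    verb: str      # e.g. "read", "write"
    resource: str  # e.g. "employee_record", "salary"

# Illustrative policy: each rule is evaluated per action, per context,
# not once per session. Unlisted actions are denied by default.
POLICY = {
    ("read", "employee_record"): lambda ctx: True,
    ("write", "salary"): lambda ctx: ctx.get("human_approved", False),
}

def authorize(action: Action, context: dict) -> bool:
    """Evaluate one action against current policy at execution time."""
    rule = POLICY.get((action.verb, action.resource))
    return bool(rule and rule(context))
```

Note that the same agent, with the same identity, gets different answers for different actions: the read succeeds unconditionally, while the salary write succeeds only when the context carries an approval.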

3. Context-Bound Tokens

A context-bound token carries not just identity and permissions, but the specific context in which it is valid. The token encodes: which task the agent is performing, which data entities it is operating on, which actions it is permitted to take on those entities, and what the maximum blast radius of any action can be.

If the agent attempts to use the token outside its bound context — accessing a different employee record, performing a different type of action — the token is invalid. Not expired. Invalid. The system does not need to evaluate whether the action is allowed. The token itself prevents it.
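A sketch of that structure, under the assumption that entities and actions can be named as strings and blast radius as a record count (the class and field names are hypothetical): the token's own check fails for anything outside its bound context, so no policy evaluation is ever reached.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ContextBoundToken:
    task: str                 # the task the agent was invoked to perform
    entities: frozenset       # data entities the token covers
    actions: frozenset        # verbs permitted on those entities
    max_blast_radius: int     # e.g. max records any one action may touch

    def permits(self, action: str, entity: str, records_affected: int) -> bool:
        # Outside the bound context the token is simply invalid --
        # the check happens before any policy engine is consulted.
        return (
            action in self.actions
            and entity in self.entities
            and records_affected <= self.max_blast_radius
        )
```

A token scoped to reading one employee record cannot be repurposed: a different record, a different verb, or a wider blast radius each fails the same structural check.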

4. Output Validation

Every action an AI agent takes produces an output — a database write, an API call, a message, a decision. Before any output reaches its destination, it passes through a validation gate. The gate checks: Does this output conform to the schema of what was expected? Does it contain data the agent was not authorized to access? Does it attempt to escalate permissions or modify access controls? Does it contain personally identifiable information that should not leave the current context?

Output validation is the last line of defense against prompt injection, hallucination-driven actions, and model-level exploits.
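Two of those checks (schema conformance and a PII scan) can be sketched as a gate that returns a list of violations, where an empty list means the output may pass. The expected-field set and the SSN regex are illustrative assumptions standing in for a per-task schema and a real DLP pattern library.

```python
import re

EXPECTED_FIELDS = {"report_id", "summary", "total"}  # illustrative per-task schema
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # one example PII pattern

def validate_output(output: dict) -> list:
    """Return the list of violations; empty means the output may leave the gate."""
    violations = []
    if set(output) != EXPECTED_FIELDS:
        violations.append("schema mismatch")
    for value in output.values():
        if isinstance(value, str) and SSN_PATTERN.search(value):
            violations.append("PII detected")
    return violations
```

Because the gate inspects what the agent actually produced rather than what it was permitted to do, it catches failures the authorization layer cannot see, such as an injected instruction smuggling data into an otherwise-authorized write.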

5. Real-Time Anomaly Detection

Human behavior can be baselined over weeks and months. An agent's behavior must be baselined in real time, against the specific task it was invoked to perform. If an agent tasked with generating a weekly sales report starts querying employee compensation data, that is an anomaly — regardless of whether its credentials technically allow the access.

Anomaly detection for agents must operate at the action level, not the session level. Every action is evaluated against the expected action sequence for the task. Deviation triggers immediate investigation and potential credential revocation.
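As a deliberately simple sketch of action-level baselining (a real system would match against an action graph, not a fixed list): each action is compared against the expected sequence for the invoked task, and any deviation, including the compensation query from the sales-report example above, flags immediately. The task and action names are invented for illustration.

```python
# Expected action sequences per task (illustrative; a real baseline
# would be a graph of permissible next actions, not a fixed list).
EXPECTED_ACTIONS = {
    "weekly_sales_report": ["read:sales_orders", "read:pricing_table", "write:report"],
}

def check_action(task: str, action: str, history: list) -> str:
    """Compare each action, as it happens, against the task's expected sequence."""
    expected = EXPECTED_ACTIONS.get(task, [])
    step = len(history)
    if step >= len(expected) or action != expected[step]:
        return "anomaly"  # trigger investigation / credential revocation
    history.append(action)
    return "ok"
```

Note that the check never asks whether the credentials permit the action; it asks whether the action belongs to the task, which is a stricter question.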

The Agent Threat Model

Traditional security threat models focus on external attackers and insider threats. Agent zero trust must account for a new category: the agent itself as an unintentional threat actor.

AGENT THREAT MODEL

Threat Vector         | Risk                                         | Mitigation
Prompt Injection      | Agent executes attacker instructions         | Output validation + context binding
Credential Theft      | Stolen token grants broad access             | Ephemeral credentials (90s TTL)
Privilege Escalation  | Agent acquires permissions beyond scope      | Per-action authorization
Data Exfiltration     | Agent leaks sensitive data via outputs       | Output validation gate
Action Chaining       | Benign actions combine into harmful sequence | Real-time anomaly detection
Model Manipulation    | Adversarial inputs alter agent behavior      | Context-bound tokens + audit
Hallucination Actions | Agent acts on fabricated information         | Schema validation + human-in-loop

Fig 2 — The agent threat model requires mitigations at every layer, not just at the perimeter.

The Action Chain Problem

The most insidious threat in the agent security model is action chaining. Each individual action an agent takes might be perfectly authorized. Read a customer record — authorized. Read a pricing table — authorized. Send an email — authorized. But the combination — read customer data, look up their contract terms, and email a competitor with the details — is a catastrophic data breach.

Traditional security models evaluate actions in isolation. Agent zero trust must evaluate action sequences. This requires maintaining a real-time action graph for every active agent, tracking not just what it did, but what it might do next given what it now knows.

This is computationally expensive. It requires a policy engine that understands not just permissions but intent. And it cannot be bolted onto existing security infrastructure. It must be built into the control plane that governs agent execution.
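A toy version of the sequence check makes the idea concrete: track whether the agent has read from a sensitive source, and flag any later egress action in the same chain. The taint-tracking approach and all the action names are simplifying assumptions; a production policy engine would reason over a full action graph with data-flow labels.

```python
# Illustrative taint check: flag sequences where data read from a
# sensitive source later flows into an external-facing action.
SENSITIVE_READS = {"read:customer_record", "read:contract_terms"}
EGRESS_ACTIONS = {"send:email_external", "post:webhook"}

def chain_risk(actions: list) -> bool:
    """True if any egress action follows a sensitive read in the sequence."""
    tainted = False
    for action in actions:
        if action in SENSITIVE_READS:
            tainted = True
        elif action in EGRESS_ACTIONS and tainted:
            return True
    return False
```

Every action in the risky chain passes an isolated authorization check; only the sequence-level view exposes the breach, which is the whole argument for evaluating chains rather than actions.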

Why the Control Plane Is the Right Place for Agent Security

Agent zero trust cannot be implemented at the application level. An agent that operates across CRM, HRMS, and finance cannot be governed by three separate security models with three separate policy engines. The security decisions must happen at the layer that sees all of the agent's actions across all systems — the control plane.

This is why Own360's architecture treats agent governance as a first-class control plane function. Every agent action, across every application, passes through a single authorization engine. Credentials are issued by the control plane and scoped to specific tasks. Output validation happens before any action reaches the target application. The audit trail captures not just what happened, but the full decision chain — which model generated the action, what context it had, what policy was applied, and what the validation result was.

You cannot secure AI agents by adding another layer of middleware. You secure them by making the security model native to the infrastructure they run on.

What Needs to Change

If your organization is deploying AI agents — or plans to — your security architecture needs a fundamental rewrite. Not a patch. Not an additional tool. A new model.

Stop treating agents as service accounts. Stop granting session-level access. Stop assuming you can predict what a non-deterministic system will do with the permissions you gave it. Start thinking in terms of ephemeral credentials, per-action authorization, and continuous output validation.

The organizations that get this right will deploy agents with confidence. The organizations that don't will discover, painfully, that their zero trust architecture has a very large hole in it shaped exactly like an AI agent.

See agent governance in action

Own360's control plane implements per-action authorization, ephemeral credentials, and real-time output validation for every AI agent across 19 enterprise applications.

See it live →