


Why AI Agents Break Every Assumption You've Made About Privileged Access
April 2026 / 8 min. read

There's a version of this conversation that security teams have been having for the last two years.
It usually sounds like: "We'll treat the agents like service accounts. We'll vault the credentials. We'll add a workflow for approval."
It's a reasonable instinct. It’s also not going to hold.
Ketan Kapadia, Britive’s Field CTO, sat down with Shushant Choudhury, an identity practitioner with nearly two decades of experience across financial services, healthcare, retail, and manufacturing, to work through exactly where that thinking breaks down and what you actually need to get this right.
Here's what came out of that conversation.
The Fallacy of the Vault-Based Approach (Again)
The most common misconception Shushant sees when organizations first apply their existing PAM tooling to AI agents is assuming that storage = security.
If a secret is stored in a centralized vault, teams assume it's protected.
But an active secret, vaulted or not, is an open risk. Vaulting solves the wrong problem.
Service accounts operated on predictable, scripted behavior. You could vault the credential, monitor access, and have a reasonable degree of confidence about what that account would do.
AI agents don't work that way. They're non-deterministic and generate their own execution plans. They can traverse multiple cloud systems, SaaS platforms, and APIs in the scope of a single task.
The blast radius when something goes wrong isn't comparable to a service account. It's a different category of exposure entirely.
A Service Account is Not the Same as an Agent
This distinction matters more than it might seem.
Service accounts were built around fixed roles and static functions. They did what they were told, in the order they were told, on a predictable schedule. That made them easier to control, even if imperfectly.
Agents operate on intent. You give them a goal and they determine the path. They can take actions you didn't anticipate, across systems you didn't plan for, faster than any monitoring tool built for human-scale activity can respond.
Treating an agent like a service account doesn't just create a gap. It creates a gap you probably won't see until it matters.
What “First-Class Identity” Really Means Operationally
Calling agents "first-class identities" has become a huge talking point. But what does that look like in practice?
Shushant framed it as pulling agents out of the junk drawer of unmanaged service accounts and into a structured lifecycle. Not as a philosophical shift, but an operational one.
That starts with onboarding. Before an agent ever executes a task, you need to know: What's its intent? What LLM is it built on? What risk tier does it sit in? Who owns it? An agent registry answers those questions.
Think of it the way an HR system tracks a human employee, except you're tracking behavior scope, ownership, and intent instead of title and department.
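One way to picture that registry entry is a small structured record. This is a minimal sketch, not a standard schema; every field name here (`agent_id`, `risk_tier`, `allowed_scopes`, and so on) is a hypothetical choice for illustration.

```python
from dataclasses import dataclass

# Hypothetical registry record: the fields mirror the onboarding questions
# above (intent, underlying model, risk tier, owner).
@dataclass(frozen=True)
class AgentRecord:
    agent_id: str
    owner: str                  # accountable human or team
    intent: str                 # registered purpose
    base_model: str             # the LLM the agent is built on
    risk_tier: str              # e.g. "low", "medium", "high"
    allowed_scopes: tuple = ()  # systems/actions it may request

class AgentRegistry:
    def __init__(self):
        self._records = {}

    def register(self, record: AgentRecord):
        self._records[record.agent_id] = record

    def lookup(self, agent_id: str):
        return self._records.get(agent_id)

registry = AgentRegistry()
registry.register(AgentRecord(
    agent_id="mail-summarizer-01",
    owner="it-automation@example.com",
    intent="summarize inbound email",
    base_model="gpt-4o",
    risk_tier="low",
    allowed_scopes=("mail.read",),
))
```

The point of the structure, not the specific fields, is that every agent resolves to an owner, an intent, and a risk tier before it ever runs.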
From there, you need cryptographic attestation. Not just verifying that the agent is who it says it is, but that it's still behaving according to its registered intent. If the underlying model drifts or gets updated, authentication alone doesn't catch that. Authorization has to (more on that in the next section).
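A toy way to see why attestation catches drift: sign the registered manifest at onboarding, then re-verify against the agent's current manifest. This sketch uses a shared HMAC key for brevity; a production system would use asymmetric keys and a real PKI, and the manifest fields are hypothetical.

```python
import hashlib
import hmac
import json

REGISTRY_KEY = b"demo-signing-key"  # stand-in for a real key held by the registry

def attest(manifest: dict) -> str:
    """Sign the registered manifest (identity + intent + model) at onboarding."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(REGISTRY_KEY, payload, hashlib.sha256).hexdigest()

def verify(manifest: dict, signature: str) -> bool:
    """Re-derive the signature from the agent's *current* manifest."""
    return hmac.compare_digest(attest(manifest), signature)

registered = {
    "agent_id": "mail-summarizer-01",
    "intent": "summarize inbound email",
    "model": "gpt-4o-2024-08",
}
sig = attest(registered)

# A silent model update changes the manifest, so attestation fails,
# even though the agent's credentials would still authenticate fine.
drifted = dict(registered, model="gpt-4o-2024-11")
```

Authentication alone would never notice the model swap; binding the signature to the full manifest is what surfaces it.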
And when the task is done, the access goes away.
Just-in-time provisioning was already best practice for humans, and it's definitely not optional for agents operating at machine speed.
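The "access goes away on its own" property can be sketched as a grant that carries its own expiry, so there is no revocation step to forget. The class and field names are illustrative.

```python
import time

class JITGrant:
    """A time-boxed permission: it expires on its own when the task window ends."""
    def __init__(self, agent_id: str, scope: str, ttl_seconds: float):
        self.agent_id = agent_id
        self.scope = scope
        self.expires_at = time.monotonic() + ttl_seconds

    def is_valid(self) -> bool:
        return time.monotonic() < self.expires_at

grant = JITGrant("mail-summarizer-01", "mail.read", ttl_seconds=0.05)
assert grant.is_valid()       # live for the duration of the task...
time.sleep(0.06)
assert not grant.is_valid()   # ...then simply gone, with nothing standing
```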
Authentication and Authorization are Two Different Problems
One of the cleaner frameworks to come out of the conversation sits at the core of Britive's approach to access security: decoupling authentication from authorization.
An agent can prove it's a legitimate, trusted agent and still get it wrong. If it's been manipulated, if it hallucinates, if its intent drifts, the authentication layer will still say yes.
Provisioning and enforcing authorization at runtime is what catches risks like these.
The example Shushant used is worth repeating. An agent's registered intent is to summarize an email. The agent starts attempting to delete a database. Authentication says it's a valid agent. Runtime authorization says that action doesn't match its intent and blocks its access.
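The gap between the two layers is easy to show in code. In this sketch, intent is flattened to a set of allowed actions per agent; a real policy engine would be far richer, and the names are hypothetical.

```python
# Hypothetical intent-to-action mapping derived from the agent registry.
REGISTERED_INTENT = {
    "mail-summarizer-01": {"mail.read", "mail.summarize"},
}

def authorize(agent_id: str, requested_action: str) -> bool:
    """Runs *after* authentication succeeded: checks the action against intent."""
    allowed = REGISTERED_INTENT.get(agent_id, set())
    return requested_action in allowed

authorize("mail-summarizer-01", "mail.summarize")  # matches registered intent
authorize("mail-summarizer-01", "db.delete")       # valid agent, wrong action
```

Authentication answered "is this agent who it claims to be?"; this layer answers the separate question "should it be doing this right now?"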
Adhering to zero standing privileges is what makes this work. An agent with persistent permissions is a 24/7 attack surface.
An agent that gets exactly the access it needs for exactly the duration it needs it, and nothing more, reduces that surface to near zero.
ZSP was considered best practice when we were only applying it to humans. With agents, it's become a requirement.
Incorporating Humans in the Loop Without Approval Fatigue
Every security team has lived through approval workflow fatigue. A request comes in without context, and someone clicks approve because the queue is full. The control exists on paper, but it doesn't function in practice.
Adding humans to the agent approval loop will hit the same wall if it's designed the same way. A binary approve/deny decision with no context generates rubber stamps, not informed, risk-based decisions.
What actually works is shifting approvals from binary decisions to contextual ones. The agent registry establishes the baseline: what access an agent should have and what it should be doing.
When an agent requests access, the approver sees what the agent is supposed to be doing, what it's actually requesting, and whether the request matches its registered intent. That's human-in-the-loop oversight that actually functions.
The other thing to consider is that not everything needs a human.
A well-scoped permission model means routine tasks run automatically, and higher-stakes actions, like a refund, a database write, or a potentially destructive operation, get escalated for human approval.
This way, the agent works at speed where it should, while humans stay involved where necessary.
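That split can be expressed as a simple routing rule: low-risk actions auto-approve, high-stakes ones escalate with context attached, and anything unregistered is denied by default. The action names and tiers here are made up for illustration.

```python
# Hypothetical risk tiers; in practice these would come from the agent registry.
AUTO_APPROVE = {"mail.read", "ticket.comment"}
ESCALATE = {"payment.refund", "db.write", "db.delete"}

def route(request: dict) -> str:
    action = request["action"]
    if action in AUTO_APPROVE:
        return "auto-approved"
    if action in ESCALATE:
        # Escalations carry context, so the approver sees more than approve/deny.
        return (f"escalate: agent={request['agent_id']} "
                f"intent={request['intent']} wants={action}")
    return "denied"  # default-deny anything outside the registered model
```

The contextual escalation string is what keeps the human step from degrading into a rubber stamp: the approver sees intent next to the request.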
The Control Plane: A Unified Access Broker
The architectural pattern Shushant described is straightforward. An agent should never connect directly to a resource.
It sends a request to a broker, such as an MCP server. The broker validates the action against policy and the agent's registered intent, then provisions the permission at runtime.
That broker is what prevents credential leakage. Agents can expose secrets through memory, chat logs, and tool outputs. If the agent never sees the underlying credential in the first place, that exposure vector doesn't exist.
This is also where MCP governance fits. An MCP server might expose 20 tools by default. That doesn't mean your agent needs access to all 20.
Having a broker in place controls what tools are within an agent’s reach, not just what credentials are available.
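The broker pattern reduces to two checks in miniature: is the tool on this agent's allowlist, and if so, inject the credential server-side so the agent never holds it. Everything here (the vault dict, tool names, allowlist) is a stand-in for illustration.

```python
# Stand-in for a vault that only the broker can read.
SECRETS = {"mail.fetch": "s3cr3t-token"}

# The MCP server might expose 20 tools; this agent is scoped to one of them.
TOOL_ALLOWLIST = {"mail-summarizer-01": {"mail.fetch"}}

def broker_call(agent_id: str, tool: str, args: dict) -> str:
    if tool not in TOOL_ALLOWLIST.get(agent_id, set()):
        return f"blocked: {tool} not in agent's allowlist"
    # The credential is resolved and used here, broker-side. It never enters
    # the agent's memory, chat logs, or tool outputs.
    token = SECRETS[tool]
    return f"executed {tool} with broker-injected credential"

broker_call("mail-summarizer-01", "mail.fetch", {})  # allowed, credential unseen
broker_call("mail-summarizer-01", "db.query", {})    # not in scope, blocked
```

Because the secret only ever exists inside `broker_call`, the leakage vectors the section describes (memory, logs, tool outputs) have nothing to leak.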
Now, Where Do You Start?
The most practical question from the webinar was the simplest one: what can you do today to secure the agents that are already in place?
You start with discovery. You cannot govern what you don't know exists. Most organizations don't have a complete picture of the agents running in their environment, who owns them, what they're accessing, or how they were built.
Building that agent registry is the foundation. Everything else (policy enforcement, runtime authorization, JIT provisioning, audit logging) depends on knowing what you're governing.
The conversation about agentic identity is moving fast. The organizations that will manage it well are the ones that treat it as an identity problem first, not a monitoring problem or a compliance checkbox.
The controls already exist. They just need to be applied at the right layer, with the right architecture, at the speed agents actually operate.

