We’ve been doing permissions wrong. Not just a little wrong — completely wrong. And for humans, it mostly didn't matter. For agents, it's going to.
I've spent the last seven years building permissions software. For most of that time, I'd describe the experience as trying to sell people vegetables. Everyone knows they need them. No one is that excited about it. When I'd explain to my relatives what Oso did, I got polite nods and subject changes. When I'd sell to security people, they'd say "yeah, this is really important" — and then not do anything about it.
So I'll admit: it's been a little satisfying watching Aaron Levie, Scott Belsky, and the WSJ suddenly care a lot about permissions.
Everyone agrees least privilege is the goal. The principle was formalized in 1975, and in the fifty years since, we've all nodded along and done essentially nothing about it. Broken access control — the category whose first listed weakness is violation of the principle of least privilege — has sat at #1 on the OWASP Top 10 since 2021. Our own data shows users don't touch 96% of the permissions they have.
You know that guy at your company who's an admin in everything? He just needed one thing in HubSpot, and rather than figure out the right permission, someone shrugged and said "just make him admin." That was five years ago. He's been an admin ever since.
That's not one lazy IT decision. That's how permissions work everywhere.
Security teams knew it was broken. They just never had the leverage to fix it, and no one had built a solution that made fixing it practical. So we all reached the same implicit bargain: overpermissioning is bad, but not bad enough to justify the cost of cleaning it up. Keep the broad roles. Deal with the occasional breach. Move on.
That bargain made sense. The reason it made sense is that humans are self-limiting. We trust them more than we should — any security practitioner will tell you that — but even when humans do bad things, they're slow. They work business hours. They sleep. There are only so many files a person can exfiltrate before they have to go to bed.
The bargain just expired.
To get anything useful out of an agent, you have to do things everyone knows they shouldn't. Connect it to your real apps. Give it real power. Hand it your credentials. That's not a design flaw you can patch. That's the deal. Agents only work when they have access, and that access is exactly what makes them dangerous. A teammate of mine put it well: "Anything worth doing is lethal."
The reason agents break our traditional bargain is they don't have the two human traits that made overpermissioning tenable.
They lack judgment. In December, an AWS engineer used Kiro — AWS's own coding agent — to fix a minor software issue. Kiro determined the most efficient solution was to delete and recreate the environment. Thirteen-hour outage. AWS's agent took down AWS infrastructure. The agent didn't hesitate, didn't ask, didn't notice what it was about to do. It had no concept of consequences, only completion. If your agent causes a catastrophic outage and you end up in front of your board, the fact that the AI made the call offers you no protection.
And they're not bounded. In November 2025, Anthropic disclosed that a Chinese state-sponsored group had used an AI agent to attack roughly thirty global targets — banks, tech companies, government agencies. The agent sent thousands of requests per second with no human in the loop. The whole operation ran faster than any defender could detect it. A human attacker sleeps. An agent doesn't. When something goes wrong at machine speed, the blast radius scales with however much access the agent has.
The bargain is over. We have systems with no judgment that never stop. And we're giving them the same broad, static permissions we built for people who at least had judgment and did stop.
The industry's response so far has been to make agents do less.
The state of the art right now is impersonation: give the agent the same permissions as the user who created it. That's why Claude Code has a flag called --dangerously-skip-permissions. It's honest, at least. You're handing a tireless, fast, judgment-free system the keys to everything its user can touch.
Some people have proposed time-bound credentials — short-lived permissions that expire quickly. That's still thinking like it's 2015. The threat isn't that an agent holds permissions too long. It's that an agent can do more damage in ten minutes than a human can in a year.
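The mismatch is easy to put numbers on. A back-of-envelope comparison, with purely illustrative rates (nothing below is measured data):

```python
# Back-of-envelope: what a "short-lived" credential is worth at machine speed.
# Every rate here is an illustrative assumption, not a measurement.

human_ops_per_minute = 2        # a fast human working through a UI
agent_ops_per_second = 1_000    # an agent issuing API calls

token_lifetime_minutes = 10     # a deliberately short credential lifetime

human_ops = human_ops_per_minute * token_lifetime_minutes
agent_ops = agent_ops_per_second * 60 * token_lifetime_minutes

print(f"human: {human_ops} operations before expiry")     # 20
print(f"agent: {agent_ops:,} operations before expiry")   # 600,000
```

Shrinking the lifetime shrinks both numbers proportionally; it never closes the four-orders-of-magnitude gap, which is why expiry alone doesn't address the threat.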
Outside of coding agents, the market is mostly deploying agents that are so locked down they can't do anything useful; I call this the Impotent Agent. Everyone is waiting for someone to figure out the right model before they let their agents off the leash. 83% of executives expect to increase their use of agents 8x this year. Most security teams can't say no to that pressure. The question isn't whether agents are coming. It's whether you'll have any control over what they do when they get here.
The answer has been in the name for fifty years: least privilege. Give agents only the access they need for the specific task they're doing. Scope it to the task. Revoke it when the task is done.
The problem was never the idea. The problem was that it was impractical. To do it right for humans, you'd need to understand each person's job in detail, map it against the permission models of every system they touch, grant access dynamically, revoke it when the task changes. Nobody could do that. So we took shortcuts.
With agents, the shortcuts are fatal. But the thing that makes agents dangerous is also what makes least privilege finally achievable: agents declare what they're doing. A human engineer might open an AWS secrets file for ten different reasons. An agent fixing a CSS bug has no business opening it at all. The task is explicit, which means you can compute the scope and enforce the access.
Not the principle as an aspiration, but as an implemented system. One that understands the agent's task, maps it against the permission models of every system the agent might touch, scopes access to exactly what's needed, grants it dynamically, revokes it when the task ends or something looks wrong — all in real time, on the critical path, without slowing the agent down.
You can't do this manually. Not for one agent, let alone hundreds. The automation isn't a nice feature. It's the only way this is achievable at all.
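One way to picture "computed scope" is a function from a declared task to a minimal, self-revoking grant. Everything below — the task shape, the policy table, every name — is a hypothetical sketch of the idea, not Oso's API or any shipping product:

```python
from contextlib import contextmanager
from dataclasses import dataclass, field

@dataclass
class Task:
    kind: str                # what the agent declares it is doing
    resources: list[str]     # the objects it names

# Hypothetical policy: the minimal permissions each task kind needs.
SCOPES = {
    "fix-css-bug": {"repo:read", "repo:write:frontend"},
    "triage-ticket": {"tickets:read", "tickets:comment"},
}

@dataclass
class Grant:
    permissions: set = field(default_factory=set)
    active: bool = True

    def allows(self, permission: str) -> bool:
        return self.active and permission in self.permissions

@contextmanager
def scoped_grant(task: Task):
    """Grant exactly the task's scope; revoke when the task ends, even on error."""
    grant = Grant(permissions=SCOPES.get(task.kind, set()))
    try:
        yield grant
    finally:
        grant.active = False  # revocation is automatic, not a cleanup chore

# An agent fixing a CSS bug never gets near the secrets store:
with scoped_grant(Task("fix-css-bug", ["frontend/app.css"])) as g:
    assert g.allows("repo:write:frontend")
    assert not g.allows("aws:secrets:read")
```

The real system has to derive that policy table automatically from the permission models of every connected app, which is exactly the part no one can do by hand.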
There's one more wrinkle: what counts as "normal" for an agent is different from what's normal for a human. An agent reading thousands of files a minute is expected behavior. For a human, it would set off every alarm you have. This means you can’t use existing security tooling to get usable baselines.
Maybe we’ll get there later, but in the meantime you need context. Not just "is this IP from North Korea," but organizational and system-specific context: the org chart, what people are working on, what's sensitive. You need to understand the permission models of every system. Then you can scope access. I wish we could stop here, but even this still won’t be enough. As Joe Sullivan points out, you need to assume these controls will fail, watch everything at runtime, and continuously re-evaluate.
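Assuming the controls can fail, the runtime check has to combine identity-type baselines with task context. A hypothetical sketch of that re-evaluation loop, with made-up thresholds and resource rules:

```python
from dataclasses import dataclass

@dataclass
class Event:
    principal_type: str    # "human" or "agent"
    reads_per_minute: int
    resource: str
    task_kind: str

# Baselines differ by principal type: 5,000 reads/min is routine for an
# agent and an incident for a human. These thresholds are assumptions.
RATE_BASELINES = {"human": 60, "agent": 10_000}

# Context: which resources a task has any business touching (hypothetical).
TASK_RESOURCES = {"fix-css-bug": {"frontend/"}}

def reevaluate(event: Event) -> bool:
    """Return True to keep the grant alive, False to revoke it."""
    if event.reads_per_minute > RATE_BASELINES[event.principal_type]:
        return False  # anomalous even for this kind of principal
    allowed_prefixes = TASK_RESOURCES.get(event.task_kind, set())
    return any(event.resource.startswith(p) for p in allowed_prefixes)

# Thousands of reads/min from an agent fixing CSS: fine, if it stays in scope.
assert reevaluate(Event("agent", 5_000, "frontend/app.css", "fix-css-bug"))
# The same agent opening a secrets file: revoke, whatever the rate.
assert not reevaluate(Event("agent", 5_000, "aws/secrets.env", "fix-css-bug"))
```

The point of the sketch is the shape of the check: neither the rate baseline nor the task scope is sufficient alone, and both have to run on every event, not once at grant time.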
For most of my career, the honest response to "you should fix permissions" was: yeah, probably, but not today.
Today is different. Not because the principle changed, but because the systems did.
Most agents are still in pilots. This is your window. The folks pushing to deploy agents don't yet fully grasp the system change we're undergoing, even if they intuitively have some fear. The risk is that FOMO overrides the fear.
If you're in security, this is the conversation you need to start before someone else finishes it for you. The next time your CEO wants to connect an agent to production systems, you need an answer that isn't "we turned off half the features." You need a model where agents can do real work without being a liability. That model is automated least privilege.
The window is small, and it's closing. Don't be the team that saw it coming and waited anyway.
Thanks to Boris Tane for reading drafts and making this better.

