The AI Agent Identity Crisis: Why Your Security Model Is Already Broken

2026-02-02 · 8 min read
ai-security · agentic-ai · identity-management · shadow-ai · iam

Here's something that keeps me up at night. We've spent decades building identity and access management systems. Least privilege. Zero trust. All that good stuff. And now we're just... handing AI agents the keys to everything?

I'm not exaggerating. A recent scan at one company found 17 AI agents per employee. Seventeen. Most of the security team had no idea they existed.

What Are AI Agent Identities, Anyway?

Let's back up. When someone spins up an AI agent, that agent needs credentials. OAuth tokens. API keys. Repository access. Whatever it takes to actually do work. These are AI agent identities, and they're multiplying fast.
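Before you can govern these identities, you have to be able to describe them. Here's a minimal sketch of what an inventory record for one agent identity might capture; the field names and the `AgentIdentity` class are my own illustration, not any vendor's schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    """Minimal inventory record for one AI agent identity (illustrative)."""
    agent_id: str
    owner: str              # the human accountable for this agent
    credential_type: str    # e.g. "oauth_token", "api_key"
    scopes: list            # what the credential can actually touch
    created_at: datetime
    last_seen: datetime

    def is_orphaned(self, active_employees: set) -> bool:
        # An agent whose owner has left the org is a governance gap.
        return self.owner not in active_employees

agent = AgentIdentity(
    "agent-042", "alice", "oauth_token", ["repo:read", "mail:send"],
    created_at=datetime(2025, 6, 1, tzinfo=timezone.utc),
    last_seen=datetime(2026, 1, 30, tzinfo=timezone.utc),
)
```

Even this little record answers the questions most teams can't today: who owns this thing, what can it reach, and is anyone still accountable for it?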

The problem? Nobody's governing them.

"We're letting things happen right now that we would have never let happen with our human employees," says Shahar Tal, CEO of Cyata. "We're letting thousands of interns run around in our production environment, and then we give them the keys to the kingdom."

That's not hyperbole. It's Tuesday.

Why Traditional IAM Can't Handle This

Your identity management tools were built for humans. Predictable roles. Nine to five access patterns. Maybe some service accounts doing scheduled tasks.

AI agents break all of that.

They run 24/7. They adapt their behavior on the fly. They chain together access across multiple systems in ways nobody anticipated. Teleport CEO Ev Kontsevoy puts it bluntly: "AI agents are not human, but they also do not behave like service accounts or scripts."

So what are they? Good question. Nobody really knows yet. And that's the problem.

Traditional PAM and IAM assume identity is either human or machine. But agents exist in this weird middle ground. They're autonomous. Non-deterministic. They "want to please," as one CTO put it recently, which means they can be manipulated in ways that static service accounts can't.

The Numbers Are Scary

Gartner says 40% of enterprise apps will integrate with AI agents by the end of 2026, up from 5% in 2025. That's insane growth.

Meanwhile, machine identities already outnumber human identities 82 to 1. And most organizations can't tell you which of those are AI agents versus regular service accounts.

I've talked to security teams who ran their first discovery scan and just... stared at the screen. Thousands of agent identities they didn't know existed. Some created by well-meaning employees trying to automate their workflows. Some connected to every data source in the company.

One security vendor told me they find "anywhere from one agent per employee to 17 per employee" when they scan customer environments. Engineering and R&D teams adopt fastest, but it's spreading everywhere.

Shadow AI Is the New Shadow IT

Remember when employees started using Dropbox and Google Docs before IT approved them? Same thing is happening with AI agents. Except the blast radius is way bigger.

Someone creates a personal ChatGPT account. Connects it to their work email. Maybe adds some MCP servers to access corporate data. Suddenly you've got an unmonitored AI agent with broad access to sensitive information.

"We are seeing a lot of shadow AI," Tal says. "Someone using a personal account for ChatGPT or Cursor or Claude Code or any of these productivity tools."

These shadow agents often get more access than they need. People create them as experiments. "Let me just connect it to everything and see what happens." Then they forget about them. Or move to a different team. Or leave the company entirely.

The agent keeps running. Still has all those tokens. Still connected to your production systems.
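That lifecycle gap is detectable. A periodic sweep over your credential inventory can flag grants whose owner has left or that haven't been used in weeks. This is a sketch under assumed inputs (a list of grant dicts and a set of active employees), not a real IdP integration:

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=30)  # arbitrary threshold for illustration

def find_stale_grants(grants, active_employees, now=None):
    """Flag agent credentials that are orphaned or idle.

    grants: list of dicts with 'agent', 'owner', 'last_used' (datetime).
    Returns agent IDs that deserve revocation review.
    """
    now = now or datetime.now(timezone.utc)
    flagged = []
    for g in grants:
        orphaned = g["owner"] not in active_employees
        idle = now - g["last_used"] > STALE_AFTER
        if orphaned or idle:
            flagged.append(g["agent"])
    return flagged
```

Run it on a schedule, route the flagged IDs to the owning team, and revoke anything nobody claims.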

Real Attacks Are Already Happening

This isn't theoretical. Block found during a red team exercise that their AI coding agent could be tricked into deploying malware on employee laptops. Prompt injection. The agent wanted to be helpful, so when fed malicious instructions disguised as legitimate requests, it complied.

They fixed it. But how many companies are even looking?

Researchers have shown how AI agents with broad access can create "superuser" chains. Access this system, then that one, then another, until you've got a path to whatever you want. Exfiltrate data. Execute code. Whatever.

The agents don't know they're being manipulated. They just follow instructions. That's what they're built to do.
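You can reason about those chains before an attacker does. Model identities and systems as nodes in a directed graph, with an edge wherever one credential grants access to another system, then check reachability. This is a toy breadth-first search over an assumed graph shape, not any particular product's analysis engine:

```python
from collections import deque

def reachable(access_graph, start, target):
    """BFS over an access graph.

    Nodes are identities or systems; an edge A -> B means credentials
    held at A grant access to B. Returns True if chaining access from
    `start` can eventually reach `target`.
    """
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in access_graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# A shadow agent with CI access transitively reaches the prod database.
graph = {"shadow-agent": ["ci"], "ci": ["prod-db"]}
```

If a low-scrutiny agent can transitively reach your crown jewels, that path is the real blast radius, regardless of what the agent was nominally set up to do.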

What Actually Works

Look, I'm not saying we should ban AI agents. They're genuinely useful. The productivity gains are real. But we need to treat their identities seriously. Here's what I think actually helps:

1. Discovery first. You can't secure what you don't know exists. Run a scan. Find out how many AI agents are actually operating in your environment. The number will probably shock you.

2. Treat agents as a separate identity class. Not human. Not traditional service account. Something new that needs its own policies and monitoring.

3. Apply least privilege aggressively. Just because an agent could access everything doesn't mean it should. Scope credentials to what's actually needed. Review regularly.

4. Monitor for anomalies. Agent behavior should be somewhat predictable based on its purpose. Big deviations from normal patterns deserve investigation.

5. Expire tokens. Short-lived credentials. Force re-authentication. Don't let agent access persist forever just because someone set it up once.

6. Classify agents by risk. A coding assistant with repo access is different from an AI that can send emails or access financial data. Not all agents need the same level of scrutiny.
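Points 3 and 5 combine naturally at the point where tokens are issued: deny any scope outside policy, and make everything you do grant expire quickly. A minimal sketch, assuming a hypothetical `ALLOWED_SCOPES` policy set and a 15-minute default TTL:

```python
import secrets
from datetime import datetime, timedelta, timezone

# Hypothetical per-agent policy: everything not listed is denied.
ALLOWED_SCOPES = {"repo:read", "ci:trigger"}

def issue_agent_token(requested_scopes, ttl_minutes=15):
    """Issue a short-lived, narrowly scoped agent credential.

    Scopes outside policy raise rather than being silently granted,
    and the token carries an explicit expiry.
    """
    denied = set(requested_scopes) - ALLOWED_SCOPES
    if denied:
        raise PermissionError(f"scopes not permitted: {sorted(denied)}")
    return {
        "token": secrets.token_urlsafe(32),
        "scopes": sorted(set(requested_scopes)),
        "expires_at": datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    }
```

The design choice that matters is the default: an agent that needs more access has to come back and ask, instead of sitting on a forever-token to everything.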

The CrowdStrike Move

CrowdStrike just announced a $740 million acquisition of SGNL, an identity security company. The stated goal? Getting a grip on AI agent identities specifically.

That's a big bet. And it tells you where the industry thinks this is heading.

The companies that figure out AI agent identity management early will have a major advantage. The ones that don't? They're running blind while autonomous systems proliferate across their networks.

My Take

Honestly? This feels like 2010 again. Cloud adoption was exploding. Security teams were scrambling to adapt tools built for on-prem environments. There was this gap between what technology enabled and what security could protect.

We're in that gap right now with AI agents.

The difference is the speed. Cloud adoption took years to really hit critical mass. AI agent adoption is happening in months. Security teams don't have the luxury of slowly evolving their practices.

I think the organizations that will do best are the ones treating this as a genuine new category. Not trying to force agents into existing human or machine identity buckets. Building new frameworks from scratch.

It's uncomfortable. There's no established playbook. But waiting for one means letting ungoverned agents run loose in your environment while you figure it out.

That's a risk I wouldn't take.


Working on AI agent security? Connect on LinkedIn or check out my CV.

About the Author

Trym Håkansson is Lead of Security Operations at Crayon, specializing in MDR, incident response, and Microsoft security platforms.