73% of Security Teams Say AI Threats Are Real. Half Feel Unprepared. Now What?
I've written about the AI agent identity crisis and the Moltbook breach over the past few days. Both posts focused on the attack side: what can go wrong when AI agents run wild.
But there's another side to this story. The defender side.
And according to fresh data from Darktrace, it's not looking good.
The Numbers That Should Scare You
Darktrace just dropped their State of AI Cybersecurity report for 2026, and one finding stands out above everything else:
73% of security professionals say AI-powered threats are already having a significant impact on their organization.
Not "might have impact someday." Not "we're concerned about future risks." Already. Right now. Today.
But here's the kicker: nearly half of those same professionals feel unprepared to defend against AI-driven attacks.
Nearly three quarters of security teams know the threat is real. Barely half think they can handle it. That gap is where breaches happen.
The Readiness Gap
I call it the readiness gap. The distance between knowing something is a problem and being able to do anything about it.
In traditional security, this gap exists but it's usually manageable. You know ransomware is a threat, so you segment networks and backup data. You know phishing is a problem, so you train users and deploy email security.
But AI threats are different. They move faster. They adapt. They scale in ways that human attackers can't.
And the tools to defend against them? Still catching up.
What's Actually Changing
Let me break down why AI threats feel so different from what came before.
Speed of evolution. A traditional threat actor might take weeks to develop a new phishing campaign. An AI-assisted attacker can generate thousands of variations in hours. Test them. Learn what works. Iterate.
Personalization at scale. Spear-phishing used to be expensive. You needed someone to research the target, craft a believable message, make it feel personal. AI does this automatically. Every target gets a custom-tailored attack. No extra effort required.
Exploitation of AI systems themselves. We're not just talking about attackers using AI. We're talking about attackers targeting the AI systems you've already deployed. Prompt injection, data poisoning, model manipulation. Attack surfaces that didn't exist two years ago.
Deepfakes and synthetic media. Voice cloning is getting scary good. We've already seen cases of CFO deepfakes authorizing wire transfers. It's only going to get worse.
Why Teams Feel Unprepared
The Darktrace survey didn't just measure fear. It measured capability gaps. And the patterns are revealing.
Tool fatigue. Security teams are drowning in alerts. Adding AI-focused tools often means adding more noise. More dashboards. More things to monitor. More skills to develop. The promise of AI-powered defense is offset by the reality of yet another system to manage.
Skill gaps. Understanding traditional threats is hard enough. Understanding AI threats requires a different knowledge base. Prompt engineering. Model behavior. Training data vulnerabilities. Most security teams don't have these skills. And hiring them is expensive.
Organizational inertia. Leadership sees the headlines about AI threats but doesn't always translate that awareness into budget and headcount. "We already have a security team" is a common response. Never mind that the threat landscape just fundamentally changed.
No playbook. When ransomware hit, we had years of incident response procedures to draw from. AI threats are newer. The playbooks don't exist yet. Teams are improvising. That's stressful.
The CISO Burden
Here's something that doesn't get discussed enough: this is exhausting for security leaders.
CISOs in 2026 are expected to understand traditional security, cloud security, application security, OT security, and now AI security. They're supposed to articulate risks to the board in business terms while also evaluating technical controls in engineering terms.
The scope keeps expanding. The budget rarely keeps pace. And the consequences of failure are personal. CISOs are the ones who get blamed when breaches happen. Even when they warned leadership. Even when they asked for resources.
Burnout in security leadership is real. The readiness gap isn't just a technical problem. It's a human one.
So What Do We Actually Do?
Alright. Enough doom and gloom. Let's talk practical steps.
Accept that you can't solve everything. Seriously. The attackers have infinite time and creativity. You have limited resources and a day job. Prioritization isn't just helpful, it's mandatory. Focus on the AI threats most relevant to your organization, not every theoretical risk.
Build detection around behavior, not signatures. AI-generated attacks change constantly. Signature-based detection will miss most of them. Focus on anomaly detection. What does normal look like? What deviates from that? This approach works whether the threat is human or AI.
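To make the idea concrete, here's a minimal sketch of behavior-based detection: establish a baseline of normal activity, then flag anything that deviates far from it. The metric (requests per hour), the numbers, and the three-sigma threshold are all illustrative assumptions, not a production design.

```python
from statistics import mean, stdev

def is_anomalous(history, observed, threshold=3.0):
    """Flag an observation more than `threshold` standard
    deviations away from the historical baseline."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Baseline: outbound requests per hour for a service account.
baseline = [102, 98, 110, 95, 105, 99, 101, 97]
print(is_anomalous(baseline, 104))   # a normal hour -> False
print(is_anomalous(baseline, 2400))  # sudden spike -> True, worth a look
```

The point is that this test never asks what tool generated the traffic. A spike is a spike whether a human or a model produced it, which is exactly why behavioral baselines age better than signatures.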
Inventory your AI exposure. Do you know all the AI systems in your environment? Shadow AI is real. Employees are signing up for ChatGPT, Claude, and a dozen other services. They're connecting these services to corporate data. You need visibility into this before you can defend it.
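One cheap way to start that inventory is to grep your proxy or DNS logs for known AI service endpoints. The domain watchlist and the log format below are hypothetical placeholders; swap in the services and log schema your environment actually uses.

```python
from collections import Counter

# Hypothetical watchlist -- extend with the services you care about.
AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def shadow_ai_report(proxy_log_lines):
    """Count requests to known AI service domains per source host.
    Assumes simplified 'src_host dest_domain' log lines."""
    hits = Counter()
    for line in proxy_log_lines:
        src, dest = line.split()
        if dest in AI_DOMAINS:
            hits[(src, dest)] += 1
    return hits

logs = [
    "laptop-42 api.openai.com",
    "laptop-42 example.com",
    "build-srv api.anthropic.com",
    "laptop-42 api.openai.com",
]
print(shadow_ai_report(logs))
# Counter({('laptop-42', 'api.openai.com'): 2,
#          ('build-srv', 'api.anthropic.com'): 1})
```

Even a crude report like this answers the first question: who is talking to which AI services, and how often? That's the visibility you need before any policy conversation.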
Educate beyond phishing. Traditional security awareness training focuses on email links and attachments. That's table stakes now. Your users need to understand deepfakes, voice cloning, and AI-generated impersonation. Update your training.
Join the conversation. The AI security community is still forming. ISACs are starting to share intelligence. Vendors are building new detection capabilities. Researchers are publishing techniques. Stay connected. You can't defend against threats you don't know about.
Push for resources. That readiness gap is partly a budget gap. If your leadership understands AI is a real threat but hasn't funded your response, that's a conversation you need to have. Bring data, risk scenarios, and competitor incidents.
The Bigger Picture
I've been in this industry long enough to see cycles. New threat emerges. Panic. Tools proliferate. Market consolidates. Defenders catch up. New threat emerges.
AI security feels different because the threat and the defense are evolving simultaneously. We're building AI detection while attackers are building AI attacks. It's an arms race, and nobody knows who's winning.
But I do know this: the organizations that take it seriously now will be better positioned than those that wait. The readiness gap closes with action, not hope.
73% of security teams know AI threats are real. The question is whether yours is in the half that can actually respond.
A starting point: inventory the AI systems in your environment. Most teams I talk to are genuinely surprised by what they find.