Threat Talks Revisited: What We Got Right (and Wrong) About 2025

Reading Time: 7 minutes

Category: Trends and Reports

Summary

When we kicked off 2025, we put a few bold predictions on the table. Not the vague kind. The kind that shaped every conversation we had on the podcast: AI would get real, Zero Trust would spill into supply chains, the skills gap would stretch even further, regulation would hit the brakes, and AI’s power use would stop being a footnote and start being a problem.

Some trends moved faster than expected. Some came with more friction than anyone wanted to admit. And a few delivered surprises that even the most caffeinated security teams didn’t see coming.

This review is not about patting ourselves on the back (though we are proud to say we got a lot of these right). It is about giving you a clear snapshot of how the landscape actually moved, what surprised the industry, and where your security strategy may need to adjust next.

To keep ourselves honest, let’s start with what we got wrong.

What we missed

Even strong predictions miss the curve. What didn’t we get quite right?

Like many, we underestimated how fast attackers would build their own AI kits. They didn’t need labs or budgets. They needed an open-source model and a Telegram channel. AI flattened the skill curve and gave small-time criminals big-time capabilities.

Generally speaking, attackers moved faster than anyone forecast. That speed gap is now the biggest risk on the board.

Lastly, while we did accurately predict that deepfakes and AI-driven impersonation would become an issue, we didn’t quite grasp the scale or speed at which they would be adopted.

What looked like an emerging threat turned into an everyday weapon. Voice cloning went from “impressive demo” to “instant service.” CFOs were getting calls from a “CEO” who sounded so convincing that even close colleagues wouldn’t question it. The realism jumped faster than the safeguards, and the gap became obvious.

Before we get into the predictions

We got some things wrong, but in all fairness, we got most predictions right. The trends we called out shaped most of the year. AI, regulation, and supply chain trust defined the headlines, the strategy meetings, and the sleepless nights.

With that confession out of the way, let’s walk through the predictions themselves and see where the calls held up and where reality added a plot twist.

Prediction 1: AI would take over cybersecurity

What we said
AI was going to move from hype to reality. Attackers would use it to build smarter scams. Defenders would use it to close the gap. Cybersecurity would feel less like whack-a-mole and more like strategy.

What happened
2025 made AI impossible to ignore. Everything turned “AI-powered,” and attackers wasted no time proving what that really meant. Phishing emails became almost perfect. Recon sped up. Break-ins got faster.

Then the next step arrived: real AI-driven malware. PromptLock was first to hit the spotlight, adapting to defenses in real time and showing how quickly offensive AI could evolve.

At the same time, AI model and app stores took off with almost no guardrails. Easy access meant poisoned models and hidden backdoors spreading before most teams understood the risk. Cases include the Postmark MCP backdoor that copied emails, Shadow Escape that stole data from a hidden prompt, and kubectl mistakes that wiped servers.
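None of these incidents needed exotic tradecraft, and the guardrails don’t have to be exotic either. One basic control is refusing to load any model artifact that hasn’t been reviewed and pinned by your own team. Below is a minimal sketch of that idea in Python; the file name, digest, and APPROVED_MODELS allow-list are hypothetical placeholders, not a reference to any real registry or product.

```python
import hashlib
from pathlib import Path

# Hypothetical allow-list of model artifacts your team has reviewed.
# The file name and digest below are placeholders, not real values.
APPROVED_MODELS = {
    "summarizer-v2.onnx": "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def verify_model(path: Path) -> bool:
    """Return True only if the file's SHA-256 digest matches the pinned value."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    expected = APPROVED_MODELS.get(path.name)
    return expected is not None and digest == expected

model_file = Path("models/summarizer-v2.onnx")
if not verify_model(model_file):
    raise RuntimeError(f"Refusing to load unverified model artifact: {model_file}")
# Only hand the file to your runtime after the integrity check passes.
```

A check like this won’t catch a model that was poisoned before you pinned it, but it does stop a silently swapped artifact from ever reaching production.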

Defenders countered with AI copilots that sped up triage and detection, but the tools also created new problems. Hallucinated alerts. Confusing recommendations. Dashboards full of noise instead of clarity.

Verdict
We got the direction right. AI did change security. But the impact was messier, faster, and riskier than expected. The promise was real, but so was the pain.

Prediction 2: Zero Trust would stretch into supply chains

What we said
Zero Trust would break out of the network perimeter and hit vendors, manufacturing lines, and even national infrastructure. The supply chain would become the new trust boundary. We called it the “Zero Trust supply chain.”

What happened
That shift became real. Governments banned untrusted hardware. Vendors stopped being partners by default and started to look more like potential threats. Every external dependency turned into an attack surface. IoT systems and operational technology stayed soft spots that attackers kept testing.

Even big enterprise tools were not safe. The breach at Salesloft, a CRM vendor, reminded companies that third-party platforms can become the weak link in an otherwise hardened environment.
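In practice, “vendors as potential threats” comes down to small, unglamorous controls: never act on anything a third-party platform sends you until you have verified it yourself. Here is a minimal sketch, assuming the vendor signs each webhook with a shared HMAC secret; the secret, function names, and header handling are illustrative, not any specific vendor’s actual API.

```python
import hashlib
import hmac

# Hypothetical shared secret for a vendor webhook integration (illustrative only).
WEBHOOK_SECRET = b"rotate-this-secret-regularly"

def is_trusted_payload(body: bytes, signature_header: str) -> bool:
    """Accept the payload only if its HMAC-SHA256 signature matches our secret."""
    expected = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(expected, signature_header)

# In the request handler: drop anything that fails verification,
# even if it arrives from an IP range the vendor has published.
```

The point is not this particular snippet. It is that the vendor’s word, their IP range, and their reputation no longer count as proof.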

Verdict
A clean hit. Supply chain trust jumped from IT chatter to boardroom priority.

Prediction 3: The cyber skills gap would get worse

What we said
The skills gap would keep growing. Technology moves faster than people can learn.

What happened
It accelerated.

That prediction played out clearly. AI stepped in to handle repetitive tasks, but it did not reduce the need for skilled people. It raised the bar. Organizations needed analysts who could interpret AI output, understand context, and make judgment calls that automated systems could not. Teams started tracking “time to competency” as a core metric, just like Peter predicted.

The rise of agentic AI created an even sharper divide. These systems promised autonomy and speed, but they introduced new risks. Without strong oversight, they could take actions that created more problems than they solved.
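The oversight problem is less about the model and more about where the “execute” button sits. One common pattern is an approval gate: the agent can propose whatever it likes, but destructive actions only run after a human signs off. The sketch below illustrates the pattern; the action names and callbacks are hypothetical, not taken from any specific agent framework.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical set of actions considered destructive enough to need sign-off.
DESTRUCTIVE_ACTIONS = {"delete_namespace", "rotate_all_credentials", "disable_account"}

@dataclass
class AgentAction:
    name: str
    target: str

def execute(action: AgentAction,
            run: Callable[[AgentAction], None],
            approve: Callable[[AgentAction], bool]) -> None:
    """Run low-risk actions automatically; require human approval for destructive ones."""
    if action.name in DESTRUCTIVE_ACTIONS and not approve(action):
        print(f"Blocked: {action.name} on {action.target} is awaiting human approval.")
        return
    run(action)
```

A gate like this does not make an agent safe by itself, but it keeps the worst decisions out of the autopilot’s hands.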

Verdict
Three for three. The skills gap didn’t close. It evolved into something more complex.

Prediction 4: AI regulation would slow things down

What we said
The EU’s AI Act would create friction. The United States would move faster. Europe would get tangled in compliance while the rest of the world kept sprinting.

What happened
That’s pretty much how it played out. The AI Act launched and triggered confusion across industries. Deployments paused. Legal teams got very busy. Apple even disabled features in Europe. That set the tone.

Cross-border work became a compliance puzzle instead of a technical one.

Regulation mattered. It brought guardrails and safety. But it also slowed momentum and made global rollouts far more complicated. The tension between innovation speed and regulatory rigor became painfully real. Companies needed to figure out how to play it safe with AI.

Verdict
Right on target. Governance increased safety but reduced speed.

Prediction 5: AI power use would become a real problem

What we said
Running giant AI models would guzzle power. Energy would become a cyber issue, not just a utilities problem.

What happened
Data centers hit record energy use. Hyperscalers pushed sustainability initiatives faster than ever. Regulators demanded evidence that digital growth wouldn’t turn the grid into a very expensive stress test.

AI shifted from a tech story to an energy story. Then to a climate story. Then to a political story. It’s now impossible to talk cybersecurity without talking about electricity.

Verdict
Spot on. Power is now part of every cybersecurity conversation.

Conclusion

Looking back, the predictions held up, but the value is not in being right. It is in understanding how these shifts actually shaped the year for security teams. AI, regulation, supply chain risk, and the widening skills gap changed the way organizations planned, staffed, and defended their environments.

The real story of 2025 was the pace of change. AI sped up everything. Attacks became faster. Decision cycles shrank. Compliance pressure grew. Outages hit harder. Teams that could adjust their strategy in real time stayed ahead. Teams that waited for stability felt the impact quickly.

That is why these retrospectives matter. They show where the industry moved, where expectations collided with reality, and where customers may need to adjust their own security posture in the year ahead.

Now the question for 2026 is simple: who can adapt faster, the hackers or the humans?