Everyone's worried that AI's newest models are a hacker's dream weapon

Top AI and government officials tell Axios CEO Jim VandeHei that Anthropic, OpenAI and other tech giants will soon release new models that are scary-good at hacking sophisticated systems at scale.
The one to watch: Anthropic is privately warning top government officials that its not-yet-released model — currently branded "Mythos" — makes large-scale cyberattacks much more likely in 2026.
The model allows agents to work on their own with wild sophistication and precision to penetrate corporate, government and municipal systems. It's a hacker's dream weapon.
- Jim revealed in his new weekly newsletter for CEOs that one source briefed on the coming models says a large-scale attack could hit this year. Businesses are ripe targets. (C-suite only: Request beta of Jim's newsletter.)
Axios got its hands on an unpublished Anthropic blog post describing Mythos. The post said the model is "currently far ahead of any other AI model in cyber capabilities."
- It adds that Mythos "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders."
The threat is no longer theoretical, and it will be exacerbated by employees who test agents without realizing they're making it easier for cybercriminals to hack their company.
Flashback: Late last year, Anthropic disclosed the first documented case of a cyberattack largely executed by AI — a Chinese state-sponsored group that used AI agents to autonomously hack roughly 30 global targets, with the AI handling 80–90% of tactical operations independently.
- This was before agents got exponentially better and those experimenting with them started to open risky new side doors.
Here's why this is different: The new models are even better at powering agents to think, act, reason and improvise on their own without rest or pause or limitation.
- Think of a warehouse full of the most sophisticated criminals who never sleep, learn on the fly and persist until successful — except the warehouse is infinite.
- Bad actors can now scale simply with more compute. They aren't limited by finite personnel. A single person can run campaigns that once required entire teams.
At the same time, systems are more vulnerable because so many employees are firing up Claude, Copilot or other agentic models — often at home — and creating agents of their own.
- Often, they unwittingly connect those agents to internal work systems, opening a new door for cybercriminals to enter.
- The industry has a name for this: "shadow AI." A Dark Reading poll found that 48% of cybersecurity professionals now rank agentic AI as the #1 attack vector for 2026 — above deepfakes, above everything else.
The bottom line: Everyone working at every company in America needs to understand, right now, the dangers of using agents, especially unsupervised ones, anywhere near sensitive information. Leaders need to hammer this home.
- My tech team says this is the biggest threat to Axios right now.
- Your workplace can build a safe "playpen" for work-related AI experiments using agents. We're scrambling to finish ours.
Go deeper: How Anthropic's Pentagon deal could get revived.