Anthropic’s Mythos raises the stakes for security validation | Computer Weekly

By Computer Weekly
April 21, 2026


A security team recently walked me through a scenario that illustrates exactly why the industry’s current obsession with autonomous AI is so risky. They had used an agentic tool to uncover a complex attack path that started with a small foothold and ended in a critical exposure. It was a clear win for discovery. They remediated the gaps and restricted access, expecting the issue to be closed.

The trouble started when they went back to prove the fix. Because the tool was driven by a probabilistic model designed to explore and pivot like a human, it didn’t take the same path twice. When the original path didn’t show up, the team couldn’t tell if the hole was plugged or if the system had simply chosen a different route. That kind of unnecessary doubt is the hidden tax of the push toward total autonomy.

That doubt, in a single environment, is the manageable version of the problem. Earlier in April, Anthropic demonstrated what it looks like when the attacker is an AI. Claude Mythos autonomously discovered and chained zero-day vulnerabilities across major operating systems, producing working exploits in hours, work that would have taken elite researchers weeks. Anthropic withheld public release for good reason, but the implication is already here: disclosure now equals weaponisation.

That puts a sharper point on a question security teams were already wrestling with: how do you validate your defences when the threat keeps changing? How do you know your security controls work, and remediate whatever falls short, before the gaps are exploited?

Security validation has always depended on predictability. If you know how attackers operate, you can test your defences against those methods and know where you stand; that is the difference between knowing your defences work and hoping they do. Historically, attacker behaviour followed well-documented patterns and techniques, which is what made that testing reliable. AI is beginning to change that predictability, giving attackers the ability to reason about novel paths at machine speed. But even before novel attacks become routine, AI already offers attackers a more immediate advantage: the ability to execute known techniques at a scale no human team can match, covering more of the attack surface faster than the environment changes.

Defenders are responding in kind, and agentic security tools are gaining traction. The most meaningful risks today rarely come from an unpatched server. They come from the connective tissue of the enterprise, where lateral paths are created by service accounts, trust relationships or a set of permissions that made sense once but no longer does. Systems that can piece these together get us closer to how real attacks happen.

But this shift introduces a fundamental conflict between exploration and validation. Agentic systems are designed to explore, not to repeat. In cyber security, that is what makes them effective for discovery, but it is also what makes them a liability for remediation. They can tell you what could happen, but not whether something has actually been fixed.

Answering that requires deterministic execution. It means executing the same techniques, with the same conditions, in a strictly repeatable way. It is not about a variation or a similar route. It is about the exact same sequence so the outcome can be compared directly. Without that, you are operating on assumption, not confidence.
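To make "strictly repeatable" concrete, here is a minimal Python sketch of deterministic replay: the same recorded steps, in the same order, with fixed inputs, so pre- and post-remediation outcomes can be compared directly. Every name here (`AttackStep`, `verify_fix`, the stub executor, the technique IDs) is an illustrative assumption, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AttackStep:
    technique: str   # e.g. a MITRE ATT&CK technique ID
    target: str      # host or service the step runs against
    params: tuple    # fixed inputs, so every replay is identical

def replay(path, run_step):
    """Execute the exact recorded sequence, in order, and return
    one success/failure result per step."""
    return [run_step(step) for step in path]

def verify_fix(path, run_step):
    """The fix is verified only if the replayed path no longer
    completes end to end: at least one step must now be blocked."""
    return not all(replay(path, run_step))

# The exact path recorded during discovery, replayed after remediation.
path = [
    AttackStep("T1078", "svc-account", ("valid-creds",)),
    AttackStep("T1021", "file-server", ("smb",)),
]
# A stub executor standing in for the real engine: remediation has
# blocked lateral movement to the file server.
blocked = {("T1021", "file-server")}
step_ok = lambda s: (s.technique, s.target) not in blocked

print(verify_fix(path, step_ok))  # True: the recorded path is broken
```

The point of the sketch is the comparison it enables: because the sequence and inputs never vary, a changed outcome can only mean the environment changed, not that the tool wandered down a different route.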

The real challenge is meeting user expectations for safety and accountability. People now want systems that behave like agents working on their behalf, but they also expect the vendors building those systems to take responsibility for the outcomes. If a probabilistic model makes a mistake in a live production environment, the customer holds the vendor accountable, not the model provider.

What is emerging is a two-engine architecture where agentic techniques and deterministic execution work together. Agentic layers handle discovery, surfacing compound exposures that emerge from how systems interact over time rather than from any single misconfiguration. Deterministic engines then take those findings and execute them in a controlled, repeatable way so security teams can verify a fix is real and not just unobserved. Neither layer is sufficient on its own. Discovery without verification leaves you with exactly the doubt problem I opened with. Verification without discovery leaves you testing what you already know, which is not where the real risk lives.
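A sketch of that division of labour might look like this in Python. The toy graph, the exploration strategy and all function names are hypothetical, chosen only to contrast an order-agnostic discovery pass with an exactly repeatable verification pass.

```python
def discover(graph, start):
    """Agentic-style exploration: walk every edge chain from `start`.
    The order results come back in is an implementation detail;
    repeatability is neither guaranteed nor required at this layer."""
    paths, stack = [], [(start, [start])]
    while stack:
        node, path = stack.pop()
        nxt = graph.get(node, [])
        if not nxt:
            paths.append(tuple(path))  # a complete attack path
        for n in nxt:
            stack.append((n, path + [n]))
    return paths

def verify(paths, step_ok):
    """Deterministic verification: replay each path hop by hop in a
    fixed order and report whether it still completes."""
    return {p: all(step_ok(a, b) for a, b in zip(p, p[1:]))
            for p in sorted(paths)}

# Toy lateral-movement graph: foothold -> service account -> data stores.
graph = {"foothold": ["svc"], "svc": ["db", "share"]}
paths = discover(graph, "foothold")

# After remediation, the svc -> db hop is blocked; replay each
# discovered path and report which still succeed.
blocked = {("svc", "db")}
report = verify(paths, lambda a, b: (a, b) not in blocked)
print(report)
```

The design choice the sketch illustrates is the hand-off: discovery is free to explore however it likes, but everything it surfaces is frozen into a concrete path that the verification layer can re-run identically, run after run.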

The industry will keep moving toward more autonomous systems. Mythos confirmed that the trajectory is right, and that the pace just accelerated. But for security leaders, the core requirement has not changed. You need to know a threat has been neutralised, not just that it has not shown up recently. Teams running continuous validation are already ahead. But ahead just got redefined. When an adversary can reason about novel attack paths and produce working exploits at machine speed, confidence comes from verification – not from the absence of a finding.

Amitai Ratzon is CEO at Pentera



Copyright © 2025 | Powered By Porpholio
