As the two AI platforms pursue competing initiatives over vulnerability discovery, the question of who will win is the least of security teams’ concerns.
Following last week's announcement heard 'round the world from Anthropic about its progress on AI-powered vulnerability discovery with Claude Mythos, OpenAI followed up this week with not one but two announcements of its own in the space.
Security teams, however, are not wasting any time pondering which horse to bet on.
[Related: Anthropic Claude Mythos Suggests Vulnerability Management Will Soon ‘Break’: Forrester]
OpenAI introduced its Trusted Access for Cyber initiative back in early February, highlighting the usefulness of GPT‑5.3‑Codex for rapidly uncovering software flaws.
However, it was clearly Anthropic’s disclosure of the vulnerability discovery gains being made in its unreleased Claude Mythos model that has gained the lion’s share of attention so far from CISOs and security teams.
In part, that’s because Anthropic simultaneously announced its “Project Glasswing” initiative featuring collaborations with a who’s who of the tech and security industries.
This week, OpenAI responded with the announcement of GPT‑5.4‑Cyber on Tuesday, followed by another update Thursday on Trusted Access for Cyber. The latter announcement disclosed that initiative supporters include Cisco, CrowdStrike, Nvidia, Oracle and Zscaler.
As with Anthropic’s Project Glasswing, the goal of the OpenAI initiative is to “build the trust, verification and accountability needed to make these tools available” to cyber defense teams, OpenAI said in a post.
For those keeping score at home, Anthropic also announced general availability Thursday for Claude Opus 4.7—a model with cyber capabilities that, though useful, are “not as advanced as those of Mythos Preview,” Anthropic said in a post.
Most security leaders and professionals, however, are likely not going to care very much about who is in the lead in the AI vulnerability discovery race.
“That’s the pulse that I’m getting from CISOs,” Presidio’s Dan Lohrmann told me this week.
Instead, security teams are rightfully focusing on what the announcements mean for the threat landscape. Namely: the surge in cyberattacks they will face as soon as attackers get their hands on comparable, or even semi-comparable, capabilities.
Smart CISOs realize “you cannot assume that, somehow, this is a secret that’s going to stay secret,” said Lohrmann, field CISO for public sector at solution provider powerhouse Presidio.
The reality is that while there may be a window of time before attackers can fully tap into Anthropic- or OpenAI-level cyber capabilities, the required shift in patching schedules is going to be so severe that organizations will need all the time they can get.
That is, “you need to take immediate action now,” Lohrmann said.
Likewise, Bugcrowd’s Trey Ford pointed out that competition between AI platforms over frontier model access doesn’t directly address the far bigger hurdles these models are exacerbating in vulnerability management.
“The bottleneck was never the AI model,” wrote Ford, chief strategy and trust officer at crowdsourced cybersecurity platform Bugcrowd, in email comments provided to media outlets Thursday. The far bigger concern, he wrote, is the massive shortcomings of human-coordinated processes needed to actually remediate the coming swarms of AI-discovered bugs.
This latest phase of the AI platform rivalry is no doubt interesting to watch. But given the unprecedented security challenges that AI is on track to create, according to Ford, the OpenAI vs. Anthropic race is simply “the wrong conversation for security leaders this week.”