Anthropic just did something almost no major AI company has done in years: it built a frontier model and chose not to release it publicly. It's the first time in nearly seven years that a leading AI company has so openly withheld a model over safety concerns. The model in question is Claude Mythos Preview, and Anthropic believes it is capable enough that putting it in the wrong hands could be catastrophic.
On April 8, 2026, Anthropic announced Project Glasswing, a sweeping cybersecurity initiative that pairs an unreleased frontier AI model, Claude Mythos Preview, with a coalition of twelve major technology and finance companies. The goal: find and patch software vulnerabilities across the world's most critical infrastructure before adversaries can exploit them.
The stakes here are not theoretical. Claude Mythos has already found thousands of vulnerabilities in every major OS and web browser, and Anthropic warns that if these capabilities spread to bad actors, the consequences could be severe for economies, public safety, and national security.
What Is Project Glasswing?
Project Glasswing is an initiative to secure the world's most critical software for the AI era, partnering with the organizations responsible for the infrastructure billions of people depend on.
The initiative brings together Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks as launch partners. Beyond those twelve, Anthropic said that over 40 additional organizations building critical infrastructure have received access to the model.
Anthropic formed Project Glasswing because of capabilities observed in a new frontier model that the company believes could reshape cybersecurity. Claude Mythos Preview is a general-purpose, unreleased frontier model that reveals a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities.
What Makes Claude Mythos Preview So Capable?
This isn't just a better version of an existing coding model. Logan Graham, who leads offensive cyber research at Anthropic, said the Mythos Preview model was advanced enough not only to identify undiscovered software vulnerabilities but also to weaponize them. The model can single-handedly perform complex, effective hacking tasks, including identifying multiple undisclosed vulnerabilities, writing code to exploit them, and chaining those together to penetrate complex software.
Over the past few weeks, Anthropic used Claude Mythos Preview to identify thousands of zero-day vulnerabilities, many of them critical, in every major operating system and every major web browser, along with a range of other important pieces of software. Zero-day means these were flaws previously unknown to the software's own developers.
In testing, the model found several vulnerabilities in the Linux kernel that would allow hackers to gain complete control over machines, and also uncovered a 27-year-old vulnerability in OpenBSD, one of the most security-hardened operating systems in the world.
When tested against CTI-REALM, Microsoft's open-source security benchmark, Claude Mythos Preview showed substantial improvements compared to previous models.
Key Technical Highlights
- Partners receive access to Claude Mythos Preview to find and fix vulnerabilities in their foundational systems, with work focusing on local vulnerability detection, black box testing of binaries, securing endpoints, and penetration testing.
- Project Glasswing partners and approved participants can access the model through the Claude API, Amazon Bedrock, Google Cloud's Vertex AI, and Microsoft Foundry.
- After the research preview period, Claude Mythos Preview will be available to participants at $25 per million input tokens and $125 per million output tokens.
- Because of the sensitive nature of vulnerability data, Anthropic will publicly disclose each currently undisclosed vulnerability within 135 days of sharing it with the organization responsible for the affected software.
- Anthropic has donated $2.5 million to Alpha-Omega and OpenSSF through the Linux Foundation, and $1.5 million to the Apache Software Foundation.
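Working from the published post-preview rates in the highlights above ($25 per million input tokens, $125 per million output tokens), here's a minimal sketch of how a participating team might budget a scan. The token counts in the example are illustrative placeholders, not figures from Anthropic:

```python
# Estimate Claude Mythos Preview API cost from the published post-preview
# rates: $25 per 1M input tokens, $125 per 1M output tokens.
INPUT_RATE_PER_M = 25.0    # USD per million input tokens
OUTPUT_RATE_PER_M = 125.0  # USD per million output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a given token usage."""
    return (input_tokens / 1_000_000) * INPUT_RATE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_RATE_PER_M

# Hypothetical workload: a ~2M-token slice of a codebase in,
# a ~100K-token vulnerability report out.
cost = estimate_cost(2_000_000, 100_000)
print(f"${cost:.2f}")  # 2 x $25 + 0.1 x $125 = $62.50
```

Nothing exotic here, but it makes the pricing concrete: at these rates, large-codebase input dominates cost far less than sustained long-form output does.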
Funding and Financial Commitments
Anthropic will commit $100 million in usage credits and $4 million in funding to support open-source security efforts tied to Project Glasswing.
Open-source maintainers, whose software underpins much of the world's critical infrastructure, have historically been left to figure out security on their own. Open-source software constitutes the vast majority of code in modern systems, including the very systems AI agents use to write new software. By giving maintainers of these critical codebases access to AI models that can proactively identify and fix vulnerabilities at scale, Project Glasswing offers a credible path to changing that equation.
Maintainers interested in access can apply through Anthropic's Claude for Open Source program.
What the Industry Is Saying
The partner reactions make clear this isn't just a PR coalition. Nikesh Arora, CEO of Palo Alto Networks, framed the shift directly: "By prioritizing defensive access to these powerful capabilities, Anthropic is helping us ensure that while intelligence is being weaponized, the defenders are the ones with the superior stack. AI becomes the defender."
From CrowdStrike's perspective, the window between a vulnerability being discovered and being exploited by an adversary has collapsed. What once took months now happens in minutes with AI, and adversaries will inevitably look to exploit the same capabilities.
Not everyone is fully convinced, though. Heidy Khlaaf, chief AI scientist at the AI Now Institute, said Anthropic's detailed blog post left out many key details needed to verify its claims, and warned against "taking these claims at face value" without clearer explanations for how humans conducted manual reviews of the identified vulnerabilities.
Why Anthropic Is Doing This Now
Project Glasswing appears to be an attempt by Anthropic to demonstrate that, despite downgrading its Responsible Scaling Policy earlier this year, it remains committed to responsible AI. The project comes nearly two weeks after a data leak released limited details about Mythos to the public.
The project signals an ambition to move beyond being solely an AI model developer and toward becoming a foundational provider of security infrastructure. The scale of the initiative, the partner roster, and the significant financial commitments suggest a broader goal: Anthropic is attempting to position itself as a company capable of shaping how frontier AI is integrated into the defensive security space.
Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout for economies, public safety, and national security could be severe.
Final Thoughts
What stands out to me about Project Glasswing isn't just the model's capabilities; it's the structural choice Anthropic made. The company built something powerful enough to crack open every major OS and browser, then decided the right move was to hand it exclusively to defenders. That's a real tradeoff with real consequences, and it's worth taking seriously rather than treating it as marketing.
The open-source angle is the part I'd watch most closely. Anthropic's stated goal is for AI-augmented security to become "a trusted sidekick for every maintainer, not just those who can afford expensive security teams." If the $4M in direct donations and the credits program actually flow to underfunded open-source projects, that's a meaningful intervention. If they mostly benefit the twelve already-wealthy launch partners, the initiative looks considerably thinner.
As Anthropic itself acknowledges, no one organization can solve these cybersecurity problems alone: frontier AI developers, other software companies, security researchers, open-source maintainers, and governments across the world all have essential roles to play. Project Glasswing is a credible opening move, but the follow-through over the next 90 days will tell us whether this is a real commitment or a well-funded announcement. What do you think? Drop your thoughts in the comments.
FAQ
What is Project Glasswing?
Project Glasswing is an initiative by Anthropic to secure the world's most critical software for the AI era, built around restricted access to its unreleased AI model, Claude Mythos Preview.
Why isn't Claude Mythos Preview publicly available?
Anthropic has stated that it does not plan to make Claude Mythos Preview generally available, citing the potential for misuse given the model's ability to surpass all but the most skilled humans at finding and exploiting software vulnerabilities.
Who are the launch partners in Project Glasswing?
The initiative unites twelve launch partners: Anthropic itself alongside Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks.
How much is Anthropic investing in Project Glasswing?
Anthropic is committing up to $100 million in usage credits and $4 million in donations to open-source security organizations to support this work.
How can developers or open-source maintainers get access to Claude Mythos Preview?
Maintainers interested in access can apply through Anthropic's Claude for Open Source program. After the research preview period, access will be available at $25 per million input tokens and $125 per million output tokens.