Microsoft announced Wednesday it will embed Anthropic's Claude Mythos Preview into its Security Development Lifecycle (SDL) framework, formally adopting the model for vulnerability detection across its enterprise software portfolio. The move, reported by Reuters, makes Microsoft one of the first major technology companies to integrate the restricted AI model directly into its software engineering process.
What Mythos Preview Does Inside the SDL
Microsoft's SDL is the company's internal framework for building security into software from the earliest stages of development. Incorporating Mythos Preview is intended to accelerate both the identification of vulnerabilities and the development of fixes, pushing that work earlier in the development cycle before flaws can compound into larger exposures. The company has not disclosed which product lines will receive SDL reviews powered by Mythos first, but the scope covers the enterprise software portfolio broadly.
Anthropic has described Mythos Preview as "currently far ahead of any other AI model in cyber capabilities." That is an unusually direct claim from a company that has otherwise been cautious about how it characterizes the competitive landscape.
Project Glasswing | The Restricted Initiative Behind Mythos
The integration is part of Anthropic's Project Glasswing, a restricted program announced on April 7. The initiative brings together eleven organizations: Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks. The stated goal is to secure critical software infrastructure using Mythos's capabilities under controlled conditions.
Anthropic has been explicit that Mythos Preview will not be made generally available. The company cited the risk that adversaries could weaponize the model's capabilities, a concern that informed its decision to limit access to vetted institutional partners only.
Thousands of Zero-Days Found in Pre-Release Testing
In a draft blog post inadvertently made public in March, Anthropic warned that Mythos "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders." The language was stark even by the standards of AI safety disclosures.
During pre-release testing, the model identified thousands of zero-day vulnerabilities across major operating systems and web browsers. The oldest was a 27-year-old bug in OpenBSD, a security-focused operating system that has historically maintained one of the strongest vulnerability records in the industry. That Mythos surfaced a flaw in OpenBSD that had gone undetected for nearly three decades is the kind of result that explains why Anthropic chose to restrict access rather than release the model commercially.
Microsoft's Bet | AI-Driven Security Before Adversaries Catch Up
For Microsoft, the integration is a calculated hedge. The company runs the world's largest commercial software portfolio and has faced sustained criticism over its security posture in recent years, including findings from the Cyber Safety Review Board following high-profile nation-state intrusions. Using a model with Mythos's detection capabilities internally, before adversaries have equivalent tools, is the clearest argument for Project Glasswing's restricted-access structure.
Anthropic's framing cuts both ways. If Mythos can find thousands of zero-days that defenders missed for decades, it can also be used offensively by any actor who gains access to it. The Project Glasswing structure is designed to hold that line, at least for now.
For broader context on how AI models are reshaping enterprise security, see ObjectWire's OpenAI coverage and our Microsoft hub.