Anthropic has inadvertently confirmed the existence of its most powerful AI model to date after a configuration error in the company's content management system left close to 3,000 unpublished assets in a publicly searchable data store. Among the exposed materials was a draft blog post describing a model called Claude Mythos (with an alternate branding, Capybara) that would sit in a completely new tier above Opus, the current top of Anthropic's Haiku / Sonnet / Opus hierarchy.
We have finished training a new AI model: Claude Mythos. It's by far the most powerful AI model we've ever developed.
The draft describes Mythos as "larger and more intelligent than our Opus models," with the name chosen to evoke "the deep connective tissue that links together knowledge and ideas." M1 spotted the leaked draft pages before they were taken down, noting two tab versions ("v1 Mythos" and "v2 Capybara"), which suggests Anthropic was still deciding between names for the same underlying model.
BREAKING 🚨: Anthropic is preparing to release new models, Mythos and Capybara, where Mythos is a completely new tier of models, bigger than Opus.
— TestingCatalog News 🗞 (@testingcatalog) March 27, 2026
In the blog post, Anthropic also highlights that this model brings significant cybersecurity risks due to its capabilities.
Anthropic confirmed the model to Fortune, calling it a "step change" and "the most capable we've built to date," with meaningful advances in reasoning, coding, and cybersecurity. That last point is where things get complicated: the draft warns that Mythos poses unprecedented cybersecurity risks, describing it as far ahead of any other AI model in cyber capabilities and cautioning that it could enable attacks that outpace defenders.
Interesting excerpt for those hoping for a Mythos release:
— M1 (@M1Astra) March 27, 2026
"Mythos is also a large, compute-intensive model. It’s very expensive for us to serve, and will be very expensive for our customers to use. We’re working to make the model much more efficient before any general release."…
The risk is not hypothetical for Anthropic: the company previously disclosed that a Chinese state-sponsored group exploited Claude Code to infiltrate roughly thirty organizations. Cybersecurity stocks dropped following the leak, and the model's rollout plan reflects the same concern, restricting early access to organizations focused on cybersecurity defense so they get a head start before broader availability.

Compared to our previous best model, Claude Opus 4.6, Mythos gets dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity, among others.
In preparing to release Claude Mythos, we want to act with extra caution and understand the risks it poses—even beyond what we learn in our own testing. In particular, we want to understand the model's potential near-term risks in the realm of cybersecurity—and share the results to help cyber defenders prepare.
The model is also described as extremely compute-intensive and expensive to serve, with Anthropic stating it needs to become "much more efficient before any general release." This draws a parallel to OpenAI's GPT-4.5, which was similarly costly and never achieved wide adoption at its price point. Whether Mythos ultimately ships under that name, as Capybara, or under something else entirely remains unclear. No public timeline has been given, and Anthropic characterized the exposed documents as "early drafts of content considered for publication." For developers and enterprise customers, the key question is whether this fourth tier will deliver enough of a capability jump to justify what will likely be a premium price, or remain a niche research tool accessible only to well-funded organizations.