A new internal research preview suggests a major leap in artificial intelligence capability is already underway.
Leaked documents describe a model called Claude Mythos as the most powerful system ever developed by Anthropic, signaling a potential step change in performance across multiple domains.
The model is positioned as a new tier beyond the company’s previous flagship systems.
“Mythos is a new name for a new tier of model: larger and more intelligent than our Opus models—which were, until now, our most powerful.”
The documents indicate that performance gains extend across technical and reasoning-heavy tasks.
“Compared to our previous best model, Claude Opus 4.6, Mythos gets dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity, among others.”
The release strategy reflects caution around both capability and cost.
“Mythos is also a large, compute-intensive model. It’s very expensive for us to serve, and will be very expensive for our customers to use.”
The company is limiting early access as it evaluates real-world risks and applications.
“For those reasons, we’re taking a slower, more gradual approach to releasing Mythos than we have with our other models. We’re beginning with a small number of early-access customers, who will explore the model’s cybersecurity applications and report back what they find.”
The documents point to cybersecurity as an early focal point for deployment.
“Although Mythos is currently far ahead of any other AI model in cyber capabilities, it presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders. That’s why our release plan for Mythos focuses on cyberdefenders: we’re releasing it in early access to organizations, giving them a head start in improving the robustness of their codebases against the impending wave of AI-driven exploits.”
The leak was uncovered by Fortune reporter Bea Nolan, who identified a large cache of internal files tied to Anthropic’s blog.
Two cybersecurity researchers independently reviewed the materials at Fortune’s request.
Roy Paz of LayerX Security and Alexandre Pauwels of the University of Cambridge examined the dataset; Pauwels estimated that nearly 3,000 of the files had never appeared on any public-facing Anthropic page.
Anthropic attributes the exposure to a configuration issue in an external content-management tool, describing it as “human error,” and says it restricted access to the data store after being contacted.
Disclaimer: Opinions expressed at CapitalAI Daily are not investment advice. Investors should do their own due diligence before making any decisions involving securities, cryptocurrencies, or digital assets. Your transfers and trades are at your own risk, and any losses you may incur are your responsibility. CapitalAI Daily does not recommend the buying or selling of any assets, nor is CapitalAI Daily an investment advisor. See our Editorial Standards and Terms of Use.

