A previously undisclosed artificial intelligence model from Anthropic, tentatively named “Claude Mythos,” has come to light following a data leak. The company acknowledged that the model exists and is undergoing testing, describing it as a significant leap in capabilities. Details about Mythos were inadvertently stored in a publicly accessible data cache, which included a draft blog post describing its potential along with internal discussions about its cybersecurity implications.
Anthropic representatives have characterized Mythos as a “step change” in AI performance, claiming it is the most capable model the company has developed to date. They confirmed it is currently undergoing trials with a select group of “early access customers.” The leaked documents also revealed plans for an invite-only CEO summit in Europe, indicating Anthropic’s strategy to market its AI models to large corporate clients. The company stated that the model introduces “meaningful advances in reasoning, coding, and cybersecurity,” and emphasized a deliberate approach to its release given the strength of its capabilities.
This development unfolds as the broader AI landscape undergoes rapid shifts and increasing scrutiny. A federal judge recently issued a temporary injunction against the Trump administration’s ban on Anthropic, with the ruling characterizing the Pentagon’s designation of the company’s AI as a supply chain risk as “Orwellian.” The legal battle underscores the complex regulatory environment in which AI developers operate, particularly those holding government contracts or working in areas with national security implications.
Elsewhere in the industry, other major players are navigating their own challenges and opportunities. Apple, for instance, is reportedly preparing to integrate third-party AI tools, including Anthropic’s Claude and Google’s Gemini, into its Siri assistant. This move, expected with the iOS 27 software update in June, represents a strategic pivot for a company known for its tightly controlled ecosystem. While Apple currently routes some Siri requests through OpenAI’s ChatGPT, the expanded integration could give users more choice in their AI interactions and potentially boost Apple’s services revenue if the company opts to charge AI developers for direct Siri access. The iPhone maker’s struggles to develop its own in-house AI have reportedly driven these partnerships, including a separate collaboration with Google for its “Apple Intelligence” features.
Meanwhile, OpenAI has paused its plans to launch an erotic chatbot, a decision that followed unease among employees and investors about the potential societal impact of sexualized AI. Concerns ranged from the risk of users forming unhealthy emotional attachments to the possibility of minors accessing explicit content. The company confirmed that it intends to prioritize long-term research into how users interact with sexually explicit AI before proceeding. The pause reflects a broader strategic realignment for OpenAI toward core offerings like ChatGPT and developer tools and away from projects such as its video model, Sora. Technical hurdles also contributed to the decision, particularly the difficulty of safely training models to handle explicit material while filtering harmful content and ensuring reliable age verification.
In the defense sector, AI-powered solutions are attracting significant investment. Shield AI, a developer of AI-powered drones, recently secured $1.5 billion in Series G funding, pushing its valuation to $12.7 billion. The company, founded in 2015, projects substantial revenue growth, with co-founder Brandon Tseng attributing the expansion to increased government and investor focus on modernizing military forces amid global conflicts. The investment highlights the growing demand for advanced AI applications in critical national security contexts, a contrast with the more consumer-facing ethical dilemmas faced by companies like OpenAI.