
Anthropic Faces User Discontent as Claude AI Performance Concerns Mount


Anthropic, an AI company recently valued at $380 billion, finds itself navigating a wave of user complaints regarding the perceived degradation of its Claude AI models. Developers and frequent users have voiced increasing frustration, citing instances where the AI fails to adhere to instructions, takes inappropriate shortcuts, and commits more errors in complex processes. This feedback surfaces at a critical juncture for Anthropic, which is reportedly preparing for an initial public offering (IPO).

The core of the issue appears to stem from quiet modifications Anthropic implemented, particularly a reduction in Claude’s default “effort” level. This adjustment was reportedly made to conserve tokens, the units of data processed by the model for each request. Processing fewer tokens translates to lower computational power consumption. Speculation has arisen, notably from competitors, that Anthropic might be grappling with a shortage of computing resources. This concern is amplified by the fact that the company has announced fewer multi-billion dollar data center capacity deals compared to some of its rivals, even as adoption of its products has surged in recent months.

Boris Cherny, the Anthropic executive leading the Claude Code product, acknowledged online that the company had lowered Claude’s default effort to “medium.” He stated this was a response to user feedback indicating that the model was previously consuming an excessive number of tokens per task. However, many users contend that Anthropic did not adequately communicate this change. The lack of transparency has fueled accusations that the company is deliberately compromising performance, potentially due to insufficient compute capacity. This perception is particularly damaging for Anthropic, which has cultivated a brand identity around being more transparent and user-aligned than its competitors.

The broader AI industry is currently confronting escalating GPU costs, limitations in data center expansion, and difficult decisions about product prioritization. Demand for “agentic” AI systems is accelerating at a pace that infrastructure struggles to match. While an Anthropic spokesperson has publicly denied that the company degrades its models to manage demand, certain indicators suggest it may be facing more acute constraints. The company has experienced service outages as usage climbed and has imposed stricter usage limits during peak hours, leading to further user dissatisfaction. An internal memo from OpenAI’s revenue chief, reported by CNBC, further suggested that Anthropic made a “strategic misstep” by not securing sufficient compute capacity, operating on a “meaningfully smaller curve” than rivals.

Amidst these performance concerns, Anthropic recently announced the development of a new, more capable model named Mythos, which is reportedly larger and more expensive to operate than its current Opus AI model. While the company stated Mythos is not yet publicly available due to security considerations, some observers question whether Anthropic possesses the necessary compute capacity for a broad rollout. The scrutiny surrounding Anthropic highlights the dynamic nature of the AI market. Just last week, the company revealed its annualized recurring revenue (ARR) had reached $30 billion, a significant jump from $9 billion at the close of 2023.

Much of the user backlash centers on Claude Code, Anthropic’s AI-powered coding tool. Launched in early 2025, Claude Code functions as a command-line agent capable of autonomously reading, writing, and executing code within a developer’s environment. It has seen widespread adoption among individual developers and large enterprise engineering teams for complex coding tasks. A widely circulated GitHub analysis, attributed to Stella Laurenzo, a senior director of AI at AMD, claimed that recent changes have rendered Claude Code “unusable for complex engineering tasks.” Her analysis suggested a shift from a “research-first” approach, where the model gathered extensive context, to a more direct “edit-first” style, leading to more errors and increased user intervention.

Cherny has pushed back against some of this analysis, suggesting a misinterpretation of the data and clarifying that while the full “reasoning trace” may no longer be visible to users, the model’s underlying reasoning capability has not been reduced. He also indicated that Anthropic would test defaulting Teams and Enterprise users to a “high effort” setting to provide extended thinking, albeit at the cost of additional tokens and latency. However, users on the Pro tier of Cowork or the Claude desktop app currently cannot change the default effort level. The ongoing debate underscores the delicate balance AI companies must maintain between performance, resource management, and user transparency as they navigate rapid growth and intense market competition.

Jamie Heart (Editor)