The latest admission, which came after weeks in which Anthropic had initially implied in its communications that nothing was wrong and that users were largely to blame for any performance problems, and later said that some of the changes had been made for users' benefit, has done little to calm Anthropic's customers, some of whom say they have already canceled their subscriptions.
The feeling among some users that Anthropic had been gaslighting them likely undercuts the company's attempts to market itself as more transparent and aligned with its users than rival OpenAI. Nor has the admission that there were performance problems done much to quell rampant speculation that the company is running short of computing resources and that Anthropic's efforts to ration precious computing power were the real reason for the performance issues.
“Demand for Claude has grown at an unprecedented rate, and our infrastructure has been stretched to meet it, particularly at peak hours,” Anthropic said in a statement to Fortune. “We are doing everything we can to address this, and we are deeply grateful for our users’ patience.”
The statement went on to say that “compute is a constraint across the entire industry, and we are scaling our compute rapidly and responsibly—including through a recently announced expansion of our partnership with Amazon and Google, which will bring significant new capacity online in the coming months. Our priority is getting that capacity into our users’ hands as quickly as possible.”
The company also pushed back on any characterization that it had not been transparent with its users about the issues affecting Claude Code. “The Claude Code issues had specific technical causes that we documented in full in our postmortem, and the fixes are now shipped,” the statement said.
Anthropic has built much of its recent success on the loyalty of developers. Its Claude Code tool, launched in early 2025, has been popular with solo developers and enterprise engineering teams alike. The runaway success of the tool has helped drive the company's annualized recurring revenue run rate to $30 billion, more than triple its figure at the end of last year. However, the weeks-long performance decline and the lab's slow response to user complaints, as well as a number of changes that users argue amount to stealth price hikes, are testing that loyalty.
The controversy could dent Anthropic's bottom line amid an increasingly bitter race with rival OpenAI. The issues also come at a critical time, with both companies reportedly gearing up for initial public stock offerings later this year.
Anthropic's latest admission is likely to fuel already widespread speculation that the lab may be suffering from compute constraints after use of its products soared over the past few months.
Beyond the performance issues with Claude Code, the AI lab has also suffered a series of outages as usage has surged, introduced usage caps during peak hours, and is limiting the rollout of its newest, larger, and more expensive model, Mythos, to a select group of large businesses. (Anthropic has said that the model's cautious rollout is meant to guard against security risks posed by the model's unprecedented cyber capabilities.)
The company's rivals have also fueled rumors that the lab may lack the compute needed to keep up with its recent customer surge. In an internal memo first reported by CNBC, OpenAI's revenue chief claimed Anthropic had made a “strategic misstep” by failing to secure sufficient compute, and was “operating on a meaningfully smaller curve” than its rivals. Anthropic has notably announced fewer multibillion-dollar deals for data center capacity than some of its rivals, such as OpenAI. While other AI companies are also facing compute constraints, Anthropic appears to be in the most difficult position, having grown far faster than it likely anticipated.
Anthropic declined to answer CNBC's questions about the memo. The company has also publicly stated that it does not purposely degrade the performance of its Claude models.
The company also appears to be testing potential ways to restrict new access to Claude Code. Earlier this week, Anthropic updated its pricing page for some users to show Claude Code as unavailable on the company's $20-a-month Pro plan. Anthropic's head of growth later said the change had been a test on around 2% of new sign-ups, adding that usage patterns had “changed fundamentally” since the plans were designed. Separately, The Information reported that Anthropic had shifted its enterprise pricing to a consumption-based model, a move one analyst estimated could potentially triple costs for heavy users.
User backlash
Anthropic has been dealing with significant backlash from some of its power users over Claude Code's recent performance issues. Several have said they have canceled subscriptions; cybersecurity professionals have warned of potentially dangerously degraded code quality; and a senior AI executive at AMD has called the tool “unusable for complex engineering tasks.”
Users have complained of feeling “gaslit” by the company's response to their ongoing complaints about the coding tool's performance. One X user said in response to Anthropic's latest post: “After they gaslit users and pretended nothing was wrong, countless complaining from tonnes of people here and elsewhere, cancellations, Anthropic finally admit on the day GPT-5.5 releases there is a problem with Claude.”
“I appreciate the postmortem, but I don’t trust that all issues have been resolved. Claude Code, in general, has been barely usable for me in the past couple of days,” another said.
In a post published to its engineering blog on Thursday, Anthropic said it had traced the problems to three distinct changes. The first, rolled out on March 4, reduced Claude Code's default reasoning effort from “high” to “medium” to cut latency, a tradeoff the company said in the blog post was the wrong one. The second change, shipped on March 26, contained a bug that caused the model to repeatedly discard its own reasoning history mid-session, making it appear forgetful and erratic and draining users' usage limits faster than expected. The third, introduced on April 16, added a system prompt instruction capping the model's responses at 25 words between tool calls, a change Anthropic said measurably hurt coding quality before it was reverted four days later.
Anthropic noted that all three issues had been resolved as of April 20, with the API unaffected throughout. On April 23, the company reset usage limits for all subscribers.
The company acknowledged users' frustration with the tool, saying: “This isn’t the experience users should expect from Claude Code.” The lab has also promised greater transparency around changes to Claude Code in the future.
Despite Anthropic's public acknowledgement, some users have taken to social media to express their frustration with the lab's initial response to users' concerns about Claude's performance.
“The frustrating part is that the Claude Code team, along with people deep in AI psychosis, have been gaslighting anyone who raises concerns about Claude Code’s recent issues,” Muratcan Koylan, a member of the technical staff at Sully.ai, said in a post on X. “When you’re paying a lot of money for a product, and it actually makes your job harder, to the point where people make you start questioning the quality of your own work, it really becomes a problem.”
The backlash risks pushing some of Anthropic's power users toward rival OpenAI, whose recent Codex models have also been popular with developers. On Thursday, OpenAI also released GPT-5.5, its newest AI model, to paid subscribers. The company said it now had 4 million active Codex users, 9 million paying business customers, 900 million weekly active users on ChatGPT, and more than 50 million subscribers. Anthropic has not published comparable user figures. The company has disclosed business metrics, including more than 300,000 enterprise customers, but has not released subscriber or active user numbers. Independent app and web traffic site Similarweb has reported that monthly active users of Anthropic's Claude app hit 20 million by the end of February and that user growth had more than doubled month over month in March.
The issues with Claude Code appear to have significantly affected the quality of code produced by Anthropic's tools over the past month or so, particularly when compared with OpenAI's offerings.
Analyses from code security company Veracode found that Claude Opus 4.7, Anthropic's latest Claude model, which launched on April 16, introduced a vulnerability in 52% of coding tasks tested, up from 51% for Opus 4.1 and 50% for the lower-cost Claude Sonnet 4.5. Veracode found OpenAI's models performed notably better, introducing vulnerabilities in around 30% of tasks.
Dave Kennedy, CEO of cybersecurity firm TrustedSec and a former U.S. Marine Corps intelligence officer, told Forbes his team had measured a 47% drop in Claude's code quality, tracking defects, security issues, and task completion rates. The risk, Kennedy warned, is that novice developers using Claude won't catch the problems, “introducing serious defects” into production code.
In response to the latest post from Anthropic, Kennedy said: “I’m glad they are trying to address this, but a month to get this out is crummy.”
