The Great Claude Code Gaslight
Anthropic quietly ran a test that yanked Claude Code from its $20/month Pro plan for new signups, then updated its public pricing page as if the change were permanent. Users caught on fast, Reddit and X lit up, and the company’s head of growth had to scramble to explain it was just a ‘small test on ~2% of new prosumer signups.’ The damage was done. Developers who rely on Claude Code for daily workflows felt betrayed, and the whole episode reeked of a company testing the waters for a plan change it knew would be unpopular, without the guts to announce it upfront.
This isn’t about a feature. It’s about trust. Anthropic’s documentation made the removal look universal, not experimental. If you’re going to pull a bait-and-switch on 2% of users, don’t update your public site as if you’ve already made the decision. That’s not testing. That’s pretending.
The Real Problem: Compute Is a Lie
Anthropic’s head of growth, Amol Avasare, admitted the real issue: ‘Usage has changed a lot and our current plans weren’t built for this.’ Translation: the company sold subscriptions for a product it can’t afford to deliver. Claude Code burns through tokens on long-running agentic workflows, and the $20 tier can’t handle the compute load. So instead of raising prices transparently, Anthropic tried to quietly cut the most compute-heavy feature for new customers.
This is the ugly truth of the AI subscription gold rush. Companies like Anthropic, OpenAI, and Google sell ‘unlimited’ access to services backed by finite, expensive compute. When demand spikes, they ration it through opaque limits, rolling outages, and now silent feature removals. Users are left with a degraded product they paid for in good faith. One Reddit user called it ‘a scam’ after being locked out of Claude Code for days on the Pro plan. He’s not wrong. Anthropic needs to either price its plans honestly or stop pretending ‘Pro’ means full access.
Source: Ars Technica
