FrontierNews.ai

Claude's Grey Market Problem: How Hackers Are Selling Anthropic's AI at 90% Off

A thriving grey market in China is undercutting Anthropic's Claude pricing by up to 90 percent, using stolen credentials, fraudulent accounts, and data harvesting to sustain rock-bottom prices that pose serious security risks to developers. According to research published by Oxford China Policy Lab investigator Zilan Qian, proxy networks operating openly on platforms including GitHub, Taobao, and Telegram are reselling Claude API access at roughly 10 percent of official rates, while simultaneously collecting every prompt and response that passes through their servers for resale as training data.

How Are These Proxy Services Keeping Prices So Low?

The supply chain sustaining these discount services operates through a modular system where different operators handle specific stages of the process. Upstream providers bulk-register Anthropic accounts by exploiting free API credits, corporate discounts, or subdividing the $200 Max subscription plans across dozens of users. Some accounts enter the pool at effectively zero cost to the operators, purchased using stolen credit card details, according to Qian's investigation.

To defeat Anthropic's identity verification requirements, which now include photo ID and live selfie checks for certain users, the supply chain has recruited real people in lower-income countries to complete verification in person. This approach mirrors tactics observed in the Worldcoin biometric black market, where iris scans harvested in Cambodia and Kenya were sold for under $30.

What Happens When You Use These Proxy Services?

Developers who access Claude through these proxy networks face multiple deceptions and security exposures. German researchers at the CISPA Helmholtz Center for Information Security audited 17 of these proxy services and discovered widespread model substitution. When users requested access to Claude Opus, Anthropic's most capable model, they often received responses from cheaper alternatives instead. In one benchmark test, a proxy endpoint marketed as offering "Gemini-2.5" scored just 37 percent on a medical knowledge assessment, while the official API scored nearly 84 percent on the same test.

The substitution problem extends across Anthropic's entire model lineup. Users requesting Claude Opus may receive responses from Claude Sonnet, Claude Haiku, or even domestic Chinese alternatives like Qwen, with the output fraudulently relabeled to match what the customer paid for.

Steps to Protect Your Data When Using AI APIs

  • Verify Official Channels: Always access Claude and other AI services directly through Anthropic's official website or authorized partners, never through third-party proxy services or grey-market resellers.
  • Audit Your Prompts: Never paste proprietary source code, authentication credentials, API keys, or confidential business logic into any AI service, whether official or third-party, as proxy networks harvest and resell this data.
  • Monitor Account Activity: Regularly review your API usage logs and billing statements for unauthorized access or unusual activity that might indicate compromised credentials.
  • Use Official Rate Limits: Work with Anthropic directly if you need higher usage limits rather than seeking workarounds through unauthorized services that may substitute cheaper models.
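The first two steps above can be partially automated with a pre-send check. The sketch below is a minimal illustration, not a complete safeguard: the allow-listed host reflects Anthropic's documented API endpoint, but the secret-detection patterns and the `precheck` helper are assumptions chosen for this example and are far from exhaustive.

```python
import re
from urllib.parse import urlparse

# Allow-list of hosts your client is permitted to talk to. Anything else
# (e.g. a grey-market proxy's base URL) is treated as unverified.
OFFICIAL_API_HOSTS = {"api.anthropic.com"}

# Illustrative (not exhaustive) patterns for credentials that should never
# be pasted into any AI service, official or otherwise.
SECRET_PATTERNS = [
    re.compile(r"sk-ant-[A-Za-z0-9_-]{10,}"),           # Anthropic-style API key
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key ID
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"(?i)password\s*[:=]\s*\S+"),           # inline password assignment
]

def is_official_endpoint(base_url: str) -> bool:
    """True only if the configured base URL points at an allow-listed host."""
    return urlparse(base_url).hostname in OFFICIAL_API_HOSTS

def find_secrets(prompt: str) -> list[str]:
    """Return substrings in the prompt that look like credentials."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(prompt))
    return hits

def precheck(base_url: str, prompt: str) -> None:
    """Raise before a request leaves the machine if either check fails."""
    if not is_official_endpoint(base_url):
        raise RuntimeError(f"Refusing to send prompt to unverified host: {base_url}")
    leaks = find_secrets(prompt)
    if leaks:
        raise RuntimeError(f"Prompt appears to contain credentials: {leaks}")
```

A check like this catches the obvious failure modes, a client misconfigured to point at a proxy's base URL or a credential pasted into a prompt, but it is no substitute for reviewing what your tooling actually sends.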

The data harvesting problem is particularly acute for developers using Claude's coding agents. These tools routinely pass complete reasoning chains, repository context, and human-verified outputs through to the model. Developers routing that traffic through unvetted proxy servers are essentially sending proprietary source code to third-party servers with no data-handling obligations. Samsung encountered a similar problem in 2023 when its engineers pasted proprietary semiconductor manufacturing data into ChatGPT, inadvertently disclosing confidential information to OpenAI's servers. Proxy services create the same category of risk, but without even the baseline terms of service that major AI providers maintain.

The proxy operators collect every prompt and response passing through their infrastructure. For coding agents, this means complete reasoning chains and human-verified outputs. Several Chinese developers told Qian that the discounted access is essentially a customer-acquisition tool, and that harvesting the resulting logs is the actual business model. Datasets of Claude Opus reasoning outputs with no clear provenance already circulate on HuggingFace, a popular platform for sharing machine learning models and datasets.

Why Is This Data So Valuable to Competitors?

Proxy-harvested reasoning data is especially valuable for a process called distillation, where companies train competing models on outputs from more capable systems. Reasoning outputs can be systematically captured and used to train models that mimic Claude's capabilities at a fraction of the cost. Proxy networks offer the same pipeline at lower cost and effort: paying customers generate the training data voluntarily, without realizing their interactions are being harvested and resold.
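To make concrete why captured logs convert so directly into training material, the sketch below turns harvested prompt/response pairs into JSONL supervised fine-tuning records, a common input format for training a student model. The sample pairs, function name, and record schema are all illustrative assumptions, not artifacts from any actual dataset described above.

```python
import json

# Hypothetical harvested logs: (prompt, model response) pairs captured
# by a proxy as paying customers used the service.
harvested = [
    ("Explain binary search.", "Binary search repeatedly halves the search interval..."),
    ("Summarize what a mutex does.", "A mutex ensures only one thread enters a critical section..."),
]

def to_sft_records(pairs):
    """Convert captured pairs into JSONL supervised fine-tuning records,
    the form typically fed to a student model during distillation."""
    return [
        json.dumps({"messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": response},
        ]})
        for prompt, response in pairs
    ]

for line in to_sft_records(harvested):
    print(line)
```

The point of the sketch is its triviality: once the logs exist, no further labeling or cleanup is strictly required before they become distillation fodder.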

Anthropic blocked Chinese-controlled entities from Claude access in September and has since added progressively stricter verification requirements. However, Qian's research suggests each new control has generated a corresponding evasion market rather than reducing overall unauthorized access. The White House accused Chinese entities in late April of running "industrial-scale" distillation campaigns against U.S. frontier models using tens of thousands of proxy accounts. Anthropic disclosed similar activity in February, identifying roughly 24,000 fraudulent accounts linked to Chinese labs, including DeepSeek, Moonshot AI, and MiniMax.

This ongoing cat-and-mouse game between Anthropic's security measures and the proxy operators' evasion tactics highlights a broader challenge facing AI companies. As demand for API access grows and pricing remains a barrier for some users, the incentives for grey-market alternatives remain strong. For developers and organizations relying on Claude, the lesson is clear: the cheapest access often comes with hidden costs in data security and model reliability.