The White House Is Making Up AI Rules as It Goes: Why That's a Problem

The U.S. government is now controlling which AI models get deployed, but it's doing so without any actual laws, clear rules, or legal authority to back it up. According to reporting from the Wall Street Journal, the White House asked Anthropic to stop expanding access to its Mythos model, citing concerns about its cybersecurity capabilities and Anthropic's computing resources. The problem: there's no regulation, no formal process, and no framework governing these decisions.

What Exactly Happened With Anthropic's Mythos Model?

Anthropic, an AI safety company, developed Mythos, a model with capabilities relevant to national security, particularly around cybersecurity. The White House intervened to prevent the company from expanding access to more customers, reportedly worried that the model's powerful hacking-related abilities could end up in the wrong hands and that Anthropic lacked sufficient computing power to serve both commercial customers and the U.S. government simultaneously.

The intervention itself isn't necessarily problematic. As policy analysts have long argued, private companies shouldn't unilaterally decide how to deploy AI systems with national-security implications. The real issue is how the decision was made: informally, without legal backing, and based entirely on what one observer called "vibes." Anthropic could theoretically have ignored the White House's request; the government's only leverage was the threat of a damaged relationship.

"This is what happens in the absence of actual regulation," noted Dean Ball, former AI advisor to the Trump administration, describing the situation as "an informal, highly improvised licensing regime".

Why Does This Matter for AI Governance?

For years, researchers and policy analysts have warned that advanced AI models would eventually become relevant to national security, and they've proposed detailed frameworks for how governments should respond. But lawmakers at both the state and federal levels failed to act quickly enough, while Trump administration officials have regularly dismissed the idea of regulation altogether. The result is that critical business decisions are being made on an ad-hoc basis rather than through established legal channels.

This creates a dangerous precedent. The White House has no specific legal authority to block model deployment, and there are no concrete thresholds for when intervention is justified. Different administrations could make wildly different decisions based on political priorities rather than consistent policy. Trump supporters may welcome this level of executive discretion under the current administration, but the same power in the hands of a future Democratic administration could look very different.

Steps Toward an Actual AI Governance Framework

  • Establish Legal Authority: Congress needs to pass legislation that explicitly grants the government authority to review and approve AI model deployments with national-security implications, rather than relying on informal pressure.
  • Create Clear Thresholds: Develop specific, measurable criteria for when government intervention is warranted, such as models demonstrating particular cybersecurity or autonomous weapons capabilities.
  • Build Transparent Processes: Institute formal review procedures with public documentation, appeals processes, and oversight mechanisms to prevent arbitrary decision-making based on political considerations.
  • Coordinate Across Agencies: Establish a unified government approach involving relevant departments like Defense, Homeland Security, and State, rather than ad-hoc White House interventions.

What's Congress Doing About AI Safety?

Meanwhile, Congress is moving slowly on AI legislation. Texas Senator Ted Cruz, widely understood as the White House's point person on AI policy, introduced the CHATBOT Act this week: a child-safety bill that would require chatbot developers to implement parental controls and restrict users under 13 to family accounts. The bill has bipartisan support, with Democratic Senators Brian Schatz and Adam Schiff also backing it.

However, the bill contains significant loopholes. It exempts developers from liability if they have only context clues, rather than definitive proof, that a user is underage, and its advertising restrictions don't apply to ads shown after a user submits a prompt. Sources on both sides of the AI safety debate suggest the bill is primarily a messaging effort, unlikely to receive floor time in Congress.

Critics worry that weak legislation like this actually helps the tech industry by narrowing the scope of debate. If Congress passes a narrow child-safety bill, politicians can claim they've addressed AI risks, leaving thornier issues like cybersecurity, autonomous weapons, job automation, and existential risk stuck in partisan gridlock. This may explain why Cruz also voted for Senator Josh Hawley's more stringent GUARD Act on Thursday.

Why the Mythos Decision Could Force Congress to Act

The White House's intervention on Mythos might actually be the wake-up call Congress needs. Nothing spurs legislative action like a crisis, and the executive branch unilaterally deciding who gets access to dangerous AI capabilities could qualify. This is likely the first time the government has had to make such a call about advanced AI with national-security implications, but it won't be the last. The current ad-hoc approach cannot continue indefinitely.

The stakes are high. Without clear legal frameworks, future decisions about AI deployment will remain unpredictable, subject to political winds, and vulnerable to legal challenges. Companies like Anthropic face uncertainty about what the government might demand next. Policymakers lack the tools to make consistent decisions. And the public has no transparency into how these critical choices are being made.

The Mythos situation demonstrates that the era of light-touch AI regulation is over. The question now is whether Congress will establish proper legal frameworks and transparent processes, or whether the government will continue making up rules as it goes along.