Google's Internal Revolt: 600 Employees Push Back Against Pentagon AI Deal

Roughly 600 Google employees have signed an open letter urging CEO Sundar Pichai to abandon negotiations with the U.S. Department of Defense over military use of the company's Gemini artificial intelligence models in classified settings. The letter represents one of the most significant internal pushbacks at Google since 2018, when staff protests led the company to revise its AI ethics policies and promise not to pursue AI developments "likely to cause harm."

What Are Google Employees Worried About?

The signatories, mostly staff working directly with Google's AI systems, expressed deep concern about ongoing negotiations between Google and the Pentagon. They worry that allowing the DOD to use Gemini for "all lawful purposes" could enable surveillance, weapons development, and other harmful applications that contradict Google's stated values.

The letter captures a fundamental tension in the tech industry: companies building powerful AI tools face pressure from both governments seeking military applications and employees who fear those same tools could be weaponized or used to violate civil liberties. The employees emphasized that their proximity to the technology creates a responsibility to prevent its most dangerous uses.

"As people working on AI, we know that these systems can centralize power and that they do make mistakes. We feel that our proximity to this technology creates a responsibility to highlight and prevent its most unethical and dangerous uses," the letter stated.


The employees also warned that accepting a military contract could cause "irreparable damage to Google's reputation, business and role in the world," particularly given that human lives are already at risk from misuses of AI technology both domestically and abroad.

How Is This Different From What Other AI Companies Are Doing?

Google is not alone in navigating military AI partnerships, but the company's approach differs from competitors in important ways. Anthropic PBC, the AI safety company behind Claude, took a harder line and is currently in a legal dispute with the Pentagon after negotiations broke down over a $200 million contract. Anthropic refused to allow the DOD to use Claude for "all lawful purposes," and the government subsequently designated the company as a "supply chain risk."

OpenAI Group PBC took a middle path, revising its Pentagon deal to include specific restrictions. The revised contract now prohibits the use of OpenAI's AI for "deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information."

Google's employees are pushing the company toward Anthropic's position, arguing that contractual language alone cannot guarantee safety. They contend that the only reliable way to prevent misuse is to "reject any classified workloads" entirely.


How Has Google's AI Ethics Position Shifted?

  • 2018 Protests: Google staff protested military AI use, leading the company to promise it would not pursue AI developments "likely to cause harm" and would not "design or deploy" AI tools for weapons or surveillance.
  • Policy Evolution: Google's AI Principles have been revised since 2018, with language becoming less restrictive and more open to military partnerships under certain conditions.
  • Current Negotiations: Google is now in active talks with the DOD about using Gemini in classified settings, suggesting the company may be moving away from its earlier ethical commitments.
  • Employee Resistance: The new 600-person letter signals that internal opposition to military contracts remains strong, even as leadership appears more willing to engage with the Pentagon.

The timing of this letter is significant. It comes as the AI industry faces mounting pressure to support U.S. military and intelligence operations in competition with China. Yet Google's own workforce is signaling that this pressure should not override ethical concerns about how AI could be misused.

Sundar Pichai now faces a critical decision: honor the concerns of hundreds of his own employees or pursue a lucrative and strategically important military contract. The choice will likely shape how other tech companies approach similar negotiations and could influence the broader conversation about AI ethics in the defense sector.