Sam Altman Says He Misjudged Public Distrust of AI and Government, Argues for Stronger Government Power
Sam Altman has publicly acknowledged that he underestimated how deeply Americans distrust AI companies and government institutions working together. In a recent podcast interview, the OpenAI CEO said he had "miscalibrated" public sentiment around AI-government collaboration, particularly after OpenAI's February deal with the Pentagon to deploy AI models on classified military networks.
Why Did Sam Altman's Pentagon Deal Spark Such Strong Backlash?
When OpenAI announced its partnership with the Pentagon in February, the move triggered significant public protests. The deal allowed the military to use OpenAI's AI models on classified networks, stepping into a standoff between the Pentagon and Anthropic, another major AI company. Despite Altman's efforts to address concerns by promising the technology would not be used for autonomous weapons or domestic surveillance, opposition continued.
Altman told Laurie Segall, CEO of Mostly Human, that he had not fully grasped how much skepticism existed among the public. "There's at least a group of loud people online who really don't trust the government to follow the law," Altman explained, "and that feels like a very bad sign for our democracy."
What's Altman's Argument for Why AI Companies Must Work With Governments?
Despite the backlash, Altman remains convinced that AI companies have a responsibility to collaborate with government on critical national security issues. He argued that refusing to help the government would be harmful to the country's future. "If we don't help them with defending the cyber infrastructure of the US, if we don't help them with biodefense, I think it's really bad," Altman stated. "I think we have to work with the government."
Altman's core argument centers on a fundamental question about power and governance in the AI age. He believes that governments, not AI companies, should make decisions about the future of AI technology and national security. "One of the most important questions the world will have to answer in the next year is: Are AI companies or are governments more powerful? And I think it's very important that the governments are more powerful," Altman told Segall.
To support this position, Altman pointed to historical precedents of large-scale, government-led technological achievements. He cited the Manhattan Project, which developed nuclear weapons during World War II; the Apollo Program, which put humans on the moon; and the Interstate Highway System, which transformed American infrastructure. These examples, he argued, demonstrate that governments are capable of leading transformative technological efforts.
How Should Democratic Processes Shape AI's Future?
Altman emphasized that decisions about AI's role in national security should not rest with private company executives like himself. "The future of the world and the decisions about the most important elements of national security should be made through a democratically elected process," he said, "and the people that have been appointed as part of that process, not me, and not the CEO of some other lab."
Segall, who has covered Altman for over a decade, noted that his stance on government oversight has become more pronounced as AI technology has grown more powerful. She told Business Insider that Altman "really kind of dug in his heels" on the idea that governments must play a dominant role in AI oversight, especially as companies like OpenAI currently make key decisions about how the technology is deployed.
The broader tension Segall identified reflects a societal anxiety about AI's future. "I think what we're sensing now as a society is this tension between: Will artificial intelligence be good for all of us, or will it just be good for some of us?" she explained.
Steps to Strengthen Government-AI Collaboration
- Democratic Oversight Mechanisms: Establish formal processes where elected officials and appointed representatives, rather than private company leaders, make decisions about AI deployment in national security contexts.
- Transparent Safeguards: Create clear guidelines and public accountability measures to address concerns about autonomous weapons and domestic surveillance, ensuring the public understands how AI is being used.
- Public Trust Building: Conduct outreach to address the skepticism Altman described among vocal online critics who distrust government, demonstrating that institutions can follow the law and use AI responsibly.
Altman's shift in perspective reflects a broader recognition within the AI industry that public skepticism about government-AI partnerships is not a minor concern but a fundamental challenge to the legitimacy of these collaborations. His acknowledgment that he "miscalibrated" public sentiment suggests that AI leaders may need to invest more effort in understanding and addressing democratic concerns before pursuing similar partnerships in the future.