Sam Altman Warns AI Has Become 'the Bogeyman': Why Tech Leaders Are Underestimating Public Backlash

Sam Altman has publicly acknowledged that artificial intelligence has become deeply unpopular in America, warning that tech leaders are dangerously underestimating how much the public distrusts the technology. In March, the OpenAI CEO stated plainly: "AI is not very popular in the U.S. right now." This admission comes as new data reveals the scale of public skepticism, with only 26% of Americans holding a positive view of AI, while 46% view it negatively. The gap between what tech executives believe about AI adoption and what ordinary people actually think is widening, creating a potential crisis for the industry's growth trajectory.

Why Is Public Sentiment About AI So Negative?

Altman attributed the widespread disapproval to two primary concerns: rising electricity costs associated with data centers and widespread job losses that employers attribute to AI, "whether or not it really is about AI." These anxieties are deeply rooted in Americans' daily lives. A Stanford Institute for Human-Centered Artificial Intelligence report found that nearly two-thirds of Americans believe AI will result in fewer jobs within the next twenty years, while only 5% expect the number of jobs to increase. This fear persists despite a lack of concrete evidence of widespread AI-driven labor displacement so far.

The distrust extends beyond employment concerns. An NBC News poll of 1,000 registered voters found that AI now outranks only two other entities in public favorability: the Democratic Party, which carried a 22-point net negative rating, and Iran, at 53 points net negative. The comparison underscores just how toxic the AI brand has become in the American consciousness.

How Are Companies and Employees Responding to AI Skepticism?

The gap between corporate enthusiasm and public concern is creating friction within organizations themselves. A survey of 2,400 knowledge workers found that 29% of employees, and 44% of Gen Z workers specifically, admitted to sabotaging their employer's implementation of AI tools. This represents a form of grassroots resistance that tech leaders may not have anticipated when rolling out AI systems across their workforces.

Major tech companies are feeling the pressure. Snap, the company behind Snapchat, announced in April that it would slash about 1,000 roles, roughly 16% of its full-time staff, and eliminate about 300 open positions it had planned to fill. While the company attributed the cuts to various factors, the timing coincided with broader concerns about AI-driven job displacement.

Steps Tech Leaders Can Take to Address Public Concerns

  • Prioritize Transparency: Communicate clearly about how AI is being implemented, what jobs it will and won't replace, and what safeguards are in place to protect workers and consumers.
  • Invest in Retraining Programs: Develop and fund education initiatives that help workers transition to new roles in an AI-driven economy, demonstrating commitment to workforce stability.
  • Engage in Meaningful Regulation: Work with policymakers to establish guardrails that address legitimate public concerns rather than resisting oversight entirely.
  • Focus on Human-Centered Applications: Develop AI tools that visibly improve people's lives rather than primarily serving corporate profit margins.

What Do Tech Leaders Get Wrong About AI Adoption?

Snap CEO Evan Spiegel articulated the core problem facing the industry: "I think technology leaders think that folks will just blindly adopt new technology as it comes out," he said in an episode of "Lenny's Podcast." "And I think we're going to enter a period of time where there's going to be a huge amount of societal pushback on a lot of the changes that are coming with AI."

"Humanity is far more important than the technological developments largely because humanity dictates how technology is adopted. A lot of our focus as an industry but more broadly in the world needs to be putting humanity first, making sure that the tools we're developing are advancing humanity's goals in addition to business goals," said Evan Spiegel.

Evan Spiegel, CEO at Snap

Spiegel's own company has managed to navigate AI adoption more successfully than many peers. Snap launched its chatbot "My AI" in February 2023, just months after OpenAI released ChatGPT. The company now boasts over a billion monthly users and grew its subscriber count 71% year-over-year in the last quarter of 2025, reaching more than 25 million paid subscribers. Yet even Spiegel acknowledges the tension between moving quickly and avoiding public panic.

"On the one hand, this is really dangerous and we need people to know this because this is happening and moving quickly," Spiegel explained. "On the other hand, how do you not just freak everyone out and make everyone so afraid of where things are going?" This dilemma captures the fundamental challenge facing the AI industry: how to advance transformative technology while maintaining public trust.

Is the AI Industry's Investment Outpacing Public Acceptance?

Despite public skepticism, Big Tech has poured $700 billion in capital expenditures into AI development. Meanwhile, actual usage continues to grow: 57% of Americans report using AI technology, and 40% use generative AI more frequently than they did a year ago, according to a Brookings Institution report. This disconnect between investment, usage, and sentiment suggests the industry may be moving faster than society is comfortable with.

The regulatory landscape adds another layer of concern. The Stanford AI Index Report found that Americans reported the lowest level of trust of any country surveyed in their government's ability to regulate AI, with less than one-third of U.S. participants expressing trust, compared with an average of 54% across all countries. Some AI leaders, including Anthropic CEO Dario Amodei and pioneering AI researcher Geoffrey Hinton, have acknowledged that the technology needs stronger guardrails to prevent cyberattacks and other risks.

The message from Altman and other tech leaders is clear: the industry cannot assume that technological progress alone will drive adoption. Public sentiment matters, and ignoring it could slow the very growth that companies are betting their futures on. The next phase of AI development will depend not just on technical breakthroughs, but on rebuilding trust with the American public.