Why AI Safety Activists Are Losing the Public to Radical Movements
The AI safety movement has historically operated behind closed doors with researchers and policymakers, but this strategy may be backfiring as public anxiety about artificial intelligence grows increasingly disconnected from expert discourse. A recent attack on OpenAI CEO Sam Altman's home, allegedly motivated by existential AI concerns, exposed a dangerous gap: while mainstream Americans worry about job loss and surveillance, the technical AI safety community remains focused on abstract risks like superintelligence. Without a broader coalition channeling public frustration into constructive political action, experts warn that radicalized movements may fill the void.
What's Driving the Disconnect Between AI Safety Experts and the Public?
In March 2024, a small protest gathered outside Anthropic's San Francisco office to rally against the AI race. The crowd, numbering between a few dozen and a couple hundred people, carried esoteric signs like "IT'S SMART ENOUGH" and "PAUSE IS DEMANDED if you aren't CONSISTENTLY CANDID." Just weeks later, a 20-year-old named Daniel Alejandro Moreno-Gama allegedly threw a Molotov cocktail at Sam Altman's mansion and attempted to break into OpenAI headquarters. In his backpack, officers found a manifesto listing the names and home addresses of AI executives.
What made the incident particularly revealing was the public reaction. While AI-informed circles condemned the violence, many social media users celebrated it. Instagram comments included "Where can we support their bail fund?" and "New love language just dropped." This vibe mismatch reflected a deeper problem: the AI safety community's messaging was not resonating with ordinary people, even though the underlying anxiety about AI was very real.
A recent NBC News poll found that more voters feel negatively about AI than about ICE, the immigration enforcement agency. This suggests that mainstream concern about artificial intelligence runs both deep and wide. Yet the issues keeping ordinary Americans awake at night differ sharply from what AI safety researchers prioritize.
Which Concerns Matter Most to Everyday Americans?
The disconnect stems from a fundamental mismatch in priorities. Mainstream anxieties about AI center on immediate, tangible threats, while the AI safety community has historically focused on abstract, long-term risks. The public's top concerns include:
- Job Loss: The prospect of automation and AI replacing human workers across industries remains a primary concern for voters and workers nationwide.
- Cyberattacks: AI-powered hacking and security breaches pose immediate risks to personal data and critical infrastructure.
- Mass Surveillance: AI systems enabling unprecedented monitoring of citizens by governments and corporations threaten privacy and autonomy.
- Gradual Disempowerment: The slow erosion of human control and decision-making authority as AI systems become more autonomous.
Interestingly, these mainstream concerns are not entirely separate from existential risk (x-risk) thinking. As one political advocate noted, addressing present-day harms may actually be the gateway to getting existential risks on the legislative table.
"These are the things that people are feeling right now. It doesn't mean that they don't believe in Skynet," said Alex McCoy, Head of Left Coalition at political advocacy group Humans First.
How the AI Safety Movement Built Its Own Isolation
The AI safety field was constructed on a specific assumption: that a small group of very smart people, armed with enough compute, money, and brainpower, could solve the problem quietly without public input. This strategy made sense at the time. Researchers worried that public attention might inadvertently spark a race toward superintelligence, making the problem worse. They reasoned that preventing tech companies from building a "deadly machine god" required secrecy, not transparency.
For years, AI safety discourse unfolded largely behind closed doors. Long, information-dense blog posts, closed-door meetings, and direct access to policymakers carried more weight than mainstream media coverage. Why explain "AGI" (artificial general intelligence) to ordinary people when policy proposals were being drafted in private rooms? The logic was internally consistent, but it created a critical vulnerability.
The caution wasn't entirely unfounded. Philosopher Nick Bostrom's 2014 bestseller "Superintelligence: Paths, Dangers, Strategies" brought existential AI risk from the rationalist blogosphere to the New York Times bestseller list and helped motivate OpenAI's founding. Public attention to x-risk was seen as dangerous precisely because it could accelerate the very race it warned against. But in choosing to operate largely behind the scenes, the AI safety community inadvertently created a vacuum.
Why Public Engagement Might Be the Only Path Forward
The Molotov cocktail attack and its celebration by mainstream social media users revealed something uncomfortable: the public's frustration with AI development is real, widespread, and increasingly unmoored from expert guidance. When the AI safety community fails to channel that frustration into constructive political action, it gets channeled into something else entirely.
A radicalized young man who believes superintelligence will kill everyone might see attacking a CEO's house as an application of utilitarian logic. If a trolley is approaching a fork in the tracks, utilitarian reasoning says to pull the lever and send it toward one victim instead of many. To someone convinced that AI development leads to human extinction, burning down a company headquarters might seem like the lesser of two evils.
The AI safety community has historically worried that addressing mainstream concerns about job loss and surveillance would come at the expense of x-risk messaging and potentially knock existential risks off the legislative agenda entirely. But the opposite may be true. Politicians respond to what they believe their constituents want, and the vast majority of Americans do not want AI to continue along its current trajectory. The momentum is there; people are beginning to take action, however imprecisely, driven by deeply rooted feelings of unfairness and demoralization.
"Traditional AI safety advocates may just need to cede enough control of their narrative to harness it," the analysis noted.
Celia Ford, Transformer News
The path forward requires a strategic shift. Rather than dismissing mainstream concerns as distractions from x-risk, the AI safety community could use them as a foundation for broader coalition building. Job loss, surveillance, and loss of control are not separate from existential risk; they are stepping stones toward it. By acknowledging and addressing the concerns that resonate with ordinary Americans, AI safety advocates could build the political power necessary to influence how AI development actually proceeds.
The alternative is clear: without a broadly appealing coalition, the narrative around AI risk will be shaped by those willing to act most radically, not those with the most expertise. The AI safety movement's greatest challenge may not be technical at all, but political and social. Building a movement that includes "normies" is not a compromise of AI safety principles; it may be the only way to actually implement them.