Congress Confronts AI's Existential Risks: What Lawmakers Fear Most About the Technology

A congressional subcommittee convened to discuss artificial intelligence's potential, but the conversation quickly shifted toward existential fears about the technology's unchecked power. Members of Congress aired concerns ranging from federal workers mishandling sensitive data with AI chatbots to the possibility that AI systems could fundamentally alter military decision-making. The House Oversight Committee's subcommittee roundtable on "Artificial Intelligence and American Power" brought together lawmakers, AI company executives, academics, and industry implementers to grapple with how rapidly the technology is advancing.

What Specific AI Risks Are Lawmakers Most Worried About?

The discussion revealed a wide range of concerns that cut across party lines and reflect genuine uncertainty about how to govern emerging technology. Representatives raised issues that span security, ethics, and national defense:

  • Military Autonomy: Rep. John McGuire, R-Va., expressed alarm that AI systems could prevent U.S. military forces from taking lethal actions if the model concluded such actions violated "moral" behavior, potentially compromising national security decisions.
  • Deepfake Pornography: Rep. William Timmons, R-S.C., asked whether it should be illegal for AI systems to use someone's likeness to create pornographic images without consent.
  • Government Data Misuse: Rep. James Walkinshaw, D-Va., raised concerns that federal workers may be using AI chatbots to handle sensitive government data, creating potential security vulnerabilities.
  • Climate and Energy Impact: Rep. Yassamin Ansari, D-Ariz., highlighted concerns about AI's intensive energy usage and its potential effects on climate change.
  • Cybersecurity Threats: Lawmakers openly fretted about disclosures from companies like Anthropic, which recently announced it is restricting its Mythos AI model to select customers because of the model's apparent ability to bypass traditional cybersecurity defenses and hack major institutions, including banks, government agencies, and large corporations.

The tone of the discussion reflected a broader anxiety about whether Congress can keep pace with technological change. Rep. Maxwell Frost of Florida, currently the youngest member of Congress, expressed skepticism about the institution's ability to respond effectively.

"I don't have faith in this institution to actually put the common sense guardrails in place. And then we fast forward ten years, and the house is on fire," Frost stated.


Rep. Dave Min, D-Calif., warned that constituents across the country will soon feel AI's impacts directly, and that without proactive thinking, "I fear that we're going to have a revolution on our hands."


Are Industry Leaders and Experts Reassuring Congress?

The assembled experts and industry leaders offered a mixed message. While they highlighted AI's vast and growing capabilities, they also urged lawmakers to be thoughtful and well-informed when crafting policy. Some experts pushed back against catastrophic scenarios, though they acknowledged real risks.

"I don't think it's going to kill us. At the same time, I do think it's important for the federal government to seriously fund AI safety research. We need to know a lot more about how the models work," said Robert Atkinson, founder of the Information Technology and Innovation Foundation, a technology think tank.


Mark Beall, president of government affairs at the AI Policy Network Inc. and a former Pentagon official, warned that the country risked losing its competitive edge on AI if Congress did not act on key national security concerns. The message was clear: inaction carries its own risks.

Spencer Overton, a George Washington University law professor, addressed the question of whether AI companies are good actors. He argued that even if incentives for AI companies "are really what they should be," the responsibility for protecting the public ultimately falls on elected officials.

"Constituents are looking for you, not for companies, to step up and protect them. They're trusting you, the person that they voted for, to do that, as opposed to companies. That's the way the system works, right?" Overton explained.


How Can Lawmakers Begin to Address AI Governance?

While the roundtable did not produce specific legislative proposals, experts and lawmakers outlined several areas where Congress should focus its attention:

  • Fund AI Safety Research: Experts emphasized that the federal government must invest in understanding how AI models work at a fundamental level, including their vulnerabilities and failure modes.
  • Establish Clear Legal Frameworks: Congress needs to define what should be illegal, such as using AI to create non-consensual deepfake pornography, and clarify how existing laws apply to AI systems.
  • Coordinate National Security Policy: Lawmakers must work with defense and intelligence officials to ensure AI systems do not compromise military decision-making or create new vulnerabilities in critical infrastructure.
  • Monitor Federal Agency Use: Congress should establish oversight mechanisms to ensure federal workers are not misusing AI chatbots with sensitive government data.

Rep. Eric Burlison, R-Mo., struck a more optimistic note, praising the industry's ability to automate manufacturing and asking what congressional districts should do to attract AI firms for business development. His comments reflected the tension at the heart of the debate: lawmakers want to harness AI's economic benefits while preventing potential harms.

The roundtable discussion underscores a fundamental challenge facing policymakers. AI technology is advancing faster than Congress can legislate, and the stakes are high. Whether the government can develop thoughtful, effective policy before AI systems become too powerful to control remains an open question. What is clear is that lawmakers are no longer dismissing these concerns as speculative; they are treating AI governance as an urgent national priority.