The Common Sense Problem: Why Self-Driving Cars Still Struggle With the Unexpected

Self-driving cars excel at routine driving tasks but struggle when faced with unusual, unpredictable situations that humans handle instinctively. At the Financial Times Future of the Car conference, Igal Raichelgauz, chief executive of Autobrains, highlighted a critical gap in current autonomous driving systems: the lack of "common sense" reasoning needed to navigate unexpected road hazards.

The warning comes on the heels of a concrete failure that exposed this vulnerability. A Waymo robotaxi in San Antonio drove into standing water on a roadway and was eventually swept downstream. The incident triggered a recall of nearly 3,800 Waymo vehicles across the United States. According to a U.S. Department of Transportation notice, the software "may allow the vehicle to slow and then drive into standing water on higher-speed roadways". This wasn't a sensor malfunction or a software crash; it was a decision-making failure rooted in how these AI systems are trained.

Why Do Modern Self-Driving Systems Fail at Common Sense?

The root cause lies in how autonomous driving AI is built. Most production self-driving systems rely on closed-loop, example-driven machine learning. In plain terms, engineers feed the AI thousands or millions of examples of driving scenarios, and the system learns patterns from those examples. This approach works remarkably well for frequent, predictable situations: merging onto highways, stopping at red lights, navigating intersections in normal weather.
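
To make that concrete, here is a deliberately tiny Python sketch of example-driven behavior. Every feature, value, and action name is invented for illustration; real systems learn deep models from raw sensor streams rather than matching against a three-row lookup table, but the pattern-matching character is the same:

from math import dist

# Toy "training set": hand-labeled scenarios the system has seen many times.
# Each feature vector is [speed, gap to nearest obstacle, road reflectivity],
# all scaled to 0..1. Features and actions here are hypothetical.
EXAMPLES = [
    ([0.8, 0.9, 0.2], "maintain_speed"),  # clear, dry highway
    ([0.8, 0.3, 0.2], "brake"),           # traffic slowing ahead
    ([0.2, 0.1, 0.2], "stop"),            # queue at an intersection
]

def choose_action(features):
    """Act according to the closest stored example (pure pattern matching)."""
    _, action = min((dist(ex, features), act) for ex, act in EXAMPLES)
    return action

# Familiar situations resolve as intended:
print(choose_action([0.78, 0.85, 0.2]))  # -> "maintain_speed"
print(choose_action([0.75, 0.35, 0.2]))  # -> "brake"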

But here's the problem: rare events and unusual conditions fall outside the training data. When a self-driving car encounters a scenario it has never seen before, or one that appears only in a tiny fraction of its training examples, the system struggles. Flooding, unusual road geometry, atypical behavior from other road users, or construction zones that don't match typical patterns all represent what researchers call "out-of-distribution events". The AI doesn't have enough examples to learn from, so it defaults to whatever pattern seems closest in its training data, often with poor results.
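
Continuing the toy sketch above, a flooded roadway produces a feature vector unlike anything in the training set, yet nearest-pattern matching still returns an answer, and the answer is wrong. One illustrative mitigation, shown here with an arbitrary cutoff, is a distance threshold that routes unfamiliar inputs to a safer behavior; real out-of-distribution detection is an active research area and far harder than this:

from math import dist

# Same hypothetical training set as in the previous sketch.
EXAMPLES = [
    ([0.8, 0.9, 0.2], "maintain_speed"),
    ([0.8, 0.3, 0.2], "brake"),
    ([0.2, 0.1, 0.2], "stop"),
]
OOD_THRESHOLD = 0.5  # hypothetical cutoff; choosing it well is itself hard

def choose_action_with_ood_check(features):
    distance, action = min((dist(ex, features), act) for ex, act in EXAMPLES)
    if distance > OOD_THRESHOLD:
        return "conservative_fallback"  # unfamiliar input: slow down or stop
    return action

# Standing water: high reflectivity, otherwise resembling a clear highway.
# Plain nearest-pattern matching would have returned "maintain_speed".
flooded_road = [0.8, 0.85, 0.95]
print(choose_action_with_ood_check(flooded_road))  # -> "conservative_fallback"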

"One of the biggest gaps in autonomous driving AI today is common sense," said Igal Raichelgauz, chief executive of Autobrains.

Human drivers, by contrast, rely on intuition and reasoning. When you see standing water on a road, you don't need to have driven through that exact scenario before. You understand that water can hide hazards, that your vehicle might hydroplane, and that it's safer to avoid it. You apply general knowledge about how the world works. Current AI systems lack this kind of commonsense reasoning.

How Are Engineers Addressing This Safety Gap?

The industry is beginning to recognize that scale alone won't solve the problem. Simply collecting more driving data and training larger models helps, but it cannot cover every possible edge case. Instead, engineers and safety teams are implementing several complementary strategies:

  • System-Level Constraints: Rather than relying solely on the AI to make decisions, developers are adding explicit rules that govern behavior in high-risk conditions. For example, a rule might prevent the vehicle from entering standing water or uncertain terrain, regardless of what the AI perceives (a sketch of this rule layer, combined with the conservative fallback described below, follows this list).
  • Simulation and Scenario Generation: Engineers are building more sophisticated simulations that include rare and dangerous scenarios, allowing the AI to "practice" handling them in a safe, controlled environment before deployment on real roads.
  • Conservative Fallback Behaviors: When the system encounters a situation it doesn't fully understand, it can default to conservative actions, such as slowing down, stopping, or alerting a human operator rather than proceeding with confidence.
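
The sketch below shows how the first and third strategies fit together structurally. The hazard conditions, action names, and planner interface are all invented for illustration; the point is only that explicit rules and a conservative fallback sit above the learned component and get the final say:

def learned_planner(scene):
    """Stand-in for the ML component: returns whatever pattern matching suggests."""
    return scene.get("suggested_action", "proceed")

HARD_RULES = [
    # (predicate over the perceived scene, overriding action) - hypothetical
    (lambda s: s.get("standing_water_detected"), "stop_and_request_assist"),
    (lambda s: s.get("uncertain_terrain"), "slow_to_crawl"),
]

def decide(scene):
    # 1. Explicit constraints run first and can veto the planner outright.
    for predicate, override in HARD_RULES:
        if predicate(scene):
            return override
    # 2. Low planner confidence triggers a conservative fallback.
    if scene.get("planner_confidence", 1.0) < 0.5:
        return "slow_and_alert_operator"
    # 3. Otherwise, defer to the learned behavior.
    return learned_planner(scene)

# A flooded roadway is vetoed before the learned planner is even consulted:
print(decide({"standing_water_detected": True, "suggested_action": "proceed"}))
# -> "stop_and_request_assist"

The design point is that the safety-critical veto does not depend on the learned model having seen a similar situation before.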

Waymo's response to the flood incident demonstrates this approach. The company issued a software update to address the specific failure mode identified by regulators. However, the broader lesson is that autonomous driving safety requires more than just better perception or faster processors. It requires systems that can reason about uncertainty and prioritize caution when facing the unknown.

What Does This Mean for the Future of Robotaxis?

Waymo is currently testing its autonomous ride-hailing service in London with Jaguar Land Rover vehicles, and these trials include a human safety driver on board. This operational constraint reflects the industry's current reality: even the most advanced robotaxis are not yet ready for fully unsupervised operation in all conditions. The presence of a safety driver provides a fallback when the AI encounters a situation it cannot handle safely.

The gap between pattern recognition and commonsense reasoning is not a minor technical detail. It's a fundamental challenge that will shape how quickly autonomous vehicles can expand beyond controlled environments. Cities and transportation authorities are paying close attention to incidents like the Waymo flood case, and regulatory scrutiny is intensifying. For engineers and researchers, the priority is clear: the next generation of self-driving systems must move beyond learning from examples and develop some form of reasoning about the physical world and its hazards.

The autonomous driving industry has made remarkable progress in recent years, but the Waymo recall and Autobrains' warning signal that the hardest challenges may still lie ahead. The path to truly safe, widely deployed robotaxis requires solving not just the technical problem of perception and control, but the deeper problem of how to instill machines with the kind of practical wisdom that humans take for granted.