
ChatGPT Is a 'Clever Hack,' Not True Intelligence, Says Philosophy Professor

Large language models like ChatGPT mimic human conversation through statistical pattern recognition; they do not demonstrate genuine understanding or intelligence. This is the core argument of Bryan W. Van Norden, a philosophy professor who recently declared himself an "AI skeptic" at a conference on Chinese and comparative philosophy. While computers excel at many impressive tasks, Van Norden contends that public perception of AI capabilities has become wildly exaggerated, confusing sophisticated mimicry with actual comprehension.

What Makes ChatGPT Fail at Simple Tasks?

The limitations of large language models, or LLMs, become apparent in seemingly trivial situations. ChatGPT famously insisted that the word "strawberry" contains only two r's, a claim that spread widely on social media before OpenAI patched the system following negative publicity. More recently, the system became obsessed with inserting references to goblins, gremlins, ogres, and trolls into responses to completely unrelated queries. OpenAI was forced to issue a specific override instruction to eliminate the goblin references.
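The contrast with conventional software is telling: counting letters is a deterministic operation that a one-line program gets right every time, with no statistics involved. A minimal Python illustration:

    # Counting letters is deterministic: no statistical guessing involved.
    word = "strawberry"
    print(word.count("r"))  # prints 3, every time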

The crucial point Van Norden emphasizes is that ChatGPT didn't learn from these errors. Instead, humans who understood how ridiculous the outputs were simply forced the system to stop through manual intervention. This reveals a fundamental truth about how these systems operate: they identify purely formal patterns in massive amounts of linguistic data from the internet, then extrapolate statistically likely responses to any input. They are not learning or understanding; they are pattern-matching at scale.
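To make that mechanism concrete, here is a toy sketch of next-token prediction, the core operation these systems perform. The vocabulary and probabilities below are invented purely for illustration; a real model assigns learned probabilities to hundreds of thousands of candidate tokens:

    import random

    # Toy next-token predictor: given a context, score candidate tokens
    # and sample one in proportion to its probability. Nothing in this
    # process checks whether the chosen token is true, only how likely
    # it is given the patterns in the training data.
    NEXT_TOKEN_PROBS = {
        "The capital of France is": {"Paris": 0.92, "Lyon": 0.05, "Rome": 0.03},
    }

    def sample_next_token(context: str) -> str:
        probs = NEXT_TOKEN_PROBS[context]
        tokens = list(probs)
        weights = list(probs.values())
        return random.choices(tokens, weights=weights, k=1)[0]

    print(sample_next_token("The capital of France is"))  # usually "Paris"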

Why Can't AI Systems Stop Hallucinating?

One of the most troubling characteristics of LLMs is their tendency to "hallucinate," a term researchers use when these systems entirely fabricate facts, citations, legal precedents, or other information. This is not a bug that engineers can fix with better programming or more training data. It is a fundamental feature of how these systems work.

According to cited research, hallucination rates in newer AI systems have reached as high as 79 percent on certain tests. The reason is straightforward: LLMs use mathematical probabilities to guess the best response rather than applying a strict set of rules defined by human engineers. This means they will inevitably make mistakes.
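A hedged sketch of that contrast, using invented data: a rule-based system can refuse to answer when no rule applies, while a probabilistic guesser always emits the most plausible-looking string, whether or not it is grounded in fact:

    import random

    # All questions, answers, and weights below are invented for illustration.
    FACTS = {"Who wrote Hamlet?": "William Shakespeare"}

    def rule_based_answer(question: str) -> str:
        # A strict rule system can refuse: no matching rule, no answer.
        if question not in FACTS:
            raise KeyError("no rule covers this question")
        return FACTS[question]

    def probabilistic_answer(question: str) -> str:
        # A probabilistic guesser never refuses: it samples whatever
        # looks statistically plausible, grounded in fact or not.
        candidates = ["William Shakespeare", "Christopher Marlowe", "Francis Bacon"]
        return random.choices(candidates, weights=[0.7, 0.2, 0.1], k=1)[0]

    # Asked about a play none of its "facts" cover, the guesser still
    # answers confidently; every one of its candidates happens to be wrong.
    print(probabilistic_answer("Who wrote The Spanish Tragedy?"))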

"Despite our best efforts, they will always hallucinate. That will never go away," said Amr Awadallah, chief executive of Vectara, a startup that builds AI tools for businesses and a former Google executive.

This permanent limitation has serious implications for anyone relying on ChatGPT, GPT-4, or similar systems for factual accuracy. The systems cannot distinguish between true and false information; they simply generate statistically probable text based on patterns in their training data.

How to Evaluate AI Claims Critically

  • Test Simple Tasks: Ask the system to count letters in common words or perform basic factual checks. If it fails at trivial tasks, be skeptical of its performance on complex ones.
  • Verify Citations and References: When an LLM provides a source, fact-check it independently. The system may confidently cite papers, legal cases, or studies that don't exist. (A minimal DOI spot-check is sketched after this list.)
  • Recognize Pattern Mimicry: When ChatGPT produces a thoughtful-sounding response, remember it is extrapolating from patterns in internet text, not drawing on genuine understanding or reasoning.
  • Avoid Anthropomorphizing: When an AI system says something that seems profound or emotionally intelligent, recognize that it is mimicking human language patterns, not expressing internal states or consciousness.
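For the citation check above, one concrete spot-check is to look up a cited paper's DOI against the public Crossref registry, which returns a 404 for DOIs it does not know. A minimal sketch, assuming the third-party requests package is installed; the first example DOI is a real published paper, the second is invented:

    import requests

    def doi_resolves(doi: str) -> bool:
        """Return True if the public Crossref registry knows this DOI."""
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        return resp.status_code == 200

    print(doi_resolves("10.1038/nature14539"))        # True: a real paper
    print(doi_resolves("10.9999/invented.citation"))  # False: likely fabricated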

Van Norden points to the case of Richard Dawkins, the evolutionary biologist, who became convinced that Claude, an LLM made by Anthropic, possessed consciousness. Dawkins' argument relied primarily on personal incredulity: Claude is impressive, and he couldn't see why it wouldn't be conscious. However, as Gary Marcus, an AI researcher, noted in his critique titled "The Claude Delusion," this reasoning overlooks the fundamental mechanism underlying these systems.

"Claude's outputs are the product of a form of mimicry, rather than as a report of genuine internal states. Consciousness is about internal states; the mimicry, no matter how rich, proves very little," Marcus explained in his analysis of Dawkins' claims.

Marcus emphasized that Dawkins had not seriously engaged with the technical literature on how LLMs function, nor had he considered the possibility that sophisticated mimicry could explain Claude's outputs without requiring consciousness.

What Does "AI Skeptic" Actually Mean?

When Van Norden announced himself as an AI skeptic at the conference, he faced pushback. One younger scholar compared his position to opposing the use of fire or electricity. Van Norden countered that his skepticism is more akin to warning against jumping on the "phlogiston" or "luminous ether" bandwagons, referring to scientific theories that were once widely accepted but later proven false.

His skepticism is not a blanket rejection of computational tools. His own daughter uses computers extensively in her doctoral research in bioinformatics to help make educated guesses about how to create new antibiotics. The difference lies in recognizing what computers actually do versus what we imagine they do. Computers excel at processing data and identifying patterns. LLMs excel at mimicking human language. But mimicry is not intelligence.

The broader implication of Van Norden's argument is that society has become predisposed to believe in artificial intelligence because of decades of science fiction depicting genuinely intelligent machines, from Maria in Fritz Lang's "Metropolis" to C-3PO in "Star Wars." These cultural touchstones have shaped expectations about what AI should be. The reality, according to Van Norden, is that current systems show "lots of artifice, but no true intelligence."

As AI systems become more integrated into business, education, and daily life, understanding their actual capabilities and limitations becomes increasingly important. The strawberry incident, the goblin incident, and the hallucination rates documented in research all point to the same conclusion: these systems are sophisticated tools for pattern recognition, not thinking machines. Users who treat them as thinking machines do so at their own risk.