FrontierNews.ai

The AI Pretending Problem: Why 1 in 6 Workers Fake Adoption (And What Actually Works)

The uncomfortable truth about enterprise AI adoption is that many workers aren't struggling to use new tools; they're actively pretending to use them while quietly reverting to old methods. According to research from GP Strategies, one in six workers admits to faking AI adoption entirely, performing compliance rather than genuine transformation. This phenomenon, described as "real-life LARPing" (live-action role-playing), reveals that the AI adoption crisis isn't primarily a training problem; it's a leadership and psychology problem.

The scale of this pretending extends far beyond frontline workers. Data from Pluralsight's 2025 AI Skills Report found that 91% of C-suite executives admit they've pretended to know more about AI than they actually do, and 79% of workers are doing the same. When organizations tie AI use to performance objectives and productivity bonuses without providing genuine support for behavior change, employees face enormous pressure to appear compliant rather than admit uncertainty or struggle.

Why Are Workers Faking AI Adoption?

The root cause isn't incompetence or laziness. Fear drives the dysfunction. Research from Irrational Labs found that 8% of people believe AI will replace them, 14% believe it will replace their peers, and 29% believe it will replace workers in other industries. This pattern reveals classic optimism bias: the further the threat sits from one's own job, the higher the perceived risk, while the risk to oneself is discounted. The human brain is wired for homeostasis and self-protection, making it systematically underestimate the need to adapt unless absolutely necessary.

When AI adoption is treated as a training problem rather than a change initiative, people respond by performing compliance instead of embracing genuine transformation. One in three employees is actively pushing back, refusing to use AI tools or skipping training altogether. This isn't a skills gap; it's a signal that leadership hasn't created the conditions for people to feel safe experimenting with unfamiliar tools.

What Does Genuine AI Adoption Actually Require?

Behavior change requires three elements working together: capability (skills and knowledge), opportunity (conditions to apply them), and motivation (reason to care). Most organizations obsess over capability through training programs while ignoring the other two, which is why many AI rollouts stall before delivering value.

Self-Determination Theory offers a sharper lens for understanding why AI adoption either embeds or fades. Sustainable behavior change depends on three psychological needs being met: autonomy, competence, and relatedness. People don't adopt AI simply because they've been trained or told to; they adopt it when it makes them feel more in control of their work, more confident in delivering it, and more connected to how their peers are working.

How to Build Real AI Adoption: The BRAVE Model

  • Belonging: Create psychological safety by addressing fears head-on rather than pretending they don't exist. Position AI as augmentation rather than replacement, and mean it. Be transparent about what AI can and cannot do, and treat experimentation as something to be valued rather than punished.
  • Relevance: Connect AI adoption to work that actually matters to employees. Help teams understand how AI removes repetitive, meaningless tasks so they can focus on higher-impact work that requires human judgment and creativity.
  • Access: Ensure people have genuine access to tools, training, and support. This goes beyond providing software; it means creating time and space for people to experiment without fear of failure.
  • Visibility: Make AI usage visible and shared across teams. When people see peers successfully using AI, it normalizes adoption beyond early adopters and builds social proof that the tools actually work.
  • Empowerment: Involve people in shaping how AI transforms their work rather than having it done to them. When people are part of the design process, they own the outcome.

The contrast between two major companies illustrates this principle. When Klarna announced it was replacing its customer service team with AI bots, it looked like a bold leap into the future. Within a year, the company was rehiring. It had discovered that stripping out human judgment, context, and nuance left the system fragile and unreliable. Efficiency gains proved unsustainable without expertise in the loop.

IKEA took a completely different approach. It built BILLIE, an AI bot that now handles 47% of customer inquiries, but instead of laying off call center workers, it asked a different question: what could these talented people do now that they were freed from repetitive tasks? The answer was remote interior design. IKEA trained 8,000 of them, creating a brand-new revenue channel that now accounts for 3.3% of total revenue, roughly $1.4 billion.

"IKEA didn't try to do more with less. They did more with augmentation. While Klarna asked, 'How do we replace people?' IKEA asked, 'What else could our people do to have more impact?'"

GP Strategies Research

This principle reflects a psychological insight: friction creates ownership. When people build their own furniture, they value it more. The same principle applies to AI adoption. When people are involved in shaping how AI transforms their work, they own it. That ownership doesn't happen by accident; it requires getting close to the work itself, sitting with teams to understand what is painful or repetitive, and reimagining it together.

The Broader AI Readiness Challenge

Beyond the psychology of adoption, organizations face structural challenges in implementing AI at scale. PrivOS released a Business AI Scale Assessment designed to help companies identify where AI tools can be integrated based on their specific operational structure, industry, and workforce. The assessment is built on three research frameworks: the Anthropic Economic Index, U.S. BLS Employment Projections, and McKinsey's automate/augment/keep-human framework.

The assessment provides organizations with an AI Readiness Score, estimated cost savings by department, a workforce strategy for which tasks should be automated versus augmented versus kept human-led, and a 30-, 90-, and 365-day execution roadmap. According to PrivOS, organizations in retail, logistics, professional services, and manufacturing have used the assessment to identify workflow inefficiencies and develop phased adoption plans based on their operational structures.

A small food and beverage business owner in Turkey who completed the assessment noted that it provided a more structured approach to AI planning compared to generalized recommendations he'd previously encountered. The assessment helped his internal team understand where AI tools could support existing processes while maintaining human oversight in areas requiring industry expertise or customer interaction.

The key insight across all these approaches is consistent: AI adoption succeeds when organizations treat it as a human change initiative supported by technology, not a technology rollout that happens to involve people. When fear is driving the dysfunction, the framework needs to counteract fear directly and create the conditions for genuine behavior change. That requires leadership to think differently about how people actually adopt new ways of working.