Why Your ChatGPT Brainstorm Probably Looks Like Everyone Else's

A new study across 20+ AI models and over 100 human participants found that ChatGPT, Gemini, Llama, and other leading AI systems generate ideas that cluster in nearly identical creative territory, regardless of which company built them. When researchers tested these models on standard divergent thinking tasks, human responses scattered across a wide conceptual space while AI answers bunched together closely, suggesting a fundamental difference in how these systems approach creative problem-solving.
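The idea of "spread" can be pictured with a toy sketch. Assuming each idea is represented as an embedding vector (the vectors and numbers below are illustrative, not the study's data), the average pairwise distance between ideas captures how much conceptual ground a group covers:

```python
from itertools import combinations
import math

def avg_pairwise_distance(vectors):
    """Mean Euclidean distance between all pairs of idea embeddings.
    A larger value means the ideas cover more conceptual ground."""
    pairs = list(combinations(vectors, 2))
    total = sum(math.dist(a, b) for a, b in pairs)
    return total / len(pairs)

# Toy 2-D "embeddings": human ideas scatter, AI answers cluster.
human_ideas = [(0.0, 0.0), (4.0, 1.0), (1.0, 5.0), (6.0, 6.0)]
ai_ideas    = [(2.0, 2.0), (2.1, 2.0), (2.0, 2.2), (1.9, 2.1)]

print(avg_pairwise_distance(human_ideas) > avg_pairwise_distance(ai_ideas))  # True
```

The scattered human list produces a much larger average distance than the tightly bunched AI list, which is the pattern the study reports at scale.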

Why Do All AI Models Think So Much Alike?

The convergence isn't a one-company problem or a specific architecture flaw. The models tested came from different companies, different training pipelines, and different design philosophies, yet their creative outputs overlapped substantially. This points to something deeper: the shared nature of the training data itself.

Every major model learns from an enormous but ultimately finite slice of human-written text. The internet, books, papers, and forums all reflect the same recurring patterns, popular ideas, and dominant cultural frameworks. When you ask an AI to "be creative," it's drawing from that same compressed pool, shaped by what humans have already written and valued enough to publish. It can remix and recombine, but it can't truly diverge beyond those patterns the way a person with lived experience, personal stakes, or genuine surprise can.

There's also no intent behind the output. A human brainstorming session is messy because people bring in weird associations, personal memories, and random context from their morning commute. AI doesn't have a Tuesday morning. That absence limits how far its ideas can stray, no matter how clever the prompt engineering gets.

What Happens When Millions of People Use the Same Tools?

One person using ChatGPT for a brainstorm isn't a crisis. Millions of people using the same handful of tools for writing, ideation, marketing copy, and product naming, all drawing from the same underlying patterns, is a different story. The research also flags a behavioral risk: when people see a list of AI-generated ideas, there's a tendency to refine and select from that list rather than extend beyond it. The AI becomes the ceiling, not the floor.

That shift, repeated across millions of interactions daily, could gradually compress the diversity of ideas circulating in workplaces, classrooms, and creative industries. The study tested whether obvious fixes could widen the creative spread. Increasing the temperature (the randomness dial on AI outputs) helped slightly but quickly degraded coherence. Prompting models to "be more creative" or "think outside the box" nudged results only marginally. Neither approach meaningfully widened the spread. The ceiling appears to be baked into how these systems are built.
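Why temperature trades coherence for variety is easy to see in a minimal sketch of how sampling works. The logit values below are hypothetical, but the mechanism is standard: scores for candidate next tokens are divided by the temperature before being turned into probabilities, so a high temperature flattens the distribution and unlikely tokens get picked more often:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert model scores (logits) into sampling probabilities.
    Higher temperature flattens the distribution; lower sharpens it."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 2.0, 1.0]                 # hypothetical scores for three next tokens
cool = softmax_with_temperature(logits, temperature=0.5)
hot  = softmax_with_temperature(logits, temperature=2.0)

# At high temperature the top token's dominance shrinks, so outputs
# get more varied but drift off-topic more easily.
print(cool[0] > hot[0])  # True
```

Turning the dial up buys surface-level variety, but because the underlying distribution is the same, it mostly adds noise rather than genuinely new directions.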

How to Protect Your Creative Thinking When Using AI

  • Close the chat and write independently: After reading AI suggestions, close the conversation and write your own list without looking back. Then compare. You'll often find your independent ideas diverge sharply from the AI's. That gap is valuable. That's where your actual creative contribution lives.
  • Use constraint-heavy prompts: Vague prompts produce the most generic clustering. Specific, constraint-heavy prompts like "give me 5 uses for a paperclip that involve water and frustration" push models toward less-traveled territory, even if imperfectly.
  • Start without opening a chatbot: Try spending the first 10 minutes of any creative session without opening a chatbot. You might be surprised how different, and how much better, your unfiltered starting point actually is.
  • Track your AI dependency: Be intentional about monitoring how often you're starting from AI suggestions versus your own raw thinking. This awareness helps you maintain your creative edge over time.
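The "close the chat and compare" habit can even be made concrete. As a rough sketch (exact string matching is a crude proxy; a real comparison would need semantic matching, and the example lists here are invented), Jaccard overlap gives a quick read on how far your independent list diverged from the AI's:

```python
def jaccard_overlap(ideas_a, ideas_b):
    """Share of ideas the two lists have in common:
    0 = fully divergent, 1 = identical lists."""
    a, b = set(ideas_a), set(ideas_b)
    return len(a & b) / len(a | b)

# Hypothetical paperclip-uses brainstorm: AI's list vs. your own.
ai_list = ["hold papers", "bookmark", "zipper pull", "reset-button tool"]
my_list = ["lock pick", "tiny wire sculpture", "bookmark", "fish hook"]

overlap = jaccard_overlap(ai_list, my_list)
print(f"{overlap:.2f}")  # low score means your ideas diverged
```

A low score is the gap the first tip describes: the ideas only you brought to the table.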

The study's most uncomfortable finding isn't that AI is uncreative. It's that the problem is structural and shared across the entire industry, not isolated to one product or one bad design choice. Until training data, model architecture, or both change in fundamental ways, this ceiling is going to stay close to the floor for truly divergent thinking.

Researchers and developers are aware of the problem, but there's no clean fix on the immediate horizon. Diversity-boosting techniques exist but trade off against quality. The honest answer right now is that human creative range still outpaces these systems when you measure at scale, and we should be intentional about keeping it that way.