Why Artists Are Rejecting AI Guidance, Even When It Helps Them Understand the Tools
Artists using generative AI image tools face a surprising tension: learning resources that clarify how AI works can actually feel limiting to their creative process. A multi-stage research study of visual artists and hobbyists, including a survey of 159 creatives, found that while simplified, structured guidance helps users understand how AI interprets prompts and generates images, many creatives still choose self-experimentation over formal instruction, fearing that guidance constrains their artistic freedom.
What's Driving the Disconnect Between Learning and Creative Freedom?
Researchers conducted interviews with 8 visual artists and hobbyists, followed by a survey of 159 creatives, and then a hands-on study with 17 participants to understand how artists approach tools like Midjourney and DALL-E. The core finding reveals a paradox at the heart of AI literacy for creative professionals: structured guidance can improve conceptual understanding, yet many creatives still prefer self-experimentation, describing it as essential to preserving creative autonomy during exploration.
The research identified two dominant learning approaches among creatives. Most rely on either self-experimentation or third-party online tutorials, such as YouTube walkthroughs and online courses, rather than official platform guidance. However, a significant barrier emerged: complex AI terminology and jargon made it difficult for non-expert users to follow existing tutorials and explanations.
When researchers created simplified, visual structured guidance that removed technical jargon, participants acknowledged it helped them understand how the AI system interprets inputs and decides what to generate next. Yet even with this improvement in clarity, many still preferred the freedom of self-experimentation, viewing guided learning as potentially limiting their ability to explore unique creative directions.
How Are Educators Helping Artists Navigate AI as a Creative Tool?
Filmmakers and educators working with generative AI are taking a different approach. Rather than presenting AI as a finished solution, they're framing it as a powerful but imperfect collaborator that requires human judgment and artistic vision. Bob Gosse, a filmmaker and professor at the University of North Carolina School of the Arts, described his own journey with AI tools like Midjourney and large language models. He began experimenting with image generation in summer 2022, then shifted focus to testing how AI could assist with narrative problems and character development.
"It's very powerful, but it's also a Ferrari with a broken steering wheel. You can't control it. Creatively it doesn't have a point of view. It'll model the language that it has a point of view, but it's just a machine basically giving its best guesstimate as to what the next word is or what the image might look like," said Bob Gosse.
Gosse emphasized that educators must help students understand AI's limitations while encouraging them to apply their own dreams, emotions, and artistic perspective to the tool's outputs. The challenge is teaching students to use generative AI as a means to express human creativity, not as a replacement for it.
The broader context matters here. Gosse noted that AI represents the most seismic technological disruption in the arts since the advent of film itself. Unlike previous disruptions, such as recorded sound in music or digital streaming, AI has generative properties and a form of agency that previous technologies lacked. This makes it fundamentally different from tools artists have adapted to in the past.
Ways Creatives Can Build Effective Mental Models of AI Tools
- Embrace Situated Learning: Seek in-context explanations and optional guidance that appears when you need it, rather than comprehensive upfront tutorials that may feel prescriptive or overwhelming.
- Balance Experimentation with Conceptual Understanding: Spend time both exploring freely and pausing to understand why certain prompts produce specific results, building intuition about how the system interprets language.
- Demand Simpler Explanations: Advocate for learning resources that use plain language and visual examples instead of technical jargon like "latent space" or "diffusion models," making AI concepts accessible without sacrificing accuracy.
- Recognize Tool Limitations: Understand that AI image generators lack lived experience and personal perspective, so your role as an artist is to inject meaning, emotion, and intentionality into the outputs.
- Adapt Strategies Across Versions: Be prepared to adjust your approach as AI models evolve, since techniques that work with one version of Midjourney or DALL-E may not transfer directly to the next iteration.
The research highlighted a critical insight for platform designers and educators: "more tutorials" is not the solution. Instead, learning support should be optional, adaptable, and respectful of creative agency (the creator's ability to shape both the creative process and final outcomes). Creatives' literacy needs are uneven, personal, and goal-driven, meaning a one-size-fits-all approach will inevitably fail.
This tension between guidance and autonomy reflects a deeper question about how humans and AI collaborate. Artists are not simply learning to use a tool; they're negotiating their role in a creative partnership where the machine has no artistic intent but can generate unexpected possibilities. The most successful creatives appear to be those who view AI as a sparring partner rather than an instructor, using self-experimentation to discover what the tool can do while maintaining their own creative vision as the guiding force.
As generative AI continues to evolve, the challenge for educators, platform designers, and artists themselves will be creating learning environments that foster conceptual understanding of how these systems work without diminishing the creative freedom that makes artistic work meaningful. The research suggests that the future of AI in the arts depends less on better tutorials and more on designing systems and support structures that respect artists' autonomy while helping them build accurate mental models of what AI can and cannot do.