Google Gemini's Password Generator Is Dangerously Predictable, Security Researchers Warn
Google Gemini and other popular AI chatbots generate passwords that appear secure at first glance but contain repeating patterns that make them significantly easier to crack than truly random passwords. When tested with a straightforward request to generate 20-character passwords using all character types, Gemini produced strings that followed an identical, human-readable sequence: letter, number, special character, letter, letter, number, special character, and so on.
Why Do AI Chatbots Fail at Password Generation?
The core problem lies in how large language models (LLMs) approach randomness. LLMs are AI systems trained on vast amounts of text to predict and generate human-like responses. Unlike password managers and security tools, which use cryptographically secure pseudorandom number generators (CSPRNGs) specifically designed to produce unpredictable output, LLMs cannot mimic true randomness. Instead, they generate text based on statistical regularities in their training data, so their "random" passwords follow recognizable structures.
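By contrast, a CSPRNG-backed generator draws each character independently and uniformly, so no position's character class depends on the one before it. A minimal sketch of the approach using Python's standard-library `secrets` module (illustrating the technique, not any particular tool's implementation):

```python
import secrets
import string

# Full printable set: 52 letters + 10 digits + 32 punctuation = 94 characters.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Draw each character independently from an OS-backed CSPRNG,
    so the result has no predictable character-type sequence."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

pw = generate_password()
print(pw)
```

Each `secrets.choice` call is backed by the operating system's CSPRNG, yielding about log2(94) ≈ 6.55 bits per character, or roughly 131 bits for a 20-character password.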
When a researcher requested multiple password batches from Gemini, the AI not only generated weak passwords but also accompanied each one with a reassuring explanation of its security. Gemini calculated entropy metrics and assured users the passwords were strong, even though all five passwords in one batch shared the exact same character-type sequence. This false confidence is particularly concerning because it may lead users to trust passwords they believe are strong when they are actually vulnerable.
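The gap between claimed and actual entropy can be quantified. A back-of-the-envelope calculation, assuming a 94-character printable set (52 letters, 10 digits, 32 specials) and the repeating letter/number/special pattern described above:

```python
import math

LENGTH = 20

# Entropy if every position is drawn uniformly from all 94 printable characters.
uniform_bits = LENGTH * math.log2(94)

# Entropy if an attacker knows the repeating class pattern
# letter, number, special, letter, letter, number, special, ...:
# each position is then confined to a single character class.
pattern = "LNSLLNS"
class_sizes = {"L": 52, "N": 10, "S": 32}
patterned_bits = sum(
    math.log2(class_sizes[pattern[i % len(pattern)]]) for i in range(LENGTH)
)

print(f"uniform:   {uniform_bits:.1f} bits")    # ≈ 131.1 bits
print(f"patterned: {patterned_bits:.1f} bits")  # ≈ 96.2 bits
```

A known pattern strips roughly 35 bits of entropy here, shrinking the attacker's search space by a factor of about 2^35 (tens of billions).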
Is This Problem Limited to Google Gemini?
Researchers at Irregular, a cybersecurity firm, tested several AI chatbots and discovered that each one created passwords with clear, exploitable patterns. The issue is not unique to Gemini but rather a fundamental limitation of how LLMs function. Because these systems are designed to generate human-readable text, they struggle with the concept of true randomness, which is essential for password security. An attacker or malicious AI agent who recognizes the pattern can predict what character type comes next, dramatically reducing the computational effort required to crack the password through brute-force attacks.
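Checking a batch for this weakness is straightforward: map every password to its character-class sequence and see whether the sequences all match. A small sketch (the password batch below is invented for illustration, not taken from the research):

```python
import string

def class_sequence(pw: str) -> str:
    """One code per character: L for letters, N for digits, S for specials."""
    return "".join(
        "L" if c in string.ascii_letters else "N" if c in string.digits else "S"
        for c in pw
    )

def shares_pattern(batch: list[str]) -> bool:
    """True if every password in the batch has the same class sequence,
    the telltale sign of a patterned (predictable) generator."""
    return len({class_sequence(pw) for pw in batch}) == 1

# Invented batch mimicking the repeating letter/number/special pattern.
batch = ["a1!bc2@", "x9#yz8%", "q4$rs7&"]
print(shares_pattern(batch))  # → True
```

An attacker who confirms a shared sequence this way knows exactly which character class to try at each position.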
How to Generate Secure Passwords Safely
- Use a dedicated password manager: These applications generate cryptographically secure passwords and store them encrypted, eliminating the need to memorize complex strings or rely on AI-generated alternatives.
- Create a custom random generator: Users can build their own password generator using spreadsheet applications like Excel or Google Sheets and store passwords offline on a hardware security key for added protection.
- Avoid AI chatbots for password creation: Do not use Gemini, ChatGPT, or other LLM-based tools to generate passwords, even if they provide reassuring explanations about security strength.
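On the do-it-yourself option: spreadsheet RAND() functions are generally not cryptographically secure, so a short script backed by the operating system's CSPRNG is a stronger custom generator. A sketch that also guarantees at least one character from each class (that guarantee is an added assumption, not a requirement stated in the article):

```python
import secrets
import string

def custom_password(length: int = 20) -> str:
    """Generate a password from the OS CSPRNG, guaranteeing at least
    one letter, one digit, and one special character."""
    classes = [string.ascii_letters, string.digits, string.punctuation]
    alphabet = "".join(classes)
    # One guaranteed character per class, the rest drawn from the full set.
    chars = [secrets.choice(cls) for cls in classes]
    chars += [secrets.choice(alphabet) for _ in range(length - len(classes))]
    # Shuffle with a CSPRNG-backed shuffler so class positions are not fixed.
    secrets.SystemRandom().shuffle(chars)
    return "".join(chars)

print(custom_password())
```

The final shuffle matters: without it, the guaranteed letter, digit, and special would always occupy the first three positions, reintroducing exactly the kind of predictable structure the article warns about.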
The most critical takeaway is that users should not accept false assurances from software about password security. When an AI tool generates a password and then explains why it is secure, that explanation may be misleading. Password managers, by contrast, use proven cryptographic algorithms that produce genuinely unpredictable sequences without requiring users to understand the technical details.
The gap between perceived security and actual security in AI-generated passwords highlights a broader challenge in AI development: systems that sound confident and provide plausible-sounding explanations are not necessarily reliable. For something as critical as account security, relying on purpose-built security tools rather than general-purpose AI chatbots remains the safest approach.