Dark AI Is Now a $2.1 Million Underground Market. Here's What Cybercriminals Are Actually Doing With It

Dark AI refers to artificial intelligence built or used for illegal activity, and it's becoming one of the fastest-growing threats in cybersecurity. Cybercriminals are using AI tools to generate convincing phishing emails, clone voices, create deepfakes, and automate attacks at a speed and scale that traditional defenses struggle to match. The dark web intelligence market is expected to reach $2.1 million by 2030, growing at a yearly rate of 21.8 percent, signaling rapidly increasing demand for tools that enable hidden online criminal activity.

The financial impact is already staggering. According to the 2025 Internet Crime Complaint Center (IC3) report, victims reported 22,364 complaints linked to AI-powered cybercrime, with losses exceeding $893 million. This includes employment fraud losses of nearly $13 million, investment scams exceeding $632 million, and romance scams totaling over $19 million. These numbers reveal how dark AI is enabling criminals to scale attacks faster and reach more people than ever before.

What Exactly Is Dark AI, and How Are Criminals Using It?

Dark AI isn't necessarily new technology. Instead, it's the weaponization of existing AI tools for malicious purposes. Some dark AI tools are custom-built from scratch, while others are malicious clones or modified versions of mainstream systems. Tools like FraudGPT, WormGPT, and DarkBERT are designed specifically to help cybercriminals bypass safety protections and generate harmful content at scale.

The ways criminals deploy dark AI are diverse and increasingly sophisticated. Here are the primary attack vectors:

  • Social Engineering: Dark AI can analyze your online activity and craft highly personalized messages that impersonate your bank, employer, or even friends, making attacks feel legitimate and urgent.
  • Voice Cloning: AI can replicate someone's voice using short audio clips, enabling scammers to conduct deepfake phone calls impersonating family members or executives requesting money or access codes.
  • Phishing Content Generation: AI can write realistic emails, fake login pages, and messages with near-perfect grammar, creating convincing fake websites that closely mimic trusted brands.
  • Malware Creation: AI helps attackers write harmful software faster, allowing even beginners to create viruses that steal passwords or spy on devices without deep technical knowledge.
  • Attack Automation: AI tools can scan thousands of devices simultaneously to find weaknesses, increasing the volume of attacks and reducing the time organizations have to respond.
  • Adversarial AI Attacks: Cybercriminals can trick security systems by slightly modifying files, images, or data so that AI detection tools fail to identify threats.
  • Biometric Bypass: Some attackers use AI-generated faces, voices, or fingerprints to fool identity verification systems used by banking and mobile apps.

A particularly concerning development is the emergence of agentic AI tools, which can act independently. Rather than simply writing a scam email, these systems can plan an attack, create malware, send messages, and adjust tactics automatically without human intervention. Researchers from Google DeepMind discovered that attackers can create AI agent traps, which are hidden instructions embedded on websites that manipulate AI tools into leaking sensitive data or performing harmful actions.

How Are Real-World Dark AI Tools Actually Operating?

Several notorious dark AI tools have emerged on the criminal underground, each designed to lower the barrier to entry for cybercriminals. FraudGPT is a DarkGPT-style tool specifically designed to help scammers create convincing phishing emails, fake websites, and social media scams. It can generate messages that sound natural and urgent, making them significantly harder to detect than generic phishing attempts.

WormGPT removes the safety protections found in mainstream AI systems, allowing attackers to generate harmful code or scam messages with ease. Even someone with minimal technical skill can use WormGPT to create convincing attacks that appear polished and trustworthy. PoisonGPT demonstrates how attackers can secretly modify AI models to spread false or misleading information, potentially influencing what users see online through AI chatbots with built-in web browsers.

DarkBERT is trained on data from the dark web, giving it insight into how cybercriminals communicate and operate. While researchers developed it to study threats, similar models could help criminals refine scams or identify targets more effectively by learning from hidden online communities.

Importantly, not all dark AI comes from custom-built tools. Many attackers misuse everyday tools like ChatGPT and Google Gemini by combining them with other software or using workarounds to bypass safety limits. Scammers can use these mainstream tools to draft realistic phishing emails, create fake job offers, or generate scripts for scam calls. When paired with voice cloning or image manipulation tools, this creates convincing scams such as deepfake romance schemes or impersonation attacks.

Steps to Protect Yourself From Dark AI Threats

  • Verify Requests Through Secondary Channels: If you receive an urgent request for money or sensitive information from someone claiming to be a family member, colleague, or executive, hang up and call them back using a phone number you know is legitimate. Voice cloning is convincing, but this simple step defeats it.
  • Be Skeptical of Unsolicited Communications: Legitimate organizations rarely ask for passwords, access codes, or financial information via email, text, or phone. If something feels off, contact the organization directly using a verified phone number or website.
  • Use Multi-Factor Authentication: Enable multi-factor authentication (MFA) on all critical accounts, including email, banking, and work systems. This adds a layer of protection even if your password is compromised through phishing or malware.
  • Stay Informed About Current Attack Tactics: Cybercriminals continuously evolve their methods. Regularly update your knowledge about current threats through reputable security sources, and share this information with family and colleagues.
  • Report Suspicious Activity: If you encounter a suspected dark AI scam, report it to the IC3 (Internet Crime Complaint Center) or your local law enforcement. These reports help authorities track emerging threats and protect others.

Why Organizations Need to Rethink Their Defense Strategy

The rise of dark AI is forcing organizations to move beyond traditional, reactive cybersecurity approaches. Penetration testing, the practice of simulating attacks to identify vulnerabilities, is evolving to account for AI-powered threats. According to industry analysts, penetration testing will increasingly shift toward AI-driven continuous security validation by 2026, where machine learning algorithms orchestrate real-time attack simulations across hybrid environments.

However, AI-powered tools alone cannot replace human expertise. While AI excels at pattern recognition and processing vast datasets, it lacks the adversarial mindset that defines effective offensive security. A skilled human tester can understand the social engineering implications of a poorly segmented network, recognize how attackers might exploit physical access, and think creatively about attack chains in ways that current AI models cannot.

The most effective defense strategy combines AI-driven automation with human judgment. AI-powered platforms can scan vast attack surfaces, correlate vulnerabilities, and even chain exploits together at speeds no human team could match. But human testers investigate findings, innovate new attack approaches, and understand the business context that determines which vulnerabilities pose the greatest real-world risk.

For organizations, this means demanding continuous, not periodic, testing. AI enables security validation to move from annual or quarterly exercises to continuous assessment. Additionally, critical assets deserve the attention of experienced human testers who can think beyond the algorithm, and security teams should evaluate their providers' AI maturity to ensure they're using AI as a force multiplier, not a shortcut.

The emergence of dark AI has fundamentally altered the cybersecurity landscape. Criminals now have access to the same powerful tools that defenders use, and they're deploying them at scale. Understanding how dark AI works, recognizing the threats it enables, and implementing both technical and behavioral defenses are no longer optional. They're essential to protecting yourself, your organization, and your data in an era where the line between human and machine-generated attacks has become increasingly blurred.