ChatGPT's New Image Tool Can Create Convincing Fake Documents, Bank Alerts, and IDs
OpenAI's latest image-generation model, ChatGPT Images 2.0, can create highly convincing fraudulent documents, including fake prescriptions, bank alerts, medical records, and government IDs, with minimal effort. Released last week, the tool represents a significant leap in AI image quality, particularly in rendering the text and visual details that make forgeries appear authentic. While the company includes safety protections, a journalist's investigation found that these guardrails fail to prevent the creation of materials that could facilitate widespread fraud.
What Makes ChatGPT Images 2.0 Different from Previous Models?
The new model excels at a task that has long challenged AI image generators: creating images with legible text. Previous versions of image models struggled to produce realistic-looking visuals containing words, often resulting in distorted street signs, bungled billboards, and illegible text. ChatGPT Images 2.0 overcomes this limitation, making it a much more sophisticated graphic-design tool. However, this same capability makes it exceptionally effective for perpetrating fraud.
In testing the model, a journalist was able to generate over 100 fraudulent images with little prompting. The tool readily produced images of fake health documents, including doctor's notes, vaccination cards, and medical tests. It also generated forged financial materials such as invoices, receipts, and tax forms. Many of these images were highly persuasive, complete with fully legible text, shading, and other visual props that increased their photorealism.
How Can Fraudsters Use These AI-Generated Images?
The practical applications for scammers are extensive and concerning. OpenAI's model particularly excels at creating fake screenshots, which could supercharge commonplace scams. A bad actor could email a target an image of a fake Uber receipt alongside a link to report suspicious activity. The recipient, confused to see a receipt for a trip they never took, might click the fraudster's sketchy link, accidentally handing over sensitive information in a classic phishing scam.
The types of fraudulent materials that can be generated include:
- Financial Documents: Fake Chase Bank checks, Wells Fargo alerts for unusual account activity, wire-transfer confirmations, and receipts from various services
- Medical Records: Prescriptions for opioids and ADHD medication, vaccination cards, doctor's notes, and medical test results
- Government and Travel Documents: Fake driver's licenses, passports, boarding passes, and other identification materials
- Screenshots and Alerts: Fake social media posts, bank alerts, and confirmation messages designed to trick users into clicking malicious links
The FBI released its annual report on internet crimes last month, and for the first time ever, it included a section on AI scams, which cost Americans nearly $1 billion last year. Expense-reimbursement fraud, where employees fake receipts, is already on the rise.
How Effective Are OpenAI's Safety Protections?
OpenAI prohibits the use of its technology for fraud or scams, and the new model includes what the company describes as "multiple layers of image-specific safety protection." However, these protections are not working effectively. When a journalist shared examples of fraudulent imagery with OpenAI and asked why such a diverse array of fraudulent images could be generated, a company spokesperson stated that OpenAI's goal is "to give users as much creative freedom as possible" while still enforcing "usage policies."
The company also noted that images generated with ChatGPT include certain metadata intended to track AI-generated content. However, OpenAI has previously acknowledged that this metadata can be "easily removed either accidentally or intentionally" by uploading an image to social media or simply taking a screenshot.
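The reason a screenshot defeats this tracking is structural: provenance information rides along as ancillary data inside the image file, while a screenshot captures only the rendered pixels, so the new file never contains it. The sketch below illustrates the mechanism using PNG text chunks as a stand-in for real provenance metadata (actual systems, such as C2PA manifests, are more elaborate, and the "screenshot" here is simulated by keeping only the pixel-carrying chunks):

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    # A PNG chunk: 4-byte length, 4-byte type, data, CRC over type+data.
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def make_png(metadata: dict) -> bytes:
    # Build a minimal 1x1 grayscale PNG with tEXt metadata chunks
    # standing in for provenance metadata attached by a generator.
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)  # 1x1, 8-bit gray
    idat = zlib.compress(b"\x00\xff")  # filter byte + one white pixel
    png = b"\x89PNG\r\n\x1a\n" + chunk(b"IHDR", ihdr)
    for key, value in metadata.items():
        png += chunk(b"tEXt", key.encode() + b"\x00" + value.encode())
    return png + chunk(b"IDAT", idat) + chunk(b"IEND", b"")

def list_chunks(png: bytes) -> list:
    # Walk the file and return the chunk type names it contains.
    pos, types = 8, []
    while pos < len(png):
        length, = struct.unpack(">I", png[pos:pos + 4])
        types.append(png[pos + 4:pos + 8].decode("ascii"))
        pos += 12 + length  # length field + type + data + CRC
    return types

def screenshot(png: bytes) -> bytes:
    # Simulate a screenshot: re-encode only the pixel data, dropping
    # every ancillary chunk -- which is where the metadata lives.
    pos, keep = 8, b"\x89PNG\r\n\x1a\n"
    while pos < len(png):
        length, = struct.unpack(">I", png[pos:pos + 4])
        if png[pos + 4:pos + 8] in (b"IHDR", b"IDAT", b"IEND"):
            keep += png[pos:pos + 12 + length]
        pos += 12 + length
    return keep

original = make_png({"Source": "ai-image-generator"})
copy = screenshot(original)
print(list_chunks(original))  # ['IHDR', 'tEXt', 'IDAT', 'IEND']
print(list_chunks(copy))      # ['IHDR', 'IDAT', 'IEND']
```

The copy is pixel-for-pixel identical to the original, yet carries no trace of where it came from, which is why provenance metadata alone cannot reliably flag AI-generated images once they are reshared.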
"The limits of the applications of this technology is really only limited by a fraudster's imagination," said Mason Wilder, research director at the Association of Certified Fraud Examiners.
Google's image-generation tools also allow users to create fraudulent materials, though the company has similar restrictions against using its tools for fraud. When a journalist sent Google images made with its models, a spokesperson said the tools "continually get better" at enforcing guardrails. Google embeds AI-generated images with an imperceptible watermark and offers a detection tool called SynthID. In testing, SynthID was quite effective at identifying images generated with Google's models. However, most people are not going to run every image they see through such a detection tool.
What Are Banks and Institutions Doing to Combat This Threat?
Financial institutions and government agencies face a growing challenge in preventing fraud enabled by these advanced image-generation tools. A Chase Bank spokesperson acknowledged the severity of the problem, stating that "we need an ecosystem-wide effort, including from AI companies, to strengthen guardrails and help stop these crimes at the source." Even if the top AI companies were to radically improve their own guardrails, there would still be the problem of open-source models available to anyone with technical knowledge.
Fraud-prevention experts are working on technological fixes, but according to Wilder, "the good guys are almost always a step behind." The challenge is compounded by the fact that image technologies have long aided scammers. In the 1990s, as computerized color copiers and home printers became commonplace, American banknotes were redesigned to ward off counterfeiters. For decades, people have used tools such as Photoshop to manipulate digital imagery. But faking photos has never been so fast and cheap.
The emergence of ChatGPT Images 2.0 and similar tools represents a significant escalation in the accessibility and sophistication of fraud. While some of the images generated by the model contain minor errors, many are convincing enough to fool casual observers or people in high-pressure situations. The combination of photorealistic imagery, legible text, and ease of use makes this technology a powerful tool in the hands of bad actors, and current safeguards appear insufficient to prevent widespread misuse.