The AI Washing Crisis: Why Boards Are Now Personally Liable for Overstated AI Claims

Companies are facing a new governance crisis: regulators are treating false claims about AI capabilities as fraud, and board members can be held personally liable under "knew or should have known" standards. The Securities and Exchange Commission (SEC), Department of Justice (DOJ), and Federal Trade Commission (FTC) have all launched enforcement actions targeting AI misrepresentation, with the SEC's Cyber and Emerging Technologies Unit designating "AI washing" as an immediate enforcement priority.

What Exactly Is AI Washing, and Why Should Boards Care?

AI washing refers to false, misleading, or exaggerated claims about AI adoption, sophistication, or impact. Unlike traditional corporate fraud, AI washing exploits the opacity surrounding artificial intelligence systems. Because AI operates as what researchers call "invisible capital," lacking standardized measurement frameworks or quality benchmarks, exaggerated claims are difficult to disprove. Combine that opacity with intense pressure from investors, customers, and competitors to demonstrate AI capabilities, and the result is widespread misrepresentation.

The scale of enforcement is accelerating rapidly. In the last five years, more than 50 enforcement actions have targeted AI-related misstatements, and private shareholder class actions alleging AI-related misrepresentation have doubled year-over-year. The problem is not limited to a single jurisdiction. State-level AI legislation continues to proliferate, with 1,208 AI-related bills introduced across all 50 states, and 145 enacted into law in 2025 alone.

How Are Regulators Defining and Prosecuting AI Fraud?

Regulatory agencies are prosecuting AI washing under traditional fraud statutes while simultaneously building new enforcement frameworks. The SEC's 2026 Examination Priorities explicitly target AI-related disclosures, and the EU AI Act imposes mandatory transparency requirements with fines up to 35 million euros or 7 percent of global revenue, whichever is higher.

What makes AI washing particularly dangerous for board members is the personal liability exposure. Directors face liability under the "knew or should have known" standard, meaning they cannot claim ignorance about AI capabilities their company claims to possess. This is a fundamental shift from traditional compliance frameworks, where boards could delegate technical oversight to management. With AI washing, boards are now expected to actively verify management claims about AI systems.

Steps to Implement Verifiable AI Governance at the Board Level

  • Mandate Quantified AI Quality Metrics: Boards should require implementation of verifiable AI quality measurement systems, such as standardized frameworks that quantify AI maturity across governance, technical robustness, responsible AI, and strategic alignment dimensions. These metrics function similarly to Sarbanes-Oxley internal controls, providing boards with governance assurance mechanisms comparable to financial reporting standards.
  • Establish Executive Ownership Through a Chief Intellectual Property Officer: Organizations should designate a Chief Intellectual Property Officer (CIPO) or equivalent executive to integrate technical validation, legal disclosure requirements, and strategic value creation. This executive role serves as the central accountability mechanism for AI governance and prevents siloed compliance efforts.
  • Require Management Certification of AI Disclosures: Boards should mandate that management certify all AI-related claims and disclosures, similar to certification requirements under Sarbanes-Oxley. This creates a clear audit trail and personal accountability for executives making AI-related statements.
  • Report Verified AI Quality Scores in Public Disclosures: Organizations should integrate quantitative AI governance metrics into annual reports and Environmental, Social, and Governance (ESG) disclosures. This transforms AI governance from an internal compliance exercise into a competitive advantage with investors and insurers.
  • Conduct Independent Audits of AI Systems: Boards should require third-party audits of AI systems to verify management claims about capabilities, performance, and compliance. Independent verification provides the credibility necessary to defend against enforcement actions and shareholder litigation.
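The first step above, quantified AI quality metrics, can be illustrated with a minimal sketch. The four dimension names follow the list (governance, technical robustness, responsible AI, strategic alignment); the weights and the reporting floor are hypothetical illustrations, not values from any published framework.

```python
# Hypothetical maturity dimensions from the list above; weights are illustrative.
WEIGHTS = {
    "governance": 0.30,
    "technical_robustness": 0.30,
    "responsible_ai": 0.25,
    "strategic_alignment": 0.15,
}

def maturity_score(scores: dict[str, float]) -> float:
    """Weighted 0-100 maturity score across the four dimensions.

    `scores` maps each dimension to a 0-100 auditor-assessed value.
    Raises if a dimension is unscored, so gaps surface instead of
    silently defaulting, mirroring the certification step's audit trail.
    """
    missing = WEIGHTS.keys() - scores.keys()
    if missing:
        raise ValueError(f"unscored dimensions: {sorted(missing)}")
    return sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)

def board_summary(scores: dict[str, float], floor: float = 60.0) -> str:
    """One-line board report flagging dimensions below a (hypothetical) floor."""
    weak = sorted(d for d, s in scores.items() if s < floor)
    flag = f"; below floor: {', '.join(weak)}" if weak else ""
    return f"AI maturity {maturity_score(scores):.1f}/100{flag}"
```

A summary line like `board_summary({"governance": 80, ...})` could then feed the public-disclosure step, giving investors a single verifiable figure rather than an unquantified capability claim.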

The urgency of this governance shift cannot be overstated. Intangible assets, including AI systems, algorithms, and data assets, now comprise approximately 92 percent of S&P 500 market value, a dramatic increase from just 68 percent in 1995. Yet this transformation has occurred without corresponding transparency mechanisms. Boards are being asked to oversee assets that represent the majority of corporate value while lacking standardized measurement frameworks.

"The AI washing crisis is not simply a legal compliance issue, it is a test of corporate credibility, governance maturity, and fiduciary responsibility," according to analysis from J.S. Held's office of the Chief Intellectual Property Officer.

James E. Malackowski, Chief Intellectual Property Officer at J.S. Held LLC

What Happens When Companies Get Caught Overstating AI Capabilities?

The consequences extend far beyond regulatory fines. When a company is found to have misrepresented AI capabilities, the enforcement action often triggers simultaneous violations across multiple regulatory frameworks. A single security failure in an AI system can constitute a breach of the EU AI Act, the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA), and sector-specific regulations all at once.

Directors and officers also face personal liability exposure. The "knew or should have known" standard means that board members cannot claim they were unaware of AI capabilities their company publicly claimed to possess. This creates a fiduciary duty to actively verify AI claims, not simply delegate oversight to management.

Insurance implications are equally significant. Directors and Officers (D&O) liability insurance underwriters are increasingly scrutinizing AI governance maturity as part of underwriting decisions. Companies with verified AI governance frameworks receive better insurance pricing and coverage terms, while those lacking governance documentation face higher premiums or coverage exclusions.

Why Traditional Compliance Programs Are Insufficient for AI Risk

Legacy disclosure controls and compliance programs were designed for traditional assets and business models. They are fundamentally inadequate for AI because AI systems are dynamic: unlike conventional software, which behaves consistently once deployed, AI models learn, drift, and change behavior as data distributions shift over time. A model that is compliant on its release date may not remain compliant six months later.

This creates a continuous compliance obligation that traditional annual audit cycles cannot address. Organizations must implement ongoing monitoring systems that detect when AI model behavior deviates from validated baselines. They must maintain end-to-end audit trails of data inputs, model decisions, and output actions. They must establish defined processes for human review of AI decisions in contexts where outcomes significantly affect individuals.
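The monitoring obligation described above can be sketched as a baseline-deviation check. The population stability index (PSI) used here is one common drift statistic; the bucket count and the alert threshold are illustrative assumptions, not regulatory values.

```python
import math

def psi(baseline: list[float], current: list[float], buckets: int = 10) -> float:
    """Population stability index between a validated baseline sample and
    current production scores. Larger values indicate larger distribution drift."""
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / buckets or 1.0  # degenerate baseline: single bucket

    def bucket_fractions(data: list[float]) -> list[float]:
        counts = [0] * buckets
        for x in data:
            i = min(int((x - lo) / width), buckets - 1)
            counts[max(i, 0)] += 1  # clamp values outside the baseline range
        # Small epsilon avoids log(0) when a bucket is empty.
        return [max(c / len(data), 1e-6) for c in counts]

    b, c = bucket_fractions(baseline), bucket_fractions(current)
    return sum((cv - bv) * math.log(cv / bv) for bv, cv in zip(b, c))

def drift_alert(baseline: list[float], current: list[float],
                threshold: float = 0.2) -> bool:
    """True when drift exceeds a (hypothetical) review threshold, which would
    trigger the human-review process and be logged in the audit trail."""
    return psi(baseline, current) > threshold
```

Run periodically against model output scores, this turns "the model may have drifted" into a logged, reviewable event, the kind of end-to-end trail the paragraph above calls for.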

The regulatory landscape is also fragmenting globally, making a single compliance strategy impossible. Europe's EU AI Act introduces a four-tier risk classification model with mandatory conformity assessments for high-risk systems. China's approach is fundamentally different, emphasizing data localization and algorithm filing with the Cyberspace Administration of China. Multinationals must build systems that are architecturally interoperable while respecting deep regional differences in data sovereignty and content control.
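The EU AI Act's four-tier model mentioned above lends itself to a mechanical internal register. The tier names below are the Act's actual categories; the paraphrased obligation summaries are illustrative, not legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Paraphrased obligations per tier; an illustrative summary, not legal text.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "banned from the EU market",
    RiskTier.HIGH: "mandatory conformity assessment, logging, human oversight",
    RiskTier.LIMITED: "transparency duties (e.g. disclose AI interaction)",
    RiskTier.MINIMAL: "no additional obligations",
}

def requires_conformity_assessment(tier: RiskTier) -> bool:
    """High-risk systems need a conformity assessment before market entry."""
    return tier is RiskTier.HIGH
```

Recording an assessed tier per system lets compliance obligations be derived mechanically, one way to keep an EU-facing register interoperable with the separate filings other jurisdictions require.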

For boards and executives, the message is clear: AI governance is no longer optional, and treating it as a compliance checkbox is a liability risk. Boards that implement verifiable, quantified AI governance frameworks transform a regulatory burden into a competitive advantage with investors, insurers, and regulators. Those that fail to act face personal liability exposure, enforcement actions, and shareholder litigation.