
Artificial intelligence has become the corporate catchphrase of our time. It is now the lead story in investor calls, the centerpiece of product launches, and the magic dust sprinkled across marketing copy. But as with the “greenwashing” wave in ESG and “security washing” in cybersecurity, inflated or misleading claims about AI capabilities are quickly creating a new area of legal exposure. This phenomenon – now widely referred to as “AI washing” – is already attracting regulatory attention and will likely form the basis of litigation and enforcement actions in the near term.
Defining AI Washing
AI washing refers to the exaggeration, misrepresentation, or unsubstantiated description of the role, sophistication, or benefits of AI in a product or service. The legal question is not whether the product uses AI in some capacity, but whether the statements made about it are materially misleading to their intended audience – whether that audience is a regulator, an investor, a customer, or a contractual counterparty.
These misstatements can be deliberate, crafted to attract funding or market share, or negligent, arising from a lack of internal technical understanding or inadequate controls over public statements. The familiar patterns include branding a basic rules-based system as “machine learning” or “autonomous,” presenting best-case laboratory performance metrics as though they reflect real-world results, and making sweeping claims about eliminating bias or protecting privacy without disclosing data limitations or ongoing testing protocols.
Securities Law Implications
For public companies, the most immediate exposure lies in securities law, particularly under Section 10(b) of the Securities Exchange Act and Rule 10b-5 thereunder, which prohibit materially false or misleading statements made in connection with the purchase or sale of securities. AI capabilities are increasingly highlighted in 10-K risk factors, MD&A (Management’s Discussion & Analysis) sections, 8-K disclosures, and earnings call commentary.
If those capabilities are materially overstated or non-existent, the SEC may treat them as misrepresentations – especially where internal documents contradict public claims or where a market-moving AI announcement is unsupported by the underlying technology. The structure of such cases would echo recent ESG enforcement actions: bold public claims fuel investor enthusiasm and share price growth; reality fails to match the marketing; a corrective disclosure causes a market drop; and the company faces SEC investigation and shareholder class actions.
With the SEC explicitly identifying AI washing as an enforcement priority in 2024, issuers face heightened obligations to verify AI statements before making them public.
Consumer Protection and FTC Oversight
Consumer protection law provides another potent enforcement avenue. Section 5 of the FTC Act prohibits unfair or deceptive acts or practices, and the FTC’s 2023 “Keep Your AI Claims in Check” guidance squarely addresses AI washing. The agency warns against overstating capabilities, making unqualified claims of bias elimination, and presenting performance statistics without adequate context.
The FTC expects substantiation – competent and reliable evidence – before a claim is made. Enforcement history in analogous areas is telling: the FTC has brought actions against companies that misrepresented cybersecurity safeguards, such as encryption standards, even where no breach had yet occurred.
State Unfair and Deceptive Acts and Practices (UDAP) statutes present parallel risks. Many require only a showing that a reasonable consumer would have been misled, dispensing with the need to prove intent and often allowing for private rights of action with statutory damages. For companies marketing AI-based products directly to consumers, especially in sensitive areas like health, finance, or education, both federal and state regulators will take an unforgiving view of unsupported or overstated claims.
Contractual Liability Risks
Contractual liability is another underappreciated risk vector. Statements about AI capabilities made in RFP responses, sales pitches, or marketing collateral can find their way into agreements through express warranties or by incorporation of pre-contractual representations.
Under Article 2 of the UCC for goods and the common law for services, such warranties are enforceable even without proof of intent to mislead. If the promised functionality fails to materialize, the counterparty can assert breach of warranty or misrepresentation, and may circumvent contractual limitations of liability where intentional misrepresentation or fraud is found.
This is especially acute in SaaS arrangements where AI functionality is central to the value proposition. Counsel must be vigilant in reviewing pre-contract statements and ensuring that integration clauses prevent unintended elevation of promotional language to contractual obligation.
Product Liability Theories
Product liability law offers yet another frame for AI washing claims. Misrepresentations about AI capabilities can be treated as design defects or failures to warn if they affect the safe use of a product.
Imagine a medical device marketed as “autonomous” that in reality requires significant human oversight. If a clinician relies on the claim of autonomy and a patient suffers harm, plaintiffs could argue that the misstatement itself constituted a defect. Courts have already accepted that misleading labeling or promotional statements in the pharmaceutical context can create actionable defects; there is little reason to think AI-based products will be treated differently, particularly in safety-critical domains.
Sector-Specific Regulatory Enforcement
Sector-specific regulation amplifies these risks. In financial services, healthcare, transportation, and defense, companies must make accurate statements about their systems in license applications, compliance certifications, and periodic regulatory filings. Misrepresenting AI capabilities in such contexts can result in civil fines, suspension or revocation of licenses, and in some cases criminal prosecution under 18 U.S.C. § 1001 for false statements to federal agencies.
A fintech lender claiming its AI underwriting tool has been “certified bias-free” when no such certification exists risks not only CFPB or state banking regulator action but also potential DOJ involvement. While AI-specific regulatory regimes are still evolving, existing obligations to ensure accuracy and truthfulness in filings already apply fully to AI claims.
Practical Steps
Claim Substantiation Process: Establish a formal review process for all AI-related claims, involving legal, compliance, and technical leads.
Evidence Retention: Maintain contemporaneous documentation – testing data, validation reports, expert reviews – that supports each claim.
Contract Safeguards: Draft integration and limitation-of-liability clauses to ensure marketing claims do not become unqualified contractual warranties.
Training: Educate marketing, sales, and investor relations teams on AI claim risk, drawing parallels to ESG and cybersecurity enforcement.
Regulatory Alignment: Incorporate FTC, SEC, and sector-specific guidance into compliance programs now – before an investigation forces the issue.