

🛡️ AI Governance: A Focus on FTC Actions Against AI Business Practices

Jul 6, 2025 | Blog


As artificial intelligence continues to transform industries – from legal and financial services to e-commerce and creative production – companies face increasing pressure to move fast and differentiate in a competitive market. But as the technology evolves, so do the risks. The more powerful the AI, the more potent the consequences when it’s used – or marketed – irresponsibly.

That’s where AI governance comes in. Governance is not merely about compliance after the fact. It’s about designing systems, processes, and controls that anticipate and preempt legal and ethical risks. And in today’s environment, there is no bigger legal risk for consumer-facing AI companies than scrutiny from the Federal Trade Commission (FTC).

In recent months, the FTC has made clear that it intends to apply longstanding consumer protection laws to the claims companies make about AI. Misleading descriptions of how a product works, inflated performance statistics, deceptive earnings promises, and suppression of consumer reviews are all under the microscope. As the agency has emphasized: there is no AI exemption from the rules already on the books.

The Case That Got Everyone’s Attention

In early 2025, the FTC obtained a court order against Empire Holdings Group LLC – doing business as Ecommerce Empire Builders – for selling what it marketed as “AI-powered” e-commerce business opportunities. The company had been selling expensive packages – some reaching $35,000 – that promised users a fully automated storefront generating thousands of dollars in passive monthly income.

The pitch was that AI would handle everything: product selection, customer engagement, even fulfillment. All the customer had to do was buy in. In reality, most consumers saw no return on their investment, and the “AI” component was either misrepresented or ineffective.

According to the FTC’s complaint, the company violated multiple consumer protection laws: it made deceptive earnings claims, failed to provide mandatory disclosures under the Business Opportunity Rule, and included terms that sought to suppress negative consumer reviews – triggering the Consumer Review Fairness Act. The court’s final order banned the company and its owner from promoting business opportunities and required them to surrender assets for restitution.

But the real message was broader: AI businesses that rely on hype instead of substantiated capabilities face real legal consequences.

The Expanding Scope of FTC Oversight

The Empire case was part of a coordinated enforcement initiative called Operation AI Comply, launched by the FTC in September 2024. The initiative reflects a clear policy shift: the Commission is not waiting for Congress to pass new AI laws. Instead, it’s applying traditional consumer protection principles to modern AI tools.

That means AI companies are now being evaluated under frameworks that were originally designed for ads, health claims, get-rich-quick schemes, and subscription traps. But those frameworks are proving flexible and powerful.

Recent FTC actions have targeted:

  • AI-based writing assistants that helped users generate fake product reviews
  • Legal-tech platforms that branded themselves as “robot lawyers” but provided incorrect or misleading guidance
  • Businesses that used “AI-powered” as a marketing term, without actual AI functionality to back it up

These cases underscore an important point: the presence of AI does not lower the legal threshold – it raises it. Because AI tools often amplify scale and automate decision-making, the potential for consumer harm is greater. So, too, is the regulatory scrutiny.

Where the Line Gets Crossed

So what does the FTC actually consider “deceptive” in the context of AI? Based on recent cases and public guidance, there are several patterns that stand out.

Overstatement, Misrepresentation, and Omission

One is overstatement – marketing that exaggerates what AI can do without scientific validation. Another is misrepresentation – calling a system “AI-powered” when the functionality is limited or nonexistent. There’s also concern over omission – failing to disclose critical limitations or risks, especially when a tool is used in sensitive contexts like law, health, or finance.

Outcomes-Based Claims

Perhaps most significantly, the FTC is focused on outcomes-based claims. If a product implies that users can expect certain results – like increased revenue, better performance, or improved decision-making – those claims must be supported by evidence. A customer testimonial is not enough. Anecdotes are not a substitute for data.

Facilitating Deception

The Commission has also warned companies against facilitating consumer deception. Even if a business does not engage in deceptive conduct directly, it may still be liable if its tools are widely used to generate false or misleading content – such as fake reviews or forged documents – without guardrails.

Transparency

Finally, there is transparency. Companies that restrict consumer reviews, discourage public feedback, or quietly edit negative input may find themselves in violation of the Consumer Review Fairness Act – especially if those tactics are buried in obscure terms of service.

Governance as Prevention

All of this reinforces why AI governance cannot be reactive. A governance framework should not be a PR document or a policy that sits in a drawer. It needs to be an operational discipline, integrated into product development, legal review, marketing, and customer experience.

At a minimum, that means having internal controls to:

  • Substantiate all AI-related claims before they are made public
  • Review and approve marketing materials with legal oversight
  • Disclose AI limitations clearly and prominently
  • Monitor how users interact with AI features, especially if misuse is foreseeable
  • Train customer-facing teams on what they can – and cannot – promise
  • Update policies and contracts to preserve consumer rights, including the right to leave honest feedback

Without these elements in place, companies run the risk of creating legal exposure that could have been avoided with modest investments in governance.

The Path Forward

The FTC’s recent activity does not signal a crackdown on AI innovation. Rather, it represents a call for responsible AI commercialization. Building and deploying advanced tools is not inherently problematic. But claiming that those tools can do more than they actually can, or turning a blind eye to how they’re misused, is.

For companies operating in this space, the choice is clear. Either build governance into your business model now – or risk having it imposed by regulators later. The Empire Holdings case is a reminder that the FTC is already paying close attention – and that consumer protection laws apply just as forcefully in the AI era as they did before it.

📩 Contact GalkinLaw to schedule a free initial consultation to discuss your AI legal and governance issues.
