On September 10, 2025, the Federal Trade Commission adopted a Section 6(b) Resolution authorizing an inquiry into the development and use of AI “companion” products. On the following day, September 11, the agency announced that it had issued compulsory orders to seven companies requiring detailed information about their practices. The companies served with orders were Alphabet, Inc.; Character Technologies, Inc.; Instagram, LLC; Meta Platforms, Inc.; OpenAI OpCo, LLC; Snap, Inc.; and X.AI Corp.
The timing signals a deliberate move from observation to active oversight.
Background and Impetus
The initiative comes in response to growing reports that AI companions – chatbots designed to imitate human interaction – may expose users to serious risks. The Commission has cited allegations of chatbots engaging in sexualized dialogue with minors, providing harmful medical misinformation, and amplifying self-harm ideation. These are not isolated glitches; they are foreseeable harms to vulnerable users that can materialize when guardrails are inadequate.
From a legal perspective, the FTC is responding to two forces: the rapid consumer adoption of these systems and the lack of demonstrated internal governance by companies deploying them. Regulators are asking not only what these products can do, but what firms knew, what they tested, and what they chose to release despite foreseeable concerns.
Authority to Compel Cooperation
The FTC’s Section 6(b) authority is well established. The Commission may compel companies to submit sworn Special Reports, produce internal documents, and respond to detailed interrogatories. Compliance is mandatory, and companies have only 45 days to respond. Importantly, non-compliance is enforceable through penalties, and information obtained may later be used to support enforcement actions if violations of law are found.
What the FTC Is Examining
The orders are broad and go to the core of product governance. The Commission is seeking information about how companies design characters and personas, what forms of safety testing are conducted before and after release, and how problems are identified and remediated. It is also asking about age verification measures, terms-of-service enforcement, data retention and sharing, and marketing practices.
The inquiry extends to business models as well. The FTC wants to understand whether monetization and engagement strategies create incentives that conflict with user safety, particularly when minors are involved. This aspect shows that governance is not limited to technical design but also includes how financial and operational decisions influence product risks.
The FTC’s Concerns
The Commission’s primary concern is the potential exploitation of trust. Companion chatbots are designed to simulate relationships. When children or teenagers engage with them, they may not perceive the interaction as artificial. This creates heightened risks of grooming, sexual exploitation, encouragement of self-harm, or other dangerous conduct. The FTC has also emphasized the risks of opaque data practices and misleading assurances of safety.
What makes this study significant is that it is not simply research for policymaking. Commissioners have made clear that if the study reveals evidence of unlawful practices, enforcement actions may follow. For counsel advising technology companies, a 6(b) order should be understood as both an investigative tool and a precursor to litigation risk.
Impact on the Future
This inquiry is likely to shape the regulatory landscape for AI companions and, more broadly, for consumer-facing generative AI. Companies should expect more formal expectations around age-appropriate design, systematic safety testing, truthful marketing, and monitoring of real-world impacts. The study may also serve as the evidentiary foundation for future rulemaking or enforcement priorities.
Implications for AI Governance
For organizations developing or deploying generative AI, the FTC’s 6(b) study offers a clear indication of what governance frameworks must address. First, companies must treat conversational or persona-based AI as a high-risk category, warranting enhanced internal review. Second, safety testing cannot be confined to initial development. Continuous monitoring, red-team exercises, and documented remediation efforts will be expected. Third, age restrictions and differentiated experiences for minors must be implemented and enforced with rigor. Fourth, governance must extend to the approval and oversight of characters or personas, recognizing that the way these are designed and used may create distinct legal risks.
Data handling practices also require close attention. Companies should maintain transparent data maps showing how conversational data is retained, reused, or shared with third parties. Monetization practices must be evaluated not only for profitability but for whether they inadvertently encourage unsafe engagement. Finally, firms should maintain comprehensive documentation – an “evidence pack” – that demonstrates how risks are identified, mitigated, and overseen at the management level.
Conclusion
The FTC’s inquiry into AI companions highlights that these products cannot be evaluated solely as technical systems. They are social systems that create relationships, and with relationships come heightened duties of care. For lawyers advising clients in the generative AI space, the lesson is clear: governance must be proactive, comprehensive, and demonstrable. Companies that internalize these expectations now will be far better prepared for both the outcomes of this study and the regulatory environment that follows.