The Federal Trade Commission (FTC) is reportedly in the early stages of investigating OpenAI over allegations that its flagship conversational AI, ChatGPT, may have made "false, misleading, disparaging, or harmful" statements about individuals. An immediate crackdown seems unlikely, but the inquiry signals the agency's growing scrutiny of the AI industry.
The Washington Post first reported the news, referencing a 20-page letter to OpenAI asking for information on disparagement complaints. The FTC, maintaining the confidentiality of its investigations, declined to comment.
Earlier this year, the FTC launched a new Office of Technology to combat tech sector "snake oil" and cautioned companies about making unfounded AI claims. The regulator emphasized that AI claims are subject to the same truth requirements as any other industry.
Although this isn't the first time the agency has tackled AI issues, it appears that OpenAI, currently a global leader in the field, will need to provide justification for its practices.
Such investigations typically arise from formal complaints or lawsuits implying regulatory non-compliance. For instance, an Australian mayor filed a defamation suit against OpenAI, claiming that ChatGPT falsely stated he had been convicted of bribery and sentenced to prison. This sort of potentially defamatory content could constitute the "reputational damage" indicated in the FTC's letter to OpenAI.
Whether such output counts as publishing, speech, or private communication is a genuinely complex legal question, but the company may nonetheless have to explain its position. Coming after several FTC wins against tech companies over privacy and AI-related violations, the move suggests the regulator intends to hold the AI industry to the same standards.