Proposed Bill Could Make Social Platforms Legally Liable for AI-Generated Content

A proposed bipartisan bill seeks to exempt AI-generated content from the Section 230 protections that shield social media companies. If passed, the legislation could hold platforms legally accountable for harmful AI-generated material.

Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT) have proposed legislation that could disrupt the growing use of AI elements in social apps. The bill would strip social media platforms of Section 230 protections with regard to AI-generated content, meaning platforms could be held legally accountable for harmful material created with AI tools.

Section 230 shields social media companies from legal liability for content their users share, on the premise that the platforms themselves aren't publishers or creators of user-provided information. That protection has enabled social media companies to facilitate freer and more open speech. However, critics increasingly argue that the provision is outdated, given how selectively social platforms now amplify and distribute user content.

The proposed legislation would redefine Section 230 immunity to make it inapplicable to claims based on generative AI. The stated aim is to ensure that consumers have the tools necessary to protect themselves from harmful AI-generated content, such as deepfakes, and that the companies involved can be held accountable.

However, the bill's current wording lacks clarity on what liability would mean in practice. For instance, if a user created an image with DALL-E or Midjourney and then shared it on Twitter, it remains ambiguous whether Twitter or the AI tools' creators would be held liable.

The implications of this bill could significantly influence the development of tools by social platforms. Platforms like Snapchat, TikTok, LinkedIn, Instagram, and Facebook that are currently experimenting with integrated generative AI options might need to reevaluate their plans based on the final scope of the legislation.

It's uncertain whether the bill will pass, but given the rapid evolution of generative AI tools, it underscores the growing concern among government and regulatory groups about their potential impact. That concern could fuel further legal debates over AI regulation, copyright, and ownership of AI-generated content not covered by current laws. The key challenge ahead will be updating laws to keep pace with evolving AI applications without stifling development.