Meta's Bold Move: AI-Altered Political Ads to Face Stricter Rules in 2024

Meta, the parent company of Facebook and Instagram, is gearing up for a significant shift in how political ads are handled on its platforms. In 2024, it will introduce stringent rules to tackle deepfakes: digitally manipulated media designed to mislead viewers.

Under these new rules, advertisers running campaigns related to elections, politics, or social issues will be required to disclose when their ads feature content that has been "digitally created or altered" with artificial intelligence (AI). The requirement is a major step toward enhancing transparency and curbing the spread of misleading content in political advertising.

The rules mandate that advertisers make these disclosures when their ads contain photorealistic images, videos, or realistic-sounding audio. This applies to a range of categories, including:

  1. Deepfake Content: Digitally created or altered ads that depict real individuals saying or doing things they never actually said or did.
  2. Fictitious Elements: Ads showcasing photorealistic people who do not exist, or convincingly realistic fabricated events, including manipulated imagery of real events.
  3. Misleading Events: Ads depicting realistic-looking events that allegedly occurred but are not authentic recordings of those events.

Meta has made clear that common and inconsequential digital alterations, such as image sharpening and cropping, will not trigger the new disclosure requirements. The focus is on content that could mislead viewers by appearing genuine when it is not.

Disclosures for digitally altered ads will be available in Meta's Ad Library, a searchable database of the paid ads running on the company's platforms. This move is part of Meta's broader effort to combat deceptive AI-generated content and deepfakes in political advertising.

Nick Clegg, President of Global Affairs at Meta, emphasized that "advertisers running these ads do not need to disclose when content is digitally created or altered in ways that are inconsequential or immaterial to the claim, assertion, or issue raised in the ad."

These new rules follow Meta's decision to bar political and social issue campaigns from using its generative AI advertising tools. Those tools, which let advertisers generate multiple versions of creative assets and adjust images automatically, will also be unavailable for potentially sensitive topics across various regulated industries.

While Meta is taking active steps to regulate the use of AI in political and social issue ads, some platforms, such as TikTok, have steered clear of political advertising altogether, banning paid political content in both ads and branded content. These moves respond to growing concern over the potential impact of deepfakes on political discourse and public perception. The battle against misleading political content is heating up, and Meta is determined to be at the forefront of this critical effort.
