In response to a recent AI-created attack ad, US politicians are taking action to demand more transparency in political advertising. New York Democratic House Representative Yvette Clarke has introduced the REAL Political Ads Act, a bill that would mandate clear disclosure of generative AI use in political ads through audio or text. As an amendment to the Federal Election Campaign Act, the proposed law would also task the Federal Election Commission (FEC) with creating regulations to enforce it. The measure is set to take effect on January 1st, 2024, regardless of whether those rules are in place by then.
The bill's primary aim is to combat misinformation. Clarke emphasizes the urgency of addressing this issue before the 2024 election, as generative AI has the potential to "manipulate and deceive people on a large scale." She asserts that unchecked use of AI could have a "devastating" impact on elections and national security, noting that current laws have not kept pace with technological advancements.
This proposed legislation follows the use of AI-generated visuals in a political ad by Republicans, which speculated on the potential outcomes of President Biden's second term. Although the ad features a faint disclaimer mentioning its use of AI imagery, there is concern that future advertisers might forgo disclaimers or present false information about past events.
Efforts to regulate AI are already underway, with California Rep. Ted Lieu introducing a measure addressing broader AI usage and the National Telecommunications and Information Administration (NTIA) seeking public input on potential AI accountability rules. Clarke's bill, however, is more targeted and designed to pass quickly.
While the bill's success remains uncertain, as it must clear a vote in the Republican-led House and a Senate equivalent before reaching the President's desk, its passage could discourage politicians and political action committees from using AI to deceive voters.