In an effort to tackle online disinformation, the European Union (EU) is urging companies committed to its Code of Practice on Online Disinformation to label content generated by artificial intelligence (AI) clearly and promptly.
The EU's values and transparency commissioner, Vera Jourova, stressed the need for technology that can recognise and label AI-generated content during a recent meeting with more than 40 signatories to the Code. Jourova highlighted both the positive potential of new AI technologies, such as increased efficiency and creative expression, and the risks they present, including the spread of disinformation.
"When it comes to AI production, I don't see any right for the machines to have freedom of speech.
— European Commission (@EU_Commission) June 5, 2023
Signatories of the EU Code of Practice against disinformation should put in place technology to recognise AI content and clearly label it to users."
— Vice-President @VeraJourova pic.twitter.com/yLVp79bqEH
A range of AI technologies, including advanced chatbots such as ChatGPT, image generators, and voice-generating software, can produce misleading content and visuals in a matter of seconds. In response to these new challenges, the commissioner asked signatories to establish a dedicated discussion within the Code to address the issue.
The commissioner outlined two main discussion points for adding measures against AI-generated disinformation to the Code. The first pertains to services that incorporate generative AI, such as Microsoft's New Bing or Google's Bard AI-augmented search services, and their commitment to developing the necessary safeguards against misuse. The second would require signatories whose services could be used to disseminate AI-generated disinformation to deploy technology that recognises such content and labels it clearly for users.
Under the legally binding Digital Services Act (DSA), certain provisions already require very large online platforms (VLOPs) to label manipulated audio and imagery. Adding labelling obligations to the disinformation Code is intended to expedite this process, bringing it into effect even sooner than the August 25 compliance deadline for VLOPs under the DSA.
The Code now counts 44 signatories, including tech giants like Google, Facebook, and Microsoft, as well as smaller adtech entities and civil society organisations. However, in a surprising move, Twitter recently withdrew from the voluntary EU Code.
In addition to the issue of AI-generated disinformation, the meeting touched upon other pressing concerns, such as Russia's war propaganda, pro-Kremlin disinformation, the need for consistent moderation and fact-checking, efforts on election security, and the accessibility of data for researchers.