Twitter is working to help users identify potentially "misleading media," particularly as AI-generated images and videos become more common. The company is currently testing Community Notes for media, a feature that brings crowd-sourced fact checks to specific photos and videos.
The new addition lets highly rated Community Notes contributors append notes to images in tweets, functioning similarly to notes on tweets themselves. These labels can provide extra "context" for an image, such as indicating whether a photo is AI-generated or has been manipulated in some way.
From AI-generated images to manipulated videos, it’s common to come across misleading media. Today we’re piloting a feature that puts a superpower into contributors’ hands: Notes on Media

Notes attached to an image will automatically appear on recent & future matching images. pic.twitter.com/89mxYU2Kir

— Community Notes (@CommunityNotes) May 30, 2023
Furthermore, the feature aims to curb the viral spread of such images. Twitter intends for these notes to automatically appear on "recent and future" copies of the same image, even when different users share it in new tweets. Still, Twitter acknowledges that its image matching needs refinement: it is currently tuned to prioritize precision, so it may not identify every matching image.
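Twitter has not disclosed how its matching works, but one common way to recognize "the same" image across re-uploads is perceptual hashing. The sketch below is purely illustrative, not Twitter's method: it reduces an image (here a tiny grayscale grid) to a bit-string fingerprint, and treats two images as matching when the Hamming distance between fingerprints is below a threshold.

```python
# Illustrative "average hash" sketch -- NOT Twitter's actual algorithm.
# Each pixel is compared to the image's mean brightness; the resulting
# bit string is a compact fingerprint that survives small edits.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a bit string."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if v > mean else "0" for v in flat)

def hamming(a, b):
    """Number of differing bits between two equal-length fingerprints."""
    return sum(x != y for x, y in zip(a, b))

# A tiny 4x4 "image" and a uniformly brightened copy of it:
original = [[10, 200, 10, 200],
            [200, 10, 200, 10],
            [10, 200, 10, 200],
            [200, 10, 200, 10]]
brightened = [[v + 20 for v in row] for row in original]

h1, h2 = average_hash(original), average_hash(brightened)
print(hamming(h1, h2))  # 0 -- the fingerprints still match after the edit
```

A strict (low) distance threshold yields the precision-over-recall trade-off the article describes: almost no false matches, at the cost of missing some genuine copies.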
It's important to note that the Community Notes system isn't foolproof. While it has produced nuanced fact checks and debunked false claims, contributors have pointed out that the feature "is not impervious to errors or perpetuating common misconceptions."
At this stage, Twitter is testing notes for media only on tweets containing a single image, though it plans to extend the feature to tweets with multiple images and to videos. Twitter isn't alone in grappling with the rise of AI and the spread of misinformation: Google also recently launched features to trace an image's history in search, helping users identify doctored photos.