In a controversial move, YouTube announced a change to its misinformation policy, stating that it will now permit content that falsely alleges fraud in the 2020 US Presidential Election. YouTube claims it “carefully deliberated this change” but has not offered detailed reasons for the decision. The video-sharing platform had prohibited such content since December 2020.
YouTube suggested that a shifting landscape had prompted a reassessment of the policy’s impact. While acknowledging that removing such content curbed some misinformation, the company also raised concerns that the policy unintentionally stifled political speech without substantially reducing the risk of violence or other real-world harm. With campaigns for the 2024 elections already underway, YouTube stated it would stop removing content that propagates false claims of widespread fraud, errors, or glitches in the 2020 and other past US Presidential elections.
Critics of the change have pointed to the societal harm that misinformation and disinformation can cause: they can distort perceptions of reality and contribute to the rise of authoritarian movements. Critics also worry that false claims about election integrity can facilitate the enactment of laws restricting voting access, effectively promoting voter suppression under the guise of “election security.”
The policy shift coincides with the continued circulation of false claims about the 2020 election results by figures including Donald Trump, the front-runner for the 2024 Republican nomination. If YouTube possesses data suggesting that the spread of election denialism is not harmful, critics argue, the company should make that information public. Absent such evidence, skepticism about the motivations behind YouTube’s “careful deliberation” is likely to persist.