X Corp is taking a stand against allegations that hate speech has proliferated across its platform since Elon Musk's takeover. In response to several reports published by the Center for Countering Digital Hate (CCDH), X Corp has initiated legal action, arguing that the organization's conclusions rest on a narrow sample and do not reflect the platform's overall performance in curbing harmful content.
The CCDH reports, based on data gathered since Musk's acquisition, claim a significant increase in slurs against Black and transgender people. The reports also argue that X Corp's 'Freedom of Speech, Not Reach' approach and lenient enforcement for Twitter Blue subscribers contribute to the rise of harmful content on the platform.
The core objective of X Corp's legal move appears to be to counter the impact of these highly publicized claims on its relationships with advertising partners. There are, however, valid concerns about the CCDH's narrow approach. Both X Corp and Meta, which also came under the CCDH's scrutiny, argue that the organization's limited sample of posts and examples does not provide a representative picture of their broader performance in mitigating hate speech.
These reports have garnered substantial media attention, likely damaging X Corp's brand reputation and advertising partnerships. X Corp aims to challenge that negative perception by taking the CCDH to court. The CCDH, in response, has staunchly defended its findings, calling Musk's legal threat an intimidation tactic aimed at those campaigning against online hate speech.
The CCDH also counters that X Corp, under Musk, has sought to limit third-party research by sharply increasing the cost of API access, inhibiting large-scale analysis of the platform's content. The result is an environment in which the primary source of insight is the data X Corp publishes itself.
Despite the controversy, Musk and his team assert that hate speech has declined significantly under their stewardship. To substantiate those claims, X Corp could release a comprehensive report detailing its enforcement actions and the methodology behind its conclusions.
"We remain committed to maintaining free speech on Twitter, while equally maintaining the health of our platform. Today, more than 99.99% of Tweet impressions are from healthy content, or content that does not violate our rules. Read more about our progress on our enforcement…"
— Safety (@Safety) July 12, 2023
However, the central point of contention remains how hate speech is defined and interpreted by X Corp and its assessment partner, Sprinklr. Sprinklr's model analyzes the context in which identified hate terms are used, which reduces the tally of mentions classified as hate speech. According to its findings, 86% of X posts that included hate speech terms were not deemed harmful or intended to cause harm.
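To make the methodological gap concrete, here is a minimal, hypothetical sketch of the distinction at issue: counting every post that contains a flagged term versus classifying only those posts where the term appears to be used harmfully. The term list, context cues, and rules below are invented purely for illustration and do not represent Sprinklr's or X Corp's actual models.

```python
# Illustrative sketch only: a two-stage filter separating posts that merely
# contain flagged terms from posts judged harmful in context. The term list,
# the mitigating cues, and the classify() rules are hypothetical assumptions,
# not Sprinklr's or X Corp's real methodology.

FLAGGED_TERMS = {"slur_a", "slur_b"}  # placeholder tokens, not real slurs
MITIGATING_CUES = ("condemns", "reporting on", "quoting")  # toy context signals


def contains_flagged_term(post: str) -> bool:
    """Stage 1: raw keyword match, i.e. what a simple mention tally counts."""
    tokens = post.lower().split()
    return any(term in tokens for term in FLAGGED_TERMS)


def classify(post: str) -> str:
    """Stage 2: demote matches that appear in a mitigating context."""
    if not contains_flagged_term(post):
        return "no_flagged_terms"
    lowered = post.lower()
    if any(cue in lowered for cue in MITIGATING_CUES):
        return "mention_not_harmful"  # counted by stage 1, but not as hate speech
    return "potentially_harmful"


if __name__ == "__main__":
    posts = [
        "The article condemns the use of slur_a online.",
        "slur_b you people",
        "Nothing objectionable here.",
    ]
    for p in posts:
        print(classify(p), "->", p)
```

Under a raw keyword tally, both sample posts containing flagged terms would count equally; the context pass separates a mere mention from a potentially harmful use, which is roughly the kind of distinction behind the disputed 86% figure.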
Given the ongoing dispute, transparent, comprehensive data from X Corp could go a long way toward dispelling the controversy. As it stands, pursuing the court case lends credence to the CCDH's claims of intimidation rather than bringing genuine clarity to the issue. As the situation unfolds, it is critical for X Corp to take decisive action that fosters trust among its users and partners.