The platform has defended its revamped content moderation strategy and its “mission to promote open conversation,” initiated following Elon Musk’s takeover in 2022. However, some long-standing users claim the changes have transformed the site into a hub for misinformation and hate speech.
X says it has shifted from “a binary, absolutist take down/leave up moderation framework … to a more reasonable, proportionate, and effective moderation process.” In practice, this approach prioritizes demoting problematic posts and accounts rather than banning them outright.
Despite these measures, the platform acknowledges a persistent “residual risk” of terrorist content, as “extremists” continue to adapt in order to circumvent moderation. Disinformation also remains a concern, particularly during election periods, as strategies for spreading falsehoods “evolve continuously and rapidly.” The company further pointed to the challenges posed by generative AI in fueling misinformation.
Ellen Judson, a researcher at the nonprofit Global Witness, expressed astonishment that X directly admitted the platform poses a high risk to democratic processes, even with its current safety measures in place. “They were aware of this risk ahead of a slew of elections across the EU. These revelations must be at the forefront of [the] European Commission’s ongoing investigation into X,” Judson commented.
The social media giant is already under scrutiny by the European Commission. In late 2023, the EU launched a groundbreaking investigation into whether X failed to properly address toxic content and misinformation. By July 2024, the company faced formal charges for breaching several key requirements of the Digital Services Act (DSA).