Last winter, social media was inundated with disturbing videos claiming that radical Islamists were “invading” Christmas markets across Europe. One video purportedly showed people “disrupting” the opening of the Brussels Christmas market, while an image depicted a market under heavy security, suggesting that Christian traditions were under threat. In reality, the videos showed peaceful demonstrations, and the image was AI-generated. What seemed credible was misleading or entirely fake.
According to a European Commission survey, nearly two-thirds of respondents reported encountering disinformation or fake news recently. With AI tools now capable of creating highly realistic images, videos, and text, distinguishing reality from fiction has become increasingly challenging.
To combat this, a multinational team of researchers and media experts decided to use AI against AI. In 2020, professionals from universities, media companies, and tech firms launched AI4Media, a four-year EU-funded project to develop AI tools that help journalists and fact-checkers verify digital content quickly and reliably. “There is an urgent need to develop AI techniques for the media sector,” said Yiannis Kompatsiaris of the Centre for Research and Technology Hellas (CERTH) in Greece, who coordinated the initiative.
AI has made fabricating convincing fake content far easier: anyone with access to generative AI can create false images, cloned voices, or realistic news articles, and social media platforms then amplify such content quickly. “When a fake story is supported by realistic images, it’s much easier to believe and share,” said Kompatsiaris.
AI4Media developed verification tools designed to plug into newsroom workflows, which were tested by media organizations such as Deutsche Welle in Germany and VRT in Belgium. “Fact-checkers and journalists face suspicious images every day,” said Akis Papadopoulos, also from CERTH, describing the technology as a “first line of defence” that quickly flags potentially manipulated content.
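The article does not detail how that “first line of defence” works, but conceptually it is a scoring-and-triage step: each incoming item receives an estimated probability of manipulation, and anything above a threshold is routed to a human reviewer. The Python sketch below illustrates that idea only; the detector, the `REVIEW_THRESHOLD` value, and all names are hypothetical stand-ins, not AI4Media’s actual tools.

```python
from dataclasses import dataclass

@dataclass
class VerificationResult:
    item_id: str
    score: float   # estimated probability that the item is manipulated
    flagged: bool  # True means: route to a human fact-checker

# Assumed cut-off; a real newsroom tool would tune this against labeled data.
REVIEW_THRESHOLD = 0.7

def triage(item_id: str, detector) -> VerificationResult:
    """Score one incoming item and flag it for review if the risk is high.

    `detector` is any callable returning a manipulation probability in [0, 1];
    the model itself (deepfake classifier, image-forensics network, ...) is
    deliberately out of scope for this sketch.
    """
    score = detector(item_id)
    return VerificationResult(item_id, score, score >= REVIEW_THRESHOLD)

if __name__ == "__main__":
    # Stand-in detector for demonstration purposes only.
    def dummy_detector(item_id: str) -> float:
        return 0.85 if "suspect" in item_id else 0.10

    for item in ["press_photo_001.jpg", "suspect_video_042.mp4"]:
        print(triage(item, dummy_detector))
```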
According to the European Digital Media Observatory, AI-generated disinformation has increased steadily. Coordinated campaigns can influence elections, distort public debate, and undermine trust in institutions.
Identifying manipulated content is only part of the challenge. Understanding how disinformation spreads—who amplifies it, how narratives evolve, and whether campaigns are coordinated—is equally important. “We are in a continuous loop of trying to understand and catch up with the latest technology,” said Riccardo Gallotti of Fondazione Bruno Kessler (FBK), a research center in Italy.
FBK is a partner in AI4Trust, a parallel EU-funded project that analyzes the dynamics of online disinformation. While AI4Media focused on detecting manipulated media, AI4Trust built a hybrid human-machine system to monitor and analyze disinformation at scale. The platform tracks multiple social media and news sites, using AI algorithms to process multilingual and multimodal content.
Because the volume of online material far exceeds what humans can review, the system filters incoming posts and flags those at high risk of being fake. Professional fact-checkers then review the flagged material, and their assessments feed back into the system to improve its performance.
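As a rough illustration of that human-in-the-loop cycle, here is a minimal Python sketch, assuming a placeholder risk scorer and a naive threshold-adjustment rule; neither reflects AI4Trust’s actual implementation, and all names are hypothetical.

```python
import random

class DisinfoTriage:
    """Minimal human-in-the-loop triage loop (all names hypothetical):
    the filter surfaces only high-risk posts, fact-checkers deliver
    verdicts, and those verdicts feed the next update of the filter."""

    def __init__(self, threshold: float = 0.6):
        self.threshold = threshold
        self.verdicts: list[tuple[str, bool]] = []  # (post, is_fake)

    def risk(self, post: str) -> float:
        # Placeholder: a real system would run a multilingual,
        # multimodal classifier over text, images, and video.
        return random.random()

    def flag_for_review(self, posts: list[str]) -> list[str]:
        """Keep only the posts risky enough to merit human review."""
        return [p for p in posts if self.risk(p) >= self.threshold]

    def record_verdict(self, post: str, is_fake: bool) -> None:
        """Store a fact-checker's verdict for the next update round."""
        self.verdicts.append((post, is_fake))

    def update(self) -> None:
        # Naive stand-in for retraining: if most flagged posts turned
        # out to be genuine, raise the threshold to cut false alarms.
        if not self.verdicts:
            return
        fp_rate = sum(not fake for _, fake in self.verdicts) / len(self.verdicts)
        step = 0.05 if fp_rate > 0.5 else -0.05
        self.threshold = min(0.95, max(0.30, self.threshold + step))
        self.verdicts.clear()
```

In production, `update` would stand in for retraining the underlying model on the accumulated labels rather than merely shifting a threshold.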
The projects complement each other: one detects manipulated content, and the other examines its spread. Together, they provide both the detailed and the broad perspective needed to combat AI-powered disinformation.
Using AI to detect AI has a certain irony to it. “It is indeed funny, but it’s like an arms race,” Kompatsiaris said. Generative AI models are advancing rapidly, and as they grow more powerful, detection systems must continuously adapt.
The team automated parts of the verification process and routinely retrained their systems, though staying ahead requires ongoing investment, both in research and in the media sector that relies on these technologies.
However, technology alone is insufficient. “We need tools, but we also need policies and rules,” Kompatsiaris said. The EU’s Digital Services Act obliges large online platforms to assess and mitigate systemic risks, including disinformation, and to increase transparency. The Artificial Intelligence Act mandates transparency for certain AI systems, including the labeling of AI-generated content. A draft Code of Practice aims to set clearer standards for disclosure and watermarking.
Protecting independent journalism is also critical. The European Media Freedom Act ensures that professional media content is recognized and protected online, requiring platforms to notify recognized media outlets before removing journalistic content and to explain their reasons.
Together, these measures create a layered shield: technology to detect manipulation, regulation to ensure transparency and accountability, and safeguards for responsible journalism. Public awareness remains the final, crucial element.
“There is no single solution,” Kompatsiaris concluded. “We need a combination of AI tools, transparency, regulation, and awareness to effectively combat disinformation.”
This article’s research was funded by the EU’s Horizon Programme. The interviewees’ views don’t necessarily reflect those of the European Commission.