The European Union is stepping up its response amid widespread condemnation of privacy and dignity violations caused by Artificial Intelligence (AI).
Brussels is considering classifying the creation of sexual deepfakes as illegal under the Artificial Intelligence Act, following controversies over sexualized images generated by Grok AI, a chatbot developed by Elon Musk’s xAI and deployed on his X platform.
Grok Controversy
After significant international backlash, Musk’s company xAI imposed new restrictions in January on the generation of sexually suggestive AI images via Grok. The decision came in response to criticism that the platform had allowed users to digitally alter images of women to show them in bikinis and even to create sexualized depictions of minors.
The circulation of non-consensual nude images began shortly after Grok’s feature launch, peaking around New Year’s. CNN reported that between January 5 and 6 alone, Grok generated approximately 6,700 sexual images, many involving women or minors.
“Grok is now providing a ‘spicy mode’ for explicit sexual content, with some outputs featuring childlike imagery. This is not spicy; it is illegal and appalling,” remarked EU digital affairs spokesman Thomas Regnier.
The European Commission, acting as the bloc’s digital overseer, stated it would monitor the new measures from X and assess their effectiveness. Officials indicated that if the measures fell short, the EU could utilize its Digital Services Act (DSA) to take action.
European Commission Vice-President Henna Virkkunen said the Commission is considering explicitly prohibiting AI-generated sexual images under the AI Act by classifying them as an unacceptable risk. She noted that the DSA already helps mitigate the risk of non-consensual sexual content being shared online.
Additionally, the Commission has requested information from X regarding Grok as part of its DSA investigation, mandating the preservation of all related internal documents until year-end.
The Commission previously escalated pressure on X, which faced a fine of €120 million for transparency breaches. The EU emphasized its commitment to enforcing regulations despite pushback from the US administration.
“The DSA is explicit; all platforms must address their issues, as the generated content is unacceptable. Compliance with EU law is not optional; it is mandatory,” Regnier stated during the scandal’s peak in early January.
Recently, around 50 Members of the European Parliament urged the Commission to bar AI applications that generate nude images from the EU market.

Dependence on X
Despite the criticism, most senior EU officials continue to use X rather than European alternatives, according to dpa research. European Commission President Ursula von der Leyen and other high-ranking officials have no official accounts on Mastodon, a German-developed alternative, although Virkkunen opened one in January. Other EU politicians are also active on Bluesky, another rising US-based platform.
The Commission continues to defend its presence on X by pointing to the platform’s far greater reach: Mastodon claims approximately 750,000 monthly users, whereas X has around 100 million.
The Long Legal Struggle for Online Safety
The path towards safeguarding minors online in the EU is complex, intertwining privacy concerns, business interests, and child protection. Several regulations intersect in this landscape:
Chat Control
In 2022, the Commission proposed a regulation requiring platforms to detect and report abusive content, specifically child sexual abuse material (CSAM) and attempts by predators to contact minors. Supported by child protection organizations, the “Chat Control” plan ignited intense privacy debates within the 27-member bloc and faced accusations of promoting mass surveillance.
Negotiations for the final legislation are anticipated to commence in early 2026, aiming to reconcile the privacy-centric approach of the Parliament with the EU Council’s preference for extensive voluntary scanning privileges. In the meantime, MEPs have extended temporary voluntary scanning measures to April 2026 to prevent a legal gap.

Digital Services Act
The EU uses the Digital Services Act to penalize online platforms through substantial fines, mandatory operational changes and, if necessary, temporary service suspensions. The DSA requires platforms to combat illegal content, safeguard users, and enhance transparency.
AI Act
The AI Act, adopted in 2024, is the first comprehensive legal framework for AI worldwide, instituting a risk-based approach to regulate AI technologies in the EU, ensuring safety, trustworthiness, and respect for fundamental rights while promoting innovation. It prohibits certain unacceptable AI practices, like social scoring, and establishes rules for high-risk AI applications, including restrictions on manipulative uses such as deepfakes targeting children.
Social Media Restrictions
France is evaluating a ban on social media for children under 15 and has been testing an age verification app developed by the European Commission.