Brussels — The European Times – European Ombudswoman Teresa Anjinho has launched an inquiry into the European Commission’s management of transparency, inclusivity, and accountability in the development of harmonised standards for artificial intelligence (AI), a key element in implementing the upcoming EU AI Act.
The investigation, registered as Case 1974/2025/MIK, was opened following a complaint from a civil society organisation about opaque practices within European standardisation bodies. The complainant noted that these bodies are not required to disclose the identities of participants or to publish meeting minutes, shielding the process from public scrutiny.
Questions for the Commission
As a first step, the Ombudswoman has asked the Commission to clarify:
- the composition and selection criteria of the expert groups;
- the transparency and participation rules applied by the European Committee for Standardisation (CEN) and the European Committee for Electrotechnical Standardisation (CENELEC);
- the methods the Commission uses to monitor and evaluate the standardisation process; and
- the mechanisms in place to review and ensure stakeholder balance.
Her office also asked for internal documents related to the Commission’s oversight of these procedures.
Why the issue matters
Harmonised standards provide a technical path to legal compliance in EU legislation. Once recognised by the Commission, they enable companies to prove their products, services, or algorithms comply with the bloc’s regulatory requirements, effectively acting as gateways to the EU market.
The Commission’s May 2023 standardisation request tasked CEN and CENELEC with translating the AI Act’s principles into workable technical norms, ensuring the resulting standards minimise risks to safety and fundamental rights. However, these private bodies are now addressing high-impact ethical and social questions usually reserved for policymakers, raising concerns about democratic oversight.
A test for EU accountability in AI
The Ombudswoman’s inquiry reflects a growing concern that technical standardisation, often presented as neutral, can shape how AI systems affect human rights and the public interest across Europe. Her findings may determine whether the EU’s commitment to “trustworthy AI” aligns with transparent and inclusive governance.