From Buzzwords to Real Harms: The Complaint Against X
by Claire Stravato EMES

A recent complaint by a consortium of civil society organisations (CSOs), led by AI Forensics, accuses X of violating Article 26(3) of the Digital Services Act (DSA). The core allegation is that X enabled advertisers to target or exclude users on the basis of sensitive personal data, such as political opinions, health status, or trade union affiliation.

Over the past year, civil society organisations have filed multiple complaints under the DSA—targeting not only X, but also LinkedIn, Meta, and Temu—for advertising practices or design choices that potentially violate the regulation’s systemic risk provisions.1 However, we look at this case not just for what it alleges, but for what it helps clarify.

Since its introduction, the term “systemic risk” in the DSA has been widely invoked but rarely pinned down. It is defined broadly—referring to risks to fundamental rights, civic discourse, public security, or minors—yet the law provides little guidance on what exactly to measure or where to look within platforms. This vagueness allows debates to drift between legal platitudes and technical minutiae, leaving regulators, researchers, and users without a shared standard for action. For example, in its announcement, AI Forensics moves between ad mechanics and high-level claims about threats to democracy, which can make it difficult to pinpoint what the real harm is and how it happens.

To make this clearer, we need more than abstract values like “rights” and “democracy.” We need to understand how harm is produced through design and how it is experienced in everyday platform use. The taxonomy developed by Shelby et al. (2023)2 offers a concise, user-centred model for doing just that. It shifts the focus from abstract principles to concrete categories of harm—representational, allocative, quality-of-service, interpersonal, and societal—that help trace how platform design choices turn into real-world risks.

When we apply that approach to the evidence presented in the X complaint, we see how ad system choices map directly to user-level harm. Through this socio-technical lens, the complaint becomes a map of design-enabled risks that regulators can investigate, researchers can test, and users can recognise in their own experience. The table below summarises this perspective.

Framing X’s Violation of Article 26(3)

| Identified Patterns | Kind of Harm | Why Users Should Care | DSA Risk Triggered |
| --- | --- | --- | --- |
| Shein aimed ads only at people who engaged with French political topics. | Representational (political stereotyping); Allocative (withheld offers). | Political labels can block you from seeing the same deals or information others receive. | Fundamental rights; Civic discourse |
| Total Energies excluded users interested in green politicians. | Allocative (lost opportunities); Societal (climate info gap). | Eco-minded users may miss energy offers or climate content, reinforcing silent discrimination. | Civic discourse; Environmental misinformation |
| McDonald’s filtered out people linked to “antidepressant,” “suicide,” or union terms. | Representational (mental health stigma); Interpersonal (privacy breach); Quality-of-service (profiling error). | Inferred traits you never agreed to share can quietly limit what you see or how you’re valued. | Labour rights; Mental health discrimination |
| Brussels Signal targeted users with far-right interests. | Societal (polarisation); Interpersonal (manipulation/harassment). | Micro-targeted political content can deepen echo chambers and increase exposure to hostile messaging. | Civic discourse; Public security |

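Read as data, each row of the table pairs an alleged targeting practice with the harm categories it maps to and the DSA risk areas it implicates. The sketch below is a purely illustrative way of encoding that mapping for audit or research work: the Harm categories follow Shelby et al. (2023), while the TargetingPattern structure, its field names, and the two encoded rows are our own hypothetical shorthand for the table, not anything defined in the complaint or the DSA.

```python
# Illustrative sketch only: encoding the table above as data. Harm categories
# follow Shelby et al. (2023); the class and field names are hypothetical and
# are not taken from the complaint or from the DSA.
from dataclasses import dataclass
from enum import Enum, auto


class Harm(Enum):
    REPRESENTATIONAL = auto()
    ALLOCATIVE = auto()
    QUALITY_OF_SERVICE = auto()
    INTERPERSONAL = auto()
    SOCIETAL = auto()


@dataclass
class TargetingPattern:
    advertiser: str          # who ran the campaign
    practice: str            # what the ad system allegedly did
    harms: list[Harm]        # user-level harm categories it maps to
    dsa_risks: list[str]     # risk areas flagged in the complaint


patterns = [
    TargetingPattern(
        advertiser="Shein",
        practice="Ads shown only to users engaged with French political topics",
        harms=[Harm.REPRESENTATIONAL, Harm.ALLOCATIVE],
        dsa_risks=["Fundamental rights", "Civic discourse"],
    ),
    TargetingPattern(
        advertiser="McDonald's",
        practice="Excluded users linked to mental-health or trade-union terms",
        harms=[Harm.REPRESENTATIONAL, Harm.INTERPERSONAL, Harm.QUALITY_OF_SERVICE],
        dsa_risks=["Labour rights", "Mental health discrimination"],
    ),
]

# One question an auditor might ask of such a dataset: which alleged practices
# involve allocative harm, i.e. withheld offers or opportunities?
print([p.advertiser for p in patterns if Harm.ALLOCATIVE in p.harms])  # ['Shein']
```
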
The complaint raises important questions, but key evidence is still missing: How many users were excluded? How were they profiled? Were the effects lasting or marginal? Under the DSA, it is now X’s responsibility to disclose this data and demonstrate whether these risks materialised or were effectively mitigated.
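
If that data were disclosed, the first of those questions becomes directly measurable. The sketch below is a hypothetical illustration of that step: given per-campaign figures for how many users a sensitive criterion excluded out of the reachable audience, it computes an exclusion rate per category. Every field name and number here is invented for illustration; none of it comes from the complaint or from X.

```python
# Hypothetical illustration: the layout and figures below are invented. If X
# disclosed, per campaign, the sensitive criterion used and how many users it
# excluded out of the reachable audience, a per-category exclusion rate would
# be a straightforward first metric for regulators or researchers.
from collections import defaultdict

disclosures = [
    # (campaign, sensitive_category, users_excluded, reachable_audience)
    ("campaign_a", "political opinion", 120_000, 2_000_000),
    ("campaign_b", "health-related interest", 45_000, 900_000),
    ("campaign_c", "trade union affiliation", 30_000, 750_000),
]

rates = defaultdict(list)
for _, category, excluded, reachable in disclosures:
    rates[category].append(excluded / reachable)

for category, values in rates.items():
    print(f"{category}: mean exclusion rate {sum(values) / len(values):.1%}")
```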

This is where the harm taxonomy is most useful. It doesn’t just describe what harm looks like—it structures it into concrete, measurable categories that guide regulators and researchers on what to investigate. Without this kind of framing, technical audits drown in detail, and legal arguments drift into abstraction. A shared harm vocabulary connects system design to social consequence—and clarifies what still needs to be proven.

1 CSO-initiated DSA complaints:
X (AI Forensics, 2024): accuses X of enabling ad targeting based on sensitive personal data (e.g., political views, health status, union affiliation).
LinkedIn (EDRi, Global Witness, GFF, Bits of Freedom, 2024): complaint over ad targeting using sensitive inferred traits (e.g., sexuality, politics); LinkedIn has since disabled sensitive ad targeting EU-wide.
Meta (Bits of Freedom, EDRi, GFF, Convocation, 2025): complaint filed in Ireland against Meta’s profiling-based feed defaults; case pending.
X (trusted flaggers; EDRi & ApTI, 2025): complaint over the misdirection of trusted flaggers’ reports in non-English languages; the platform began fixing its interface and the case remains open.
Temu (BEUC & 17 national groups, 2024): complaint over dark patterns, traceability failures, and unsafe marketplace conditions; under investigation.

2 Shelby, R., et al. (2023). Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’23).