Transparency reporting obligations for providers of intermediary services
Chapter 3|Due Diligence Obligations - All Intermediary Services
1. Providers of intermediary services shall make publicly available, in a clear, easily comprehensible and detailed manner, at least once a year, reports on any content moderation that they engaged in during the relevant period. Those reports shall include, in particular, information on:
(a) the number of orders received from Member State authorities, categorised by the type of illegal content concerned, the Member State issuing the order, the median time needed to inform the authority of the outcome, and the outcome of the orders;
(b) any notices submitted in accordance with Article 16, including the number of notices categorised by the type of alleged illegal content, the median time needed to take the decision, the type of content moderation measures taken and the outcome of the notices;
(c) the number of complaints received through internal complaint-handling systems, the basis for such complaints, the decisions taken, the median time needed to take those decisions, and the number of instances in which those decisions were reversed;
(d) the use of automated means for content moderation, including a qualitative description of those means, a specification of their precise purposes, indicators of their accuracy and possible rate of error, and any safeguards applied;
(e) the number of suspensions or terminations of service;
(f) the number and type of measures taken that affect the availability, visibility and accessibility of information.
2. The Commission may adopt implementing acts specifying the templates for the reports.
Understanding This Article
Article 15 establishes comprehensive transparency reporting requirements for all intermediary services, creating public accountability for content moderation practices. These annual reports provide crucial visibility into how providers handle illegal content, apply their terms, and respond to users and authorities.
The reporting covers the entire content moderation ecosystem: orders from governments (Article 9), notices from users (Article 16), internal complaints (Article 20), automated moderation tools, account actions, and content visibility restrictions. This holistic approach prevents providers from selectively disclosing favorable data while hiding problematic practices.
Critically, reports must include both quantitative data (numbers of orders, notices and complaints, and median response times) and qualitative information (the types of content concerned, the measures taken and the outcomes). Timing metrics reveal whether providers act expeditiously as required. Outcomes show whether providers comply with orders or reject invalid requests.
The automated moderation disclosure requirement addresses the 'black box' problem. Users and regulators need to know when and how algorithms make content decisions. Providers must qualitatively describe these systems - their purposes, how they work, what content they target - though technical details that would compromise security need not be disclosed.
These reports enable multiple forms of accountability: public scrutiny by civil society, academic research on platform governance, regulatory oversight by Digital Services Coordinators, and comparative analysis across platforms. They make the previously opaque content moderation process visible and verifiable.
Key Points
All intermediary services must publish annual transparency reports
Must report on orders from authorities and their outcomes
Must report notices from users about illegal content
Must report complaints received and how they were handled
Must describe use of automated content moderation tools
Must report account suspensions and terminations
Reports must be clear, comprehensible, and publicly accessible
Enables accountability and public oversight of moderation practices
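The data points required by Article 15 can be sketched as a minimal report data model. This is purely illustrative: the field names and structure below are assumptions for the sake of example, not the format prescribed by the Regulation (the Commission's implementing templates would govern the actual form and content).

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AuthorityOrders:
    """Article 15(1)(a): orders received from Member State authorities."""
    by_content_type: dict      # e.g. {"copyright": 3, "defamation": 2}
    by_member_state: dict      # e.g. {"DE": 3, "FR": 2}
    median_response_hours: float
    outcomes: dict             # e.g. {"complied": 5}

@dataclass
class TransparencyReport:
    """One annual report covering the categories listed in Article 15(1)."""
    provider: str
    period: str
    orders: AuthorityOrders
    notices: dict              # Article 16 notices, categorised
    complaints: dict           # internal complaint-handling (Article 20)
    automated_moderation: str  # qualitative description of automated means
    suspensions: dict          # suspensions and terminations of service
    visibility_measures: dict  # measures affecting availability/visibility

# Hypothetical small-provider figures, invented for illustration
report = TransparencyReport(
    provider="example-host",
    period="2024",
    orders=AuthorityOrders(
        by_content_type={"copyright": 3, "defamation": 2},
        by_member_state={"DE": 3, "FR": 2},
        median_response_hours=12.0,
        outcomes={"complied": 5},
    ),
    notices={"received": 23, "removed": 5},
    complaints={},
    automated_moderation="None used.",
    suspensions={},
    visibility_measures={},
)
print(json.dumps(asdict(report), indent=2))
```

Serialising the report to a machine-readable format alongside the human-readable version makes the comparative analysis described below practical for researchers and regulators.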
Practical Application
For Platform Reports: Facebook's transparency report must disclose: '15,234 removal orders from EU authorities (2,345 for hate speech, 8,912 for copyright, 3,977 for illegal products); median response time 18 hours; 85% complied with, 15% challenged.' This granularity enables assessing whether Facebook responds quickly and complies appropriately.
For User Notices: YouTube must report: 'Received 2.5 million Article 16 notices from users; 1.2 million for alleged copyright infringement, 800,000 for alleged hate speech, 500,000 other; median decision time 24 hours; 65% resulted in content removal, 20% content remained available, 15% account action taken.'
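The median decision time cited in these examples is a straightforward calculation over notice timestamps. A minimal sketch, using invented timestamp pairs:

```python
from datetime import datetime
from statistics import median

# (received, decided) timestamp pairs for Article 16 notices -- invented data
notices = [
    ("2024-03-01T09:00", "2024-03-01T15:00"),  # decided in 6 hours
    ("2024-03-02T10:00", "2024-03-03T10:00"),  # decided in 24 hours
    ("2024-03-03T08:00", "2024-03-04T20:00"),  # decided in 36 hours
]

def median_decision_hours(pairs):
    """Median hours between receipt of a notice and the decision on it."""
    hours = [
        (datetime.fromisoformat(d) - datetime.fromisoformat(r)).total_seconds() / 3600
        for r, d in pairs
    ]
    return median(hours)

print(median_decision_hours(notices))  # 24.0
```

The Regulation asks for the median rather than the mean, so a handful of long-running edge cases cannot mask how quickly the typical notice is handled.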
For Automated Moderation: TikTok must describe: 'We use automated systems to detect child safety violations, including: (1) hash-matching against CSAM databases (PhotoDNA/NCMEC); (2) machine learning models trained to identify predatory behavior patterns; (3) age estimation algorithms for age-restricted content. These systems automatically remove flagged content pending human review.' Users learn when AI affects their content.
For Complaints: Instagram must report internal complaint outcomes: 'Received 500,000 complaints about content removal decisions; 200,000 for alleged terms violations, 150,000 for alleged illegal content, 150,000 other; median resolution time 3 days; 15% decisions reversed, 75% upheld, 10% other outcomes.' This reveals whether Instagram meaningfully reviews complaints.
For Account Actions: Reddit must disclose: 'Suspended 50,000 accounts (30,000 for spam, 15,000 for hate speech, 5,000 for other violations); permanently banned 10,000 accounts; restricted posting for 25,000 accounts.' This transparency helps users understand enforcement patterns.
For Content Restrictions: Twitter (X) must report shadowbanning and visibility limitations: 'Applied visibility restrictions to 100,000 posts for violating civic integrity policies; reduced algorithmic distribution for 50,000 accounts due to harmful behavior patterns; applied warning screens to 25,000 posts containing sensitive content.' Previously invisible actions become transparent.
Comparative Analysis: Researchers can compare Facebook's hate speech removal rate (65%) vs Twitter's (72%) vs TikTok's (80%), identifying which platforms most aggressively moderate. Civil society can identify platforms that rarely reverse decisions (potential over-moderation) or have slow response times.
For Small Providers: A small web hosting service reports: 'Received 5 removal orders from courts (3 for copyright, 2 for defamation); median response 12 hours; complied with all 5. Received 23 user notices about hosted content; 18 resulted in customer notification and content review, 5 in content removal. No complaints received. No automated moderation used. No account suspensions.' Even small-scale reporting provides accountability.
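Even at this small scale, the categorised counts and percentages can be produced with a few lines. A sketch using the hypothetical notice outcomes from the example above:

```python
from collections import Counter

# Outcome recorded for each of the 23 user notices -- hypothetical data
notice_outcomes = ["customer review"] * 18 + ["content removed"] * 5

counts = Counter(notice_outcomes)
total = sum(counts.values())
for outcome, n in counts.items():
    print(f"{outcome}: {n} ({n / total:.0%})")
```

Keeping a simple outcome log like this through the year is enough for a small provider to meet the quantitative side of the reporting obligation without dedicated tooling.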