Article 22

Trusted flaggers

1. Providers of online platforms shall take the necessary technical and organisational measures to ensure that notices submitted by trusted flaggers through the mechanisms referred to in Article 16 are given priority and are processed and decided upon without undue delay.

2. The status of trusted flagger under this Regulation shall be awarded, upon application by any entity, by the Digital Services Coordinator of the Member State in which the applicant is established, to an entity that has demonstrated that it meets all of the following conditions:

(a) it has particular expertise and competence for the purposes of detecting, identifying and notifying illegal content;

(b) it is independent from any provider of online platforms;

(c) it carries out its activities for the purposes of submitting notices diligently, accurately and objectively.

3. Trusted flaggers shall publish annual transparency reports detailing their reporting activities, including the number of notices submitted pursuant to Article 16, categorised by:

(a) the type of alleged illegal content notified;

(b) the type of online platform to which the notices were submitted;

(c) the action taken by providers of online platforms in response to those notices.

4. Digital Services Coordinators shall communicate to the Commission, without undue delay, information necessary to identify all trusted flaggers that they have designated, together with their Member State of establishment and the relevant area(s) of expertise. The Commission shall make that information publicly accessible in a central database and shall publish an annual overview of the activities of the trusted flaggers.

5. Trusted flaggers shall cooperate with the Digital Services Coordinators and, where relevant, with other competent authorities, for the purpose of facilitating the enforcement of this Regulation.

6. Where a provider of online platforms has information indicating that a trusted flagger has submitted a significant number of insufficiently precise, inaccurate or inadequately substantiated notices, it shall communicate that information to the Digital Services Coordinator that awarded the status of trusted flagger to the entity concerned, providing the necessary explanations and supporting documents.

7. The Digital Services Coordinator that awarded the status of trusted flagger shall revoke that status if it determines that the entity which has been awarded such status no longer meets the conditions set out in paragraph 2. Before revoking that status, the Digital Services Coordinator shall give that entity an opportunity to react to the findings and to remedy any identified issues.

Understanding This Article

Article 22 creates a 'trusted flagger' status for organizations with proven expertise in identifying illegal content. This addresses a key content moderation challenge: platforms receive millions of user reports daily, many of them low-quality, making it difficult to prioritize truly problematic content. Trusted flaggers cut through the noise, providing high-confidence reports that platforms must process without undue delay.

The system recognizes that certain entities - NGOs combating child exploitation, organizations monitoring hate speech, consumer protection groups tracking scams, IP rights organizations identifying counterfeits - develop deep expertise in specific illegal content types. Their reports are inherently more reliable than average user reports. Article 22 formalizes this reality, requiring platforms to treat these expert reports with priority.

Digital Services Coordinators award trusted flagger status after verifying three conditions: (1) particular expertise and competence in detecting and identifying specific types of illegal content; (2) independence from any provider of online platforms (no financial ties or conflicts of interest that could compromise objectivity); and (3) diligent, accurate and objective reporting practices, evidenced by a track record of high-quality notices. This vetting ensures that only genuinely expert, reliable entities receive the status.

Priority processing (paragraph 1) means platforms must handle trusted flagger notices faster than ordinary user reports and with greater attention. While ordinary reports might wait in queues for days, trusted flagger notices should receive near-immediate human review. Platforms cannot treat trusted flagger reports the same as ordinary users' reports - doing so would defeat the entire purpose.
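The Regulation prescribes no particular mechanism, but the prioritisation described above can be sketched as a priority queue in a platform's notice pipeline. Everything here - the class names, notice fields, and priority values - is an illustrative assumption, not anything the DSA mandates:

```python
import heapq
import itertools
from dataclasses import dataclass, field

# Illustrative priority levels: the DSA only requires that trusted
# flagger notices are "given priority", not any specific scheme.
TRUSTED_FLAGGER = 0   # reviewed first
ORDINARY_USER = 1

@dataclass(order=True)
class Notice:
    priority: int
    seq: int                                  # tie-breaker: FIFO within a priority level
    content_id: str = field(compare=False)
    submitter: str = field(compare=False)

class NoticeQueue:
    """Toy review queue: trusted flagger notices jump ahead of ordinary ones."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()

    def submit(self, content_id, submitter, trusted=False):
        priority = TRUSTED_FLAGGER if trusted else ORDINARY_USER
        heapq.heappush(self._heap,
                       Notice(priority, next(self._counter), content_id, submitter))

    def next_for_review(self):
        return heapq.heappop(self._heap)

q = NoticeQueue()
q.submit("post-1", "user-a")
q.submit("post-2", "hypothetical-trusted-flagger", trusted=True)
q.submit("post-3", "user-b")
assert q.next_for_review().submitter == "hypothetical-trusted-flagger"
```

Ordinary notices still flow through the same queue in submission order; the trusted flagger's notice simply sorts ahead of them.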

Transparency requirements (paragraph 3) create accountability. Trusted flaggers must publicly report their activities: what they're reporting, to which platforms, and what actions platforms took. This enables external assessment of both flagger quality and platform responsiveness. If a trusted flagger rarely triggers platform action, perhaps their expertise is questionable. If platforms consistently ignore trusted flagger notices, regulators can investigate.

The system includes quality control. If platforms observe degraded notice quality from a trusted flagger (paragraph 6), they can report concerns to the awarding Digital Services Coordinator. The Coordinator can then revoke status (paragraph 7) if the flagger no longer meets requirements. This prevents trusted flagger status from becoming a permanent credential regardless of performance - it's conditional on maintained quality.

The Commission maintains a central database of all trusted flaggers EU-wide, enabling platforms to identify legitimate trusted flaggers and enabling public transparency about who has this status and their areas of expertise.

Key Points

  • Trusted flaggers are expert entities designated by Digital Services Coordinators
  • Their illegal content notices receive priority processing by platforms
  • Must demonstrate expertise, independence from platforms, and accuracy
  • Must publish annual transparency reports on their reporting activities
  • Status can be revoked if quality deteriorates or conditions not met
  • Creates fast-track for high-quality illegal content reports from experts
  • Enables specialized NGOs and organizations to efficiently flag violations
  • Platforms must process trusted flagger notices 'without undue delay'

Practical Application

For NGO Designation: The Internet Watch Foundation (IWF), a UK-based organization specializing in child sexual abuse material, applies through an EU establishment to the Digital Services Coordinator of the Member State where that establishment is located (paragraph 2 requires the applicant to be established in a Member State). They demonstrate: (1) expertise through decades of CSAM identification work, specialized technical tools, and trained analysts; (2) independence as a non-profit with no platform funding or conflicts; (3) a track record of accurate reporting with extremely low false positive rates. Upon verification, they receive trusted flagger status for CSAM, effective for platforms across the EU.

For Priority Processing: When IWF submits an Article 16 notice to Facebook about suspected CSAM, Facebook's systems automatically route it to a specialized CSAM review team for immediate human attention - not to general moderation queues. Review happens within minutes, not hours or days. If confirmed, the content is immediately removed, the account disabled, and the case reported to law enforcement. The trusted flagger designation ensures this critical content gets urgent treatment.

For Hate Speech Organizations: An NGO specializing in antisemitism monitoring applies for trusted flagger status, demonstrating expertise in identifying Holocaust denial, antisemitic conspiracy theories, and coordinated hate campaigns. Once designated, their reports to YouTube about antisemitic content receive priority review. YouTube's hate speech specialists assess these notices expeditiously, recognizing the organization's proven expertise.

For Consumer Protection: A consumer protection organization specializing in online scams and fraud receives trusted flagger status. When they identify marketplace listings on Amazon for counterfeit safety equipment (fake electrical components, fraudulent certifications), their notices trigger immediate Amazon review and swift removal, protecting consumers from dangerous products.

For IP Rights Organizations: Major record labels' anti-piracy organization becomes trusted flagger for copyright content. Their notices to platforms about copyright infringement receive expedited review. However, because copyright can involve fair use complexity, platforms still conduct substantive review - trusted flagger status means priority, not automatic takedown.

For Transparency Reporting: A trusted flagger monitoring extremist content publishes an annual report: 'Submitted 5,247 notices about terrorist content to 8 platforms, including TikTok (1,200 notices, 95% removal rate), YouTube (2,100 notices, 92% removal rate), and Facebook (1,500 notices, 88% removal rate). Content categories: ISIS propaganda (45%), far-right extremism (30%), other (25%).' This data reveals platform responsiveness and flagger activity patterns.
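A report like the one above is, in essence, an aggregation of a flagger's notice log into the paragraph 3 categories. A minimal sketch, assuming a hypothetical notice record format (the field names and sample data are invented for illustration):

```python
from collections import Counter

# Hypothetical raw notice log kept by a trusted flagger.
notices = [
    {"platform": "TikTok",  "category": "ISIS propaganda",     "removed": True},
    {"platform": "YouTube", "category": "far-right extremism", "removed": False},
    {"platform": "TikTok",  "category": "ISIS propaganda",     "removed": True},
]

def annual_report(notices):
    """Aggregate notices into the paragraph 3 categories: type of
    content notified, platform notified, and action taken (here
    summarised as a per-platform removal rate)."""
    by_category = Counter(n["category"] for n in notices)
    by_platform = Counter(n["platform"] for n in notices)
    removal_rate = {}
    for platform in by_platform:
        subset = [n for n in notices if n["platform"] == platform]
        removal_rate[platform] = sum(n["removed"] for n in subset) / len(subset)
    return {
        "total": len(notices),
        "by_category": dict(by_category),
        "by_platform": dict(by_platform),
        "removal_rate": removal_rate,
    }

report = annual_report(notices)
```

On the toy data, the report would show 3 notices total, a 100% removal rate for TikTok and 0% for YouTube - exactly the kind of per-platform responsiveness figures the commentary says regulators and researchers can scrutinise.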

For Quality Degradation: Instagram notices that a trusted flagger (previously focused on trademark counterfeits) has begun submitting notices about competing products that are not actually counterfeit - apparently attempting to remove legitimate competitors. Instagram reports this to the Coordinator with evidence: 'This flagger submitted 200 notices last month; we investigated all of them and found that 150 concerned legitimate products, not counterfeits. This suggests misuse of the status.' The Coordinator investigates and may revoke or suspend the status pending correction.
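The kind of assessment Instagram makes in this example reduces to a simple metric: the share of a flagger's notices that hold up on review. A sketch, with the caveat that the 50% threshold is an invented assumption - paragraph 6 speaks only of "a significant number" of inadequate notices and sets no numeric bar:

```python
def substantiation_rate(valid, total):
    """Share of a flagger's notices that were substantiated on review."""
    if total == 0:
        raise ValueError("no notices to assess")
    return valid / total

# Illustrative threshold only; the Regulation prescribes no figure.
ALERT_THRESHOLD = 0.5

def should_report_to_coordinator(valid, total, threshold=ALERT_THRESHOLD):
    """True if notice quality has degraded enough that the platform
    might communicate its concerns to the awarding Coordinator."""
    return substantiation_rate(valid, total) < threshold

# The example above: 200 notices, only 50 of which were substantiated.
assert should_report_to_coordinator(valid=50, total=200)   # 25% < 50%
```

In practice a platform would weigh volume, severity, and intent rather than a single ratio, but the ratio is what makes "a significant number of inadequate notices" measurable at all.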

For Multi-Platform Coordination: When a trusted flagger identifies a coordinated campaign of illegal content spreading across multiple platforms (Facebook, Twitter, TikTok, YouTube), they can submit notices to all platforms simultaneously. Each platform processes the notice with priority. This enables efficient cross-platform action against distributed threats.

For Specialized Expertise: Different trusted flaggers specialize in different areas: Some focus on CSAM, others on hate speech, others on terrorism, others on IP infringement, others on consumer scams. Platforms benefit from this distributed expertise - no single platform can be expert in everything, but trusted flaggers bring domain-specific knowledge.

For False Accusations: If a trusted flagger is accused of bias or inaccurate reporting, the transparency reports provide accountability. Researchers, journalists, and platforms can analyze reports: What's their accuracy rate? Do they disproportionately target certain viewpoints? This scrutiny helps maintain system integrity.

For Small Platforms: Trusted flagger status is particularly valuable for smaller platforms with limited moderation resources. When a micro-platform receives a trusted flagger notice about CSAM, it can act with confidence knowing the flagger is a rigorously vetted expert - reducing the need for extensive internal analysis that the small platform may lack the capacity to perform.

For Commission Database: Users, researchers, and platforms can access the Commission's database to see: Who are the trusted flaggers? What expertise areas? Which Member State designated them? This enables verification (confirming an entity claiming trusted flagger status actually has it) and transparency (public knowledge of who influences platform moderation through priority reports).
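The verification step described here is conceptually just a lookup against the Commission's public list. A toy in-memory sketch - the entity names, record fields, and function are all hypothetical, loosely mirroring the paragraph 4 data (identity, Member State of establishment, areas of expertise):

```python
# Hypothetical stand-in for the Commission's public database of
# designated trusted flaggers (all entries invented for illustration).
TRUSTED_FLAGGER_DB = {
    "Example CSAM Hotline":   {"member_state": "NL", "expertise": ["CSAM"]},
    "Example Anti-Scam Org":  {"member_state": "FR", "expertise": ["consumer fraud"]},
}

def verify_trusted_flagger(name, claimed_area=None):
    """Check that an entity claiming trusted flagger status is actually
    listed, optionally for the specific area of expertise it claims."""
    record = TRUSTED_FLAGGER_DB.get(name)
    if record is None:
        return False              # not designated at all
    if claimed_area is not None:
        return claimed_area in record["expertise"]
    return True

assert verify_trusted_flagger("Example CSAM Hotline", "CSAM")
assert not verify_trusted_flagger("Unknown Entity")
```

This is the check a platform (or anyone else) can make before extending priority treatment to an entity claiming the status.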