Chapter 3: Due Diligence Obligations - Online Platforms
1. Providers of online platforms shall suspend, for a reasonable period of time and after having issued a prior warning, the provision of their service to recipients of the service that frequently provide manifestly illegal content.
2. Providers of online platforms shall suspend, for a reasonable period of time and after having issued a prior warning, the processing of notices and complaints submitted through the notification and action mechanisms and the internal complaint-handling systems provided for in Articles 16 and 20 by individuals or entities or by complainants that frequently submit notices or complaints that are manifestly unfounded.
3. When assessing whether a recipient, an individual, an entity or a complainant engaged in the misuse described in paragraphs 1 and 2, providers of online platforms shall apply the criteria set out in those paragraphs in a timely, diligent, non-arbitrary and objective manner. Where the provider assesses that the behaviour has a potential to cause serious harm, they shall suspend the provision of the service or the processing of notices and complaints promptly.
4. The assessment shall take account of all relevant facts and circumstances apparent from the information available to the provider of the online platform. Those circumstances shall include at least the following:
(a) the absolute numbers of items of manifestly illegal content or manifestly unfounded notices or complaints, submitted within a given timeframe;
(b) their relative proportion to the total number of items of information provided or notices submitted within a given timeframe;
(c) the severity of the misuses and their consequences;
(d) where it can be identified, the intention of the recipient, individual, entity or complainant.
5. Providers of online platforms shall set out in a clear and detailed manner, in their terms and conditions, their policy in respect of the misuse referred to in paragraphs 1 and 2, including an explanation of the assessment and decision-making process, and the possible limitations of the use of their service.
Understanding This Article
Article 23 addresses a critical platform management challenge: what to do about users who chronically abuse platform services, either by repeatedly posting clearly illegal content or by filing masses of frivolous reports/complaints. These bad actors can overwhelm moderation systems, waste platform resources, harass innocent users, and undermine legitimate reporting mechanisms. Article 23 authorizes platforms to temporarily suspend these abusive users.
Paragraph 1 targets serial illegal content posters - users who persistently upload material that's 'manifestly illegal' (clearly, obviously illegal without need for complex legal analysis). The classic example is accounts repeatedly posting CSAM, terrorist propaganda, or counterfeit products despite removals and warnings. Platforms can suspend such accounts for 'reasonable periods' after warnings.
Paragraph 2 targets weaponized reporting - users or entities filing massive numbers of 'manifestly unfounded' notices or complaints to harass competitors, silence critics, or overwhelm platform systems. For example, someone submitting hundreds of copyright claims against content they don't own, or mass-reporting competitor products as counterfeit when they're legitimate. Platforms can suspend these abusers' ability to file reports.
The 'manifestly' qualifier is crucial in both paragraphs. Content/reports must be CLEARLY illegal/unfounded, not borderline or debatable cases. Article 23 doesn't authorize punishment for good-faith errors or edge cases - only obvious, egregious abuse.
Prior warning requirements (in both paragraphs) provide due process. Platforms cannot suspend immediately; they must first warn users that continued behavior will trigger suspension, giving them an opportunity to correct course before facing consequences. However, paragraph 3 includes an exception: when behavior 'has a potential to cause serious harm,' platforms may suspend promptly without an extended warning period (e.g., ongoing CSAM posting).
Paragraph 3's assessment standards - 'timely, diligent, non-arbitrary and objective' - prevent discriminatory or capricious enforcement. Platforms must apply consistent standards based on evidence, not favoritism or bias.
Paragraph 4 specifies the assessment factors: (a) absolute numbers (how many violations or frivolous reports); (b) relative proportion (compared to the user's total activity - 100 violations out of 10,000 posts is different from 100 out of 150); (c) severity (CSAM is more severe than minor terms violations); (d) intent, when identifiable (deliberate, malicious abuse versus careless mistakes). This multi-factor analysis ensures nuanced, fair decisions.
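As an illustration only, a platform's trust-and-safety pipeline might encode the four paragraph 4 factors roughly as follows. All thresholds, field names, and the `MisuseRecord` type here are hypothetical assumptions for the sketch, not values the DSA prescribes:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MisuseRecord:
    violations: int             # manifestly illegal items or manifestly unfounded notices
    total_items: int            # total items of information or notices in the same timeframe
    severity: str               # "minor", "serious", or "grave" (illustrative scale)
    deliberate: Optional[bool]  # intent where identifiable; None if unknown

def meets_misuse_threshold(r: MisuseRecord) -> bool:
    """Apply the four Article 23(4) criteria: (a) absolute numbers,
    (b) relative proportion, (c) severity, (d) intent."""
    proportion = r.violations / r.total_items if r.total_items else 0.0
    # (a) + (b): enough violations in both absolute and relative terms
    frequent = r.violations >= 10 and proportion >= 0.05
    # (c): grave misuse (e.g. CSAM) lowers the numeric bar
    if r.severity == "grave":
        frequent = r.violations >= 3
    # (d): documented good-faith error weighs against suspension
    if r.deliberate is False:
        return False
    return frequent
```

Running the counterfeit-seller numbers from the scenarios below (50 violations out of 75 listings, deliberate) crosses this hypothetical threshold, while 15 minor violations across 100,000 posts does not.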
Critically, suspensions must be temporary ('reasonable period'), not permanent bans. Article 23 isn't authorization for permanent account termination based on repeat violations - it's a cooling-off mechanism. Platforms might suspend for days, weeks, or months depending on severity, but not indefinitely. (Though platforms retain separate authority under their terms for permanent bans in extreme cases.)
Paragraph 5 requires transparency - platforms must explain their misuse policies, assessment processes, warning procedures, and suspension durations in their terms and conditions. Users need to understand the rules and consequences before potentially violating them.
Key Points
Platforms can suspend users who repeatedly post manifestly illegal content
Platforms can suspend processing of notices/complaints from serial abusers
Must issue prior warning before suspension
Suspension must be for reasonable period, not permanent ban
Assessment must be timely, diligent, non-arbitrary, and objective
Must consider absolute numbers, proportions, severity, and user intent
Enables platforms to protect against weaponized reporting and serial offenders
Policies must be clearly explained in terms and conditions
Practical Application
For Serial CSAM Posters: A TikTok user repeatedly uploads child sexual abuse material. TikTok removes it each time, reports it to authorities per Article 18, and issues warnings. After the third instance, TikTok suspends the account for 90 days under Article 23(1). The content is manifestly illegal (CSAM), repeated despite warnings, and causes serious harm (child exploitation), justifying suspension. If the user returns after the suspension period and posts CSAM again, TikTok may ban the account permanently (though that goes beyond Article 23 into general terms enforcement).
For Counterfeit Sellers: An Amazon marketplace seller repeatedly lists counterfeit luxury goods despite removals. Amazon warns: 'Further counterfeit listings will result in account suspension.' The seller continues. Amazon assesses under Article 23: the seller posted 50 counterfeit listings out of 75 total (a 67% proportion), the products are manifestly illegal (trademark infringement), and the behavior is deliberate (the seller knows the items are fake). Amazon suspends selling privileges for 60 days.
For Weaponized Copyright Claims: A user submits 500 copyright takedown notices to YouTube claiming ownership of videos they don't own - apparently targeting a competitor's channel. YouTube investigates, finds 495 of 500 claims are manifestly unfounded (user doesn't own copyrights, used false information). YouTube warns the user their notice-filing privileges will be suspended. User continues filing false claims. YouTube suspends user's ability to file copyright notices for 180 days under Article 23(2). The user can still use YouTube, upload videos, etc., but cannot file copyright claims during suspension.
For Competitor Harassment: An e-commerce seller mass-reports competitor products as violating safety standards, filing hundreds of Article 16 notices. The platform investigates and determines 90% are manifestly unfounded - products meet all safety requirements, seller is making false reports to harm competitor. Platform warns the seller, then suspends their ability to file product reports for 120 days. Seller can still sell their own products but cannot file complaints about others.
For Prior Warning Process: A Facebook user posts content that is removed for a hate speech violation. Facebook sends a warning: 'This content violated our hate speech policy. Repeated violations may result in account suspension.' The user posts similar content again and receives a second warning. A third violation triggers a 7-day account suspension under Article 23(1). The progression (warning → warning → suspension) provides due process.
For Proportionality Assessment: An Instagram user posts 100,000 photos over several years; 15 are removed for terms violations. The platform assesses: 15 violations out of 100,000 posts (a 0.015% proportion), the violations are relatively minor (no illegal content, just community guidelines), they are spread over years, and there is no pattern of intent. The platform determines this doesn't justify an Article 23 suspension - the proportion and severity don't meet the threshold for 'frequent' misuse.
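The proportionality arithmetic in that scenario is simple to check; a minimal sketch, where the 0.1% cutoff is a hypothetical platform policy figure, not anything the DSA specifies:

```python
# Proportionality arithmetic for the scenario above.
# The 0.1% cutoff is a hypothetical platform policy, not a DSA figure.
violations, total_posts = 15, 100_000
proportion = violations / total_posts
print(f"proportion = {proportion:.3%}")  # proportion = 0.015%
# Well below a 0.1% policy threshold, so no Article 23 suspension.
assert proportion < 0.001
```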
For Serious Harm Exception: Twitter (X) account begins posting terrorist propaganda with specific, credible threats of imminent violence. Twitter determines this has 'potential to cause serious harm' (paragraph 3). Rather than going through extended warning process, Twitter immediately suspends the account and reports to law enforcement per Article 18. The serious harm exception allows bypassing normal prior warning procedures.
For Complaint System Abuse: A user files 200 Article 20 internal complaints appealing every single moderation action across their account, including clear violations (CSAM, terrorism, obvious counterfeits). 195 complaints are manifestly unfounded - not even arguable cases, just wasting platform resources. Reddit warns: 'You're filing frivolous appeals. Continued abuse will suspend appeal rights.' User continues. Reddit suspends user's Article 20 complaint filing for 90 days. User's content moderation still happens, they receive Article 17 reason statements, but can't file internal appeals during suspension.
For Intent Assessment: A user files 50 illegal content reports, and 40 turn out unfounded. However, investigation reveals the user made good-faith mistakes - they genuinely believed content violated rules based on reasonable but incorrect interpretation. Platform determines there's no malicious intent - just over-zealous reporting. Article 23 requires considering intent (paragraph 4(d)). Platform may provide education about proper reporting rather than suspension, since there's no bad faith abuse.
For Temporary Nature: YouTube suspends a channel for 30 days due to repeat copyright violations. After the suspension ends, the channel returns. If violations continue, YouTube might suspend again for a longer period (60 days, 90 days, etc.). But Article 23 suspensions are always temporary cooling-off periods; permanent termination requires a different justification (though platforms have that authority under general terms enforcement).
For Transparency Requirements: TikTok's terms must clearly state: 'Repeated posting of illegal content will result in warnings followed by temporary account suspension. We assess: number of violations, proportion to total content, severity, and intent. First suspension: 7 days. Second suspension: 30 days. Third suspension: 90 days. Similarly, filing manifestly unfounded reports will result in suspension of reporting privileges after warnings.'
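An escalation schedule like the one in that sample clause could be expressed as a simple lookup. The tier durations here mirror the hypothetical terms above; Article 23 mandates no specific lengths, only that periods be 'reasonable':

```python
# Hypothetical escalating-suspension schedule; tiers are illustrative.
SUSPENSION_DAYS = [7, 30, 90]

def suspension_duration(prior_suspensions: int) -> int:
    """Return the next suspension length in days, capped at the top tier."""
    tier = min(prior_suspensions, len(SUSPENSION_DAYS) - 1)
    return SUSPENSION_DAYS[tier]
```

A first suspension maps to 7 days, a second to 30, and every subsequent one to 90 - always a finite, temporary period, consistent with the article's 'reasonable period' requirement.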
For Civil Society Concerns: If a platform suspends a journalist's account claiming they 'repeatedly post illegal content' but the content is clearly lawful reporting on government corruption, the suspension is arbitrary misuse of Article 23. The content wasn't 'manifestly illegal' - platform applied Article 23 pretextually to silence legitimate journalism. Users can challenge such abuse through Article 20 appeals, Article 21 dispute settlement, Article 53 judicial remedies, and regulatory complaints.