Article 20

Internal complaint-handling system

1. Providers of online platforms shall provide recipients of the service with access to an effective internal complaint-handling system that enables them to lodge complaints against the following decisions taken by the provider:

(a) the removal or disabling of access to specific items of information provided by the recipients;

(b) the restriction of the visibility of specific items of information provided by the recipients;

(c) the suspension, termination or other restriction of monetary payments;

(d) the suspension or termination of the provision of the service in whole or in part to the recipients;

(e) the suspension or termination of the recipients' account.

2. Providers of online platforms shall ensure that their internal complaint-handling systems are easy to access, user-friendly and enable and facilitate the submission of sufficiently precise and adequately substantiated complaints.

3. Providers of online platforms shall handle complaints submitted through their internal complaint-handling systems in a timely, non-discriminatory, diligent and non-arbitrary manner. Where a complaint contains sufficient grounds for the provider to consider that its decision not to act upon the complaint was unfounded or that the information to which the complaint relates is not illegal and is not incompatible with its terms and conditions, or contains information indicating that the complainant's conduct which gave rise to that decision was sufficiently justified, the provider shall reverse its decision referred to in paragraph 1 without undue delay.

4. Providers of online platforms shall inform complainants without undue delay of their reasoned decision in respect of the information or activity to which the complaint relates. Such notification shall also include information on the possibility of out-of-court dispute settlement provided for in Article 21 and on other available possibilities for redress.

5. Providers of online platforms shall ensure that the decisions referred to in paragraph 4 are taken under the supervision of appropriately qualified staff, and not solely on the basis of automated means.

6. Providers of online platforms shall take the necessary measures to facilitate the lodging of complaints for persons with disabilities, in accordance with Union and national law on accessibility requirements.

Understanding This Article

Article 20 establishes THE core user rights mechanism in the DSA: the right to appeal content moderation decisions. Before the DSA, platforms could remove content, suspend accounts, or restrict users with minimal recourse - appeals processes were optional, inconsistent, and often ineffective. Article 20 changes this fundamentally by mandating comprehensive internal complaint systems for all online platforms except the micro and small enterprises exempted under Article 19.

The scope is comprehensive - users can challenge ANY significant moderation action: content removal, visibility restrictions (shadowbanning, demotion), account suspension or termination, payment restrictions (demonetization), or service restrictions. If a platform's decision affects a user's ability to use the service or access content, the user can appeal.

Paragraph 2's accessibility requirements are crucial. Complaint systems must be 'easy to access' - no hidden forms buried in help centers, no labyrinthine processes requiring expert navigation. 'User-friendly' means intuitive interfaces, clear instructions, reasonable evidence requirements. Users shouldn't need legal expertise to file complaints.

Paragraph 3 contains the substantive review standard. Platforms must handle complaints 'in a timely, non-discriminatory, diligent and non-arbitrary manner.' Timely means reasonable response times - days or weeks, not months. Non-discriminatory means treating similar cases similarly. Diligent means careful, thorough review. Non-arbitrary means principled, reasoned decisions based on evidence and policy, not whims or favoritism.

Most importantly, paragraph 3 requires platforms to REVERSE erroneous decisions 'without undue delay' when complaints show the original decision was unfounded, content wasn't illegal/violating, or user conduct was justified. Complaints aren't mere formalities - they must lead to meaningful review and correction of errors.

Paragraph 5 is THE anti-automation provision. While initial moderation decisions can be fully automated (Article 17 merely requires disclosure), complaint reviews CANNOT be purely algorithmic. 'Appropriately qualified staff' must supervise decisions - humans must be involved. This ensures nuanced judgment, contextual understanding, and error correction that algorithms alone cannot provide.

This creates a two-tier system: automated first-line moderation for scale, human-supervised appeals for accuracy. Platforms can use AI to scan millions of posts, but humans must review contested decisions. This balance enables both scale and fairness.

Key Points

  • ALL platforms (except micro/small per Article 19) must provide internal complaint systems
  • Users can appeal content removals, visibility restrictions, account suspensions, payment restrictions, and partial or full service restrictions
  • Complaint systems must be easy to access and user-friendly
  • Platforms must handle complaints in timely, non-discriminatory, diligent manner
  • Platforms must reverse decisions when complaints show sufficient grounds
  • Decisions must involve qualified human staff supervision, not pure automation
  • Users must be informed of outcomes and alternative redress options
  • This is THE fundamental user rights protection mechanism in the DSA

Practical Application

For Content Removal Appeals: When Instagram removes a post for alleged nudity but the user believes it's artistic expression protected by terms/law, the user files an Article 20 complaint. Instagram must provide a clear 'Appeal' button in the removal notification. The appeal form should ask: 'Why do you believe this decision was wrong?' User explains: 'This is a breastfeeding photo, which your policies explicitly allow.' Instagram's compliance team (humans, not just algorithms) reviews the complaint, reassesses the image, and if the complaint has merit, restores the post and notifies the user: 'We reviewed your appeal and restored your post. We apologize for the error.'

For Account Suspension Appeals: TikTok suspends an account for alleged spam. The user appeals: 'I wasn't spamming, I was participating in a viral challenge where everyone posts similar content. This is community engagement, not spam.' TikTok's human reviewers assess the context, check whether the behavior fits spam patterns or legitimate challenge participation, and decide. If the complaint is justified, TikTok immediately reinstates the account and may refine its spam detection to avoid similar errors.

For Visibility Restriction Appeals: Twitter (X) reduces a tweet's reach, claiming it's misleading information. User complains: 'This is opinion, not misinformation. I'm expressing a political view, not stating facts.' Twitter's team reviews: Is this factual claim or opinion? Is there genuine misinformation or just disagreeable perspective? If the tweet is opinion protected by freedom of expression, Twitter restores full visibility and notifies the user of the correction.

For Demonetization Appeals: YouTube demonetizes a video for 'advertiser-unfriendly content.' Creator appeals: 'This is educational content about historical violence, which your policies say is monetizable.' YouTube reviewers watch the video, assess context and tone, determine if it's genuinely educational or exploitative. If educational, they restore monetization and explain: 'We reviewed your video in full context and restored monetization. Our automated systems sometimes miss educational framing.'

For Wrongful Strikes: A YouTuber receives a copyright strike for using public domain music. They appeal with evidence the music is public domain. YouTube must review the appeal with qualified staff (not just automated copyright matching), verify the public domain status, remove the strike, and restore the video. The complaint system enables correcting algorithmic errors.

For Hate Speech False Positives: Facebook removes a comment as hate speech. The user appeals: 'I'm a member of the group being discussed and this is self-referential humor within my community.' Facebook's reviewers assess: context of speaker's identity, group norms, whether this is reclamation vs attack. If the complaint demonstrates the automated system missed context, Facebook restores the comment with explanation: 'We reviewed the context you provided and determined this doesn't violate our hate speech policy.'

For Accessibility Compliance: Platforms must ensure disabled users can file complaints. This means: screen reader compatibility for blind users, alternative input methods for users with motor impairments, clear language for cognitive accessibility, visual clarity for low vision users. A deaf user appealing a video removal shouldn't face barriers like audio-only verification. A blind user should navigate the entire complaint process with assistive technology.

For Qualified Staff Requirements: Platforms must employ or contract with people trained in content policy, regional law variations, cultural context, freedom of expression principles, and nuanced judgment. Junior moderators reviewing routine cases should have senior oversight. Complex cases (political speech, artistic expression, satire, regional law differences) should escalate to experienced specialists. 'Qualified' doesn't mean lawyers for every case, but does mean trained, capable humans supervising decisions.

For Response Timing: Platforms should acknowledge complaint receipt immediately (automated confirmation) and provide decisions within reasonable timeframes. For urgent matters (wrongful account termination affecting livelihoods, time-sensitive content), 24-48 hours. For routine appeals (single post removals), 3-7 days. For complex cases requiring legal analysis, potentially longer but with interim status updates. Users shouldn't wait months in limbo.
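The timing tiers above can be expressed as a simple SLA table. To be clear, the DSA itself sets no fixed deadlines; the numbers below are the illustrative figures from this section, not legal thresholds:

```python
from datetime import datetime, timedelta

# Illustrative SLA tiers only - these mirror the rough guidance above
# and are assumptions, not deadlines fixed by the DSA.
SLA_HOURS = {
    "urgent": 48,        # e.g. wrongful account termination affecting livelihood
    "routine": 7 * 24,   # e.g. a single post removal
    "complex": 21 * 24,  # legal analysis needed; interim status updates expected
}

def response_deadline(category: str, received_at: datetime) -> datetime:
    """Return the target decision deadline for a complaint category."""
    return received_at + timedelta(hours=SLA_HOURS[category])
```

A platform would pair this with an immediate automated acknowledgement on receipt, with the deadline driving escalation if a human decision has not issued in time.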

For Mass Appeals: When platforms make systemic errors (over-aggressive algorithm updates, policy interpretation mistakes), they might receive thousands of similar appeals. Rather than reviewing each identically, platforms should identify the pattern, fix the systemic issue, and proactively restore affected content. However, each user should still receive individual notification of the outcome.
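A minimal sketch of that pattern-based handling, assuming each appeal records which moderation rule triggered the original decision (a hypothetical data shape, not any platform's real schema):

```python
from collections import defaultdict

def handle_mass_appeals(appeals):
    """Group appeals by the moderation rule that triggered the original
    decision, so a systemic error can be fixed once while every affected
    user still receives an individual outcome notification.

    Each appeal is assumed to be a dict with 'user_id' and 'trigger_rule'."""
    by_rule = defaultdict(list)
    for appeal in appeals:
        by_rule[appeal["trigger_rule"]].append(appeal)

    notifications = []
    for rule, group in by_rule.items():
        # In a real system, the flagged rule itself would be reviewed and
        # corrected here before the batch restoration.
        for appeal in group:
            notifications.append({
                "user_id": appeal["user_id"],
                "rule": rule,
                "status": "restored",
            })
    return by_rule, notifications
```

The key property is one notification per user even when the review happened once per rule, matching the requirement that each user receive an individual outcome.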

For Repeat Offenders: If a user repeatedly appeals frivolous complaints (appealing clear violations without legitimate grounds), platforms can implement reasonable safeguards like rate limiting or requiring more detailed explanations, but cannot eliminate appeal rights entirely. The system must remain accessible to legitimate complainants even while managing abuse.
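One way to implement such a safeguard is a sliding-window check that escalates the evidence requirement rather than ever blocking appeals outright. This is a sketch under assumed thresholds; the numbers are illustrative, not prescribed anywhere:

```python
from collections import deque
import time

class AppealSafeguard:
    """Sliding-window safeguard: beyond a threshold of appeals per window,
    require a more detailed explanation instead of refusing the appeal,
    since appeal rights cannot be eliminated entirely."""

    def __init__(self, max_quick_appeals=5, window_seconds=86400):
        self.max_quick = max_quick_appeals   # illustrative threshold
        self.window = window_seconds         # illustrative 24h window
        self.history = {}                    # user_id -> deque of timestamps

    def requirement(self, user_id, now=None):
        """Record an appeal attempt and return which form the user must
        complete: 'standard_form' normally, 'detailed_statement' once the
        rate threshold is exceeded."""
        now = time.time() if now is None else now
        q = self.history.setdefault(user_id, deque())
        while q and now - q[0] > self.window:
            q.popleft()  # drop attempts outside the window
        q.append(now)
        return "detailed_statement" if len(q) > self.max_quick else "standard_form"
```

Note that every branch still accepts the appeal; the safeguard only raises the substantiation bar, which is what "reasonable safeguards" without eliminating rights implies.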

For Cross-Platform Coordination: A user banned from YouTube for content violations cannot appeal to Instagram about YouTube's decision - each platform operates an independent complaint system. However, if a user believes decisions on multiple platforms stem from the same unfair targeting, they might file complaints with each platform separately and potentially escalate to out-of-court dispute settlement (Article 21) or regulatory complaints if patterns suggest systematic rights violations.

For Small Platforms: While micro and small enterprises are exempt (Article 19), medium-sized platforms (broadly, those between 50 and 250 employees) must still implement Article 20. Their systems can be simpler - email-based appeals to a small team rather than sophisticated portals - but must meet the core requirements: accessibility, timely review, human supervision, reasoned decisions, and meaningful error correction.