Chapter 3 | Due Diligence Obligations - Hosting Services
1. Where a hosting service provider removes or disables access to specific items of information provided by a recipient of the service, or otherwise restricts the visibility of such information, or suspends or terminates the provision of its service to a recipient, on the grounds that the information is illegal content or incompatible with its terms and conditions, it shall without undue delay provide the affected recipient of the service with a clear and specific statement of reasons for its decision.
2. The statement of reasons referred to in paragraph 1 shall at least contain the following information:
(a) information on whether the decision entails either the removal of, the disabling of access to or the restriction of the visibility of the information, or the suspension or termination of the provision of the service, and, where relevant, the territorial scope of the decision and its duration;
(b) the facts and circumstances relied on in taking the decision, including, where relevant, whether the decision was taken pursuant to a notice submitted in accordance with Article 16 or to an order received in accordance with Article 9;
(c) where applicable, information on the use of automated means in taking the decision, including where the decision was taken in respect of content detected or identified using automated means;
(d) where the decision concerns allegedly illegal content, a reference to the legal ground relied on and explanations as to why the information is considered to be illegal content on that ground;
(e) where the decision concerns incompatibility with the terms and conditions of the hosting service provider, a reference to the contractual ground relied on and explanations as to why the information is considered to be incompatible with that ground;
(f) clear and user-friendly information on the possibilities for redress available to the recipient of the service in respect of the decision.
3. The statement of reasons shall be provided to the recipient on a durable medium, which enables the recipient to store the statement for an adequate period for the exercise of the right of redress, and shall be written in a language that may be easily understood by the recipient of the service.
4. This Article shall not apply where the information provided by the recipient is manifestly illegal content. In such cases, the provider shall inform the recipient of its decision and of the reasons therefor, without undue delay.
Understanding This Article
Article 17 is THE transparency cornerstone for content moderation. Every time a hosting provider takes action against content or users - removal, restriction, suspension, termination - they MUST explain the decision to the affected person. This transforms content moderation from an opaque, unaccountable process into a transparent, reviewable system.
Before the DSA, platforms could remove content or ban users with minimal explanation - generic messages like 'This violates our policies' or 'Your account has been suspended.' Users had no real understanding of what they did wrong, what evidence was considered, or how to appeal. Article 17 changes this fundamentally.
The required statement must be comprehensive and specific. It can't just say 'hate speech violation' - it must explain WHAT was said, WHY it's considered hate speech, WHICH legal ground or policy provision was violated, WHAT facts support the decision, and HOW to challenge it. If an algorithm made or informed the decision, this must be disclosed.
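The required elements of a statement of reasons can be sketched as a simple data structure. This is a hypothetical illustration, not a real compliance library: the class and field names are invented here, and each field maps onto one of the points (a)-(f) listed above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StatementOfReasons:
    """Hypothetical model of the Article 17(2) required content."""
    action_taken: str               # (a) removal, visibility restriction, suspension, or termination
    territorial_scope: str          # (a) e.g. "EU-wide", plus duration where relevant
    facts_and_circumstances: str    # (b) what happened, incl. any Article 16 notice or Article 9 order
    automated_means_used: bool      # (c) disclosure of automated detection or decision-making
    legal_ground: Optional[str]     # (d) cited law, where content is allegedly illegal
    contractual_ground: Optional[str]  # (e) cited terms provision, where it is a policy violation
    redress_options: str            # (f) how the user can challenge the decision

    def is_complete(self) -> bool:
        # At least one ground (legal or contractual) must be cited,
        # and every mandatory free-text field must be filled in.
        has_ground = bool(self.legal_ground or self.contractual_ground)
        return has_ground and all(
            [self.action_taken, self.facts_and_circumstances, self.redress_options]
        )
```

A generic message like "This violates our policies" would leave most of these fields empty and fail such a completeness check.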
The 'durable medium' requirement (paragraph 3) ensures users can retain and reference these statements. A durable medium is something permanent users can store - like email, downloadable PDF, or account dashboard archive. Temporary on-screen messages that disappear don't suffice. Users need permanent records for potential appeals or legal challenges.
Language accessibility is crucial - statements must be in a language users can understand. For EU users, this typically means the language they use on the platform. A German user shouldn't receive English-only explanations they can't understand.
The manifestly illegal content exception (paragraph 4) applies only to obviously illegal material where detailed explanation isn't necessary - like CSAM where detailed description would republish abusive content, or ISIS beheading videos. Even here, users must be informed of the decision and basic reasons, just without full statement requirements.
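The branching logic of the exception is simple and can be sketched as follows. This is an illustrative fragment with invented names, not an implementation of any platform's actual pipeline: manifestly illegal content gets only a brief notice under paragraph 4, while everything else triggers the full statement of reasons under paragraphs 1-3.

```python
def notice_type(decision: dict) -> str:
    """Hypothetical sketch: which kind of notice Article 17 requires.

    Paragraph 4: manifestly illegal content (e.g. CSAM, terrorist content)
    gets a brief notice of the decision and basic reasons only.
    Otherwise: the full statement of reasons per paragraph 2.
    """
    if decision.get("manifestly_illegal"):
        return "brief notice"
    return "full statement"
```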
Key Points
Hosting providers must explain ALL content moderation decisions to affected users
Statement must explain what action was taken and why
Must identify facts relied upon, including whether from Article 16 notice or Article 9 order
Must disclose use of automated decision-making
Must cite specific legal grounds (for illegal content) or terms provisions (for policy violations)
Must explain redress options available to user
Statement must be on durable medium and in understandable language
Exception only for manifestly illegal content (CSAM, terrorist content)
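The key points above amount to a pre-dispatch checklist: before a notification is sent, a provider can verify that no mandatory element is missing. The sketch below is hypothetical (the field names are invented for illustration) and simply reports which required elements a draft statement still lacks.

```python
# Elements every statement must carry, per the key points above
# (field names are illustrative, not from any real system).
REQUIRED_ELEMENTS = ["action_taken", "facts", "automation_disclosure", "redress"]

def missing_elements(statement: dict) -> list:
    """Return the Article 17(2) elements missing from a draft statement."""
    missing = [f for f in REQUIRED_ELEMENTS if not statement.get(f)]
    # One kind of ground must be cited: a legal ground for illegal content,
    # or a contractual ground for a terms-and-conditions violation.
    if not (statement.get("legal_ground") or statement.get("contractual_ground")):
        missing.append("legal_or_contractual_ground")
    return missing
```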
Practical Application
For Content Removal: When Instagram removes a post, it must send: 'We removed your post [URL] on [date]. The post showed [description] which violates EU law against hate speech [citation] because it incites violence against [group]. This decision was made by our content review team after user report. You can appeal this decision through [link] within 6 months.'
For Account Suspension: When TikTok suspends an account, the notification must state: 'We suspended your account (@username) for 7 days starting [date]. This is your second violation of our Community Guidelines Section 4.2 (Harassment and Bullying). On [date], you commented: [quote]. This violates our policy because [explanation]. Our automated systems flagged this comment, and a human reviewer confirmed the violation. You can appeal at [link].'
For Automated Decisions: When YouTube's Content ID automatically blocks a video: 'Your video was blocked in all countries on [date]. Our automated Content ID system detected copyrighted audio from 0:15-2:30 matching [work] owned by [copyright holder]. Copyright holders can claim videos containing their works. You can dispute this claim at [link] if you believe you have rights to use this content (fair use, license, etc.).'
For Terms Violations: When Reddit bans a user for spam: 'Your account u/username was permanently banned on [date] for violating Reddit's Content Policy Rule 2 (Spam). You posted identical promotional content to 50+ subreddits within 1 hour. This violates our policy against spam because [explanation]. Prior warnings were sent on [dates]. Appeal at [link].'
For Visibility Restrictions: When Twitter (X) limits tweet reach: 'We limited the visibility of your tweet [URL] on [date]. It will not appear in search or recommendations. The tweet contains [description] which our teams determined could be misleading regarding [topic] under our Civic Integrity Policy. This decision was made by human reviewers. You can appeal at [link].'
For Notice-Based Removals: When YouTube removes content after Article 16 notice: 'We removed your video [URL] on [date] after receiving a legal notice claiming copyright infringement. The notice stated: [summary]. We reviewed the notice and determined removal was required under EU copyright law. You can appeal if you believe this is error at [link]. You can also file DMCA counter-notice if you're the copyright owner.'
For Court Order Removals: When Facebook removes content per Article 9 order: 'We removed your post [URL] on [date] pursuant to a court order from [Country] District Court (Case No. [number]). The court determined the content is illegal defamation under [law]. We must comply with lawful court orders. You may challenge the court order through legal proceedings. Appeals against our decision can be filed at [link] but will likely be denied while the order stands.'
For Multiple Violations: When Instagram terminates account after multiple strikes: 'We permanently disabled your account on [date]. This follows previous violations: (1) [date] - nudity violation in post [URL]; (2) [date] - hate speech in comment [URL]; (3) [date] - harassment in DM. Under our strikes policy, 3+ violations within 90 days result in permanent ban. Each violation was reviewed by human moderators. You can appeal at [link] within 180 days.'
For Manifestly Illegal Content: When platform removes CSAM: 'We removed content from your account on [date] for violating laws against child sexual abuse material. This content is manifestly illegal and was reported to law enforcement. Your account has been terminated. No appeal is available for this violation type.' Brief, but still informs user of action and basic reasoning without republishing illegal content.
Durable Medium Examples: Acceptable formats include: email to user's registered address (permanent), downloadable PDF notice in account settings (saveable), archived decisions in account history (persistent). Unacceptable: temporary pop-up alert that disappears, non-downloadable on-screen message, time-limited notification that auto-deletes.
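Putting the durable-medium and language requirements together, a delivery step can be sketched like this. All names here are hypothetical: the sketch filters the available channels to those that qualify as durable, and approximates "a language the recipient can easily understand" (paragraph 3) with the user's platform UI language.

```python
# Channels that plausibly qualify as a "durable medium" vs. those that don't
# (illustrative categorisation, following the examples above).
DURABLE = {"email", "downloadable_pdf", "account_archive"}

def delivery_plan(user: dict, channels: set) -> dict:
    """Hypothetical sketch: pick durable channels and a language for delivery."""
    durable = channels & DURABLE
    if not durable:
        # A popup or auto-deleting notification alone would not satisfy para. 3.
        raise ValueError("no durable medium available")
    return {
        "channels": sorted(durable),
        "language": user.get("ui_language", "en"),
    }
```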
For Small Hosting Providers: A small web host suspending customer account: Email stating: 'We suspended your hosting account [ID] on [date] for hosting content violating our Terms Section 3.4 (illegal content). Your site [domain] contained files identified as malware. This was detected by our automated security scan and verified by our technical team. Under our terms, we must remove malware immediately. You can appeal at [email] or migrate your data after removing malware.'