Chapter 3 | Due Diligence Obligations - Hosting Services
1. Providers of hosting services shall provide mechanisms for any individual or entity to notify them of the presence on their service of specific items of information that the individual or entity considers to be illegal content. Those mechanisms shall be easy to access and user-friendly, and shall allow for the submission of notices exclusively by electronic means.
2. The mechanisms referred to in paragraph 1 shall be such as to facilitate the submission of sufficiently precise and adequately substantiated notices. To that end, the providers of hosting services shall take the necessary measures to enable and facilitate the submission of notices containing all of the following elements:
(a) a sufficiently substantiated explanation of the reasons why the individual or entity alleges the information in question to be illegal content;
(b) a clear indication of the exact electronic location of that information, such as the exact URL or URLs, and, where necessary, additional information enabling the identification of the illegal content adapted to the type of content and to the specific type of hosting service;
(c) the name and email address of the individual or entity submitting the notice, except in the case of information considered to involve one of the offences referred to in Articles 3 to 7 of Directive 2011/93/EU;
(d) a statement confirming the bona fide belief of the individual or entity submitting the notice that the information and allegations contained therein are accurate and complete.
3. Notices referred to in this Article shall be considered to give rise to actual knowledge or awareness for the purposes of Article 6 where they allow a diligent hosting service provider to identify the illegality of the relevant content without a detailed legal examination.
4. Where a notice contains the notifier's electronic contact information, providers shall, without undue delay, send a confirmation of receipt of the notice to that contact information, and inform the notifier of their decision in respect of the information to which the notice relates.
5. Providers shall process notices and take decisions in respect of the notified content in a timely, diligent, non-arbitrary and objective manner. Where they use automated means for that processing or decision-making, they shall include information on such use in the notification of their decision.
6. Providers shall make information on the functioning of the mechanisms referred to in paragraph 1, including information on safeguards against misuse, publicly available and easily accessible.
Understanding This Article
Article 16 is arguably THE most important operational provision in the DSA. It establishes the standardized 'notice-and-action' mechanism that enables anyone - users, organizations, rights holders, NGOs - to report illegal content to hosting providers and obligates providers to respond systematically.
This replaces the fragmented, platform-specific reporting systems that previously existed, where each service had different processes, requirements, and responsiveness levels. Now ALL hosting services - from social media giants to small blog hosts - must implement compliant notice mechanisms meeting minimum standards.
The notice requirements balance accessibility with quality. Mechanisms must be 'easy to access and user-friendly' - no hidden forms or complex processes. But notices must also be substantiated - explaining WHY content is illegal, providing exact locations (URLs), and including reporter identification. This prevents frivolous mass reporting while enabling legitimate complaints.
Paragraph 3 contains critical language: properly formatted notices create 'actual knowledge' for Article 6 purposes. This means hosting providers receiving valid Article 16 notices MUST act expeditiously to remove illegal content or lose their liability exemption. The notice triggers the Article 6 duty to act.
However, not ALL notices create actual knowledge - only those enabling 'a diligent provider to identify the illegality without a detailed legal examination.' If a notice merely says 'this is illegal' without explaining why, or alleges complex legal violations requiring expert analysis, it may not trigger actual knowledge. The illegality must be apparent to a reasonable provider.
Providers must respond to notices transparently - acknowledging receipt, informing reporters of decisions, explaining the reasoning. If automated tools process notices, this must be disclosed. This prevents 'black hole' reporting where notices vanish without response.
The CSAM exception (paragraph 2(c)) recognizes that child safety reporters often want anonymity. Notices about child sexual abuse material don't require identification, protecting reporters from retaliation while enabling urgent action against the most serious illegal content.
Key Points
ALL hosting services must provide easy-to-use mechanisms for reporting illegal content
Notices must include substantiated explanation of why content is illegal
Notices must provide exact URLs or precise location information
Notices must include reporter's name and email (except for CSAM)
Providers must acknowledge receipt and inform reporters of decisions
Processing must be timely, diligent, non-arbitrary, and objective
This is THE fundamental mechanism for addressing illegal content online
Practical Application
For Platform Implementation: Facebook must implement a clear 'Report' button on every post, photo, video, and comment. Clicking opens a form with dropdown menus for violation types (hate speech, violence, sexual content, illegal products, etc.), a text box for explanation, automatic capture of the content's URL, and fields for the reporter's name and email. The form guides users through the requirements without demanding legal expertise.
For Valid Notices: A valid notice reporting copyright infringement on YouTube might state: 'This video at youtube.com/watch?v=ABC123 contains my copyrighted song "Title" from album "Name" without permission. I own copyright (registration #12345). The entire audio track from 0:00-3:30 is my work. I request removal.' This gives YouTube enough information to assess illegality without legal analysis - actual knowledge created.
For Invalid Notices: A notice stating 'The blog post at example.com/post is illegal because it violates my rights' lacks substantiation. WHAT rights? WHY is it illegal? What specific law? This doesn't create actual knowledge - the provider would need legal analysis to determine illegality. The provider can reject this notice as insufficiently substantiated.
For Automated Processing: Instagram might use AI to pre-screen notices, flagging those about CSAM for immediate human review and automatic content blocking pending review, while routing copyright notices to a separate queue. When sending decisions, Instagram must disclose: 'This notice was initially processed by automated systems and reviewed by our trust and safety team.'
For Response Obligations: Within 24-48 hours of receiving a notice, providers should send: 'We received your notice about [content]. We're reviewing it and will inform you of our decision.' Then after review: 'We've reviewed your notice. Decision: Content removed because it violates EU hate speech laws. The content was removed on [date].' Or: 'Content remains available because we determined it doesn't violate laws or our terms based on [reasoning].'
For CSAM Reporting: When someone reports child sexual abuse material on Twitter (X), they can submit anonymously without providing name/email. Twitter must treat these with utmost urgency - immediate human review, immediate removal if confirmed, reporting to NCMEC/law enforcement per legal obligations. The anonymity protects reporters while enabling action.
For Trademark Notices: A brand noticing counterfeit products on Amazon submits an Article 16 notice: 'Product at amazon.com/dp/PRODUCT uses our registered trademark [mark] without authorization to sell counterfeit goods. We own EU trademark registration #EU123456. The listing title, images, and description all use our mark. This is trademark infringement under EU law.' Amazon must review and act - either removing the listing or explaining why it is not infringing.
For Defamation Claims: Someone claiming a blog post defames them must explain: what statements are false, why they're false, why statements harm reputation, which defamation law applies, and why no defense (truth, opinion, public interest) applies. If the explanation enables the hosting provider to recognize clear defamation (provably false statements of fact causing harm), actual knowledge arises. If it's debatable whether statements are opinion vs fact or whether public interest defense applies, the provider may need more analysis.
For Frivolous Reporting: If someone submits 1,000 notices claiming every critical review of their business is 'illegal defamation' without substantiation, providers can implement safeguards against misuse per paragraph 6 - rate limiting, requiring more detailed explanations from repeat reporters, or temporarily suspending reporting privileges for abuse. This protects against weaponized reporting.
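One possible misuse safeguard of the kind paragraph 6 contemplates is a per-notifier rate limit. The sliding-window approach, the class name, and the thresholds below are all our own illustrative assumptions - the DSA requires safeguards against misuse but does not prescribe any mechanism or numbers.

```python
from collections import defaultdict, deque

class NotifierRateLimiter:
    """Illustrative misuse safeguard: cap notices per notifier per time window.

    Thresholds are arbitrary examples, not values taken from the DSA.
    """
    def __init__(self, max_notices: int = 50, window_seconds: int = 86400):
        self.max_notices = max_notices
        self.window = window_seconds
        self._log: dict[str, deque[float]] = defaultdict(deque)

    def allow(self, notifier_id: str, now: float) -> bool:
        """Return True if this notifier may submit another notice right now."""
        q = self._log[notifier_id]
        while q and now - q[0] > self.window:  # drop submissions outside the window
            q.popleft()
        if len(q) >= self.max_notices:
            return False  # e.g. route to manual review rather than auto-processing
        q.append(now)
        return True
```

In practice a provider would pair this with graduated responses - stricter substantiation requirements for repeat reporters, then temporary suspension - rather than a silent hard block, so legitimate high-volume notifiers (e.g. trusted flaggers) are not caught by it.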
For Small Hosting Providers: A small web hosting company must still implement compliant mechanisms. They might create a simple email form: 'Report illegal content: (1) URL of content: ___; (2) Why is it illegal?: ___; (3) Your name: ___; (4) Your email: ___; (5) I confirm this information is accurate: [ ].' They send templated acknowledgments and decisions. Even simple implementation satisfies Article 16.
For Cross-border Issues: A German user reports content hosted by US-based Reddit as violating Germany's prohibition on Holocaust denial. Under the DSA, 'illegal content' includes content that is unlawful under the law of any Member State, so German law applies here. If Reddit determines the content violates German law and the notice is properly formatted, Reddit gains actual knowledge and must act - removing the content for German users at minimum, possibly EU-wide depending on the legal analysis.