No general monitoring or active fact-finding obligations
Chapter 2 | Liability of Providers of Intermediary Services
1. No general obligation to monitor the information which providers of intermediary services transmit or store, nor actively to seek facts or circumstances indicating illegal activity shall be imposed on those providers.
2. Paragraph 1 shall not affect obligations to monitor or obligations to seek facts or circumstances indicating illegal activity, with regard to a specific item of information that providers of intermediary services may be subject to under Union law or national law in compliance with Union law, or orders issued by national authorities in accordance with such law.
3. Paragraph 1 shall be without prejudice to the possibility for providers of intermediary services to voluntarily carry out activities aimed at detecting, identifying and removing, or disabling of access to, illegal content.
Understanding This Article
Article 8 of the Digital Services Act establishes one of the regulation's most fundamental and constitutionally significant principles: the prohibition on general monitoring obligations. This provision explicitly states that no general obligation to monitor information transmitted or stored by intermediary services, nor any obligation to actively seek facts or circumstances indicating illegal activity across all content, shall be imposed on providers. This principle represents a continuation and strengthening of Article 15 of the e-Commerce Directive 2000/31/EC, now codified with enhanced clarity in the DSA framework.
The constitutional and practical importance of this prohibition cannot be overstated. General monitoring - the requirement that platforms systematically scan and analyze all user content for potential illegality - would create several fundamental problems. First, it would impose technologically burdensome and economically prohibitive requirements, particularly on smaller providers who lack the resources for comprehensive content scanning infrastructure. Second, it would create severe chilling effects on freedom of expression, as users would know every communication is subjected to automated surveillance and analysis. Third, it would conflict with fundamental rights to privacy and data protection, as comprehensive monitoring necessarily involves processing vast amounts of personal data. Fourth, it would undermine the very architecture of the internet by requiring intermediaries to act as private enforcement mechanisms for all areas of law.
The CJEU has repeatedly affirmed these principles in landmark cases. In Scarlet Extended v. SABAM (C-70/10, 2011), the Court held that requiring an ISP to install a filtering system monitoring all electronic communications would violate Article 15(1) of the e-Commerce Directive. The Court emphasized that such a system would require 'active observation of all the electronic communications' and would apply indiscriminately to all customers, all content, and all time periods. The Court further noted that general monitoring would fail to strike a fair balance between intellectual property rights, freedom to conduct business, freedom of expression, and data protection rights.
Similarly, in SABAM v. Netlog (C-360/10, 2012), the Court held that requiring a social networking platform to install a filtering system monitoring all user activity would constitute prohibited general monitoring. These cases established that while specific, targeted measures against identified illegal content are permissible, blanket surveillance systems are not. The DSA's Article 8 codifies these principles while providing enhanced clarity on the distinction between prohibited general monitoring and permissible specific measures.
Critically, Article 8(2) clarifies that the prohibition on general monitoring does not affect specific monitoring obligations regarding particular items of information. This distinction is essential: authorities can require platforms to monitor for and prevent the reappearance of specific identified illegal content (for example, a specific defamatory image), but cannot require platforms to scan all content for potential illegality of any type. The specificity requirement serves as a constitutional safeguard, ensuring monitoring obligations are narrowly tailored to address identified harms rather than implementing generalized surveillance.
Article 8(3) reinforces the relationship with Article 7 by confirming that the prohibition on mandatory general monitoring is without prejudice to providers' ability to voluntarily carry out detection and removal activities. This creates a crucial balance: platforms cannot be forced to monitor everything, but they can choose to implement safety measures. This preserves provider autonomy while preventing the imposition of surveillance obligations that would fundamentally alter the internet's architecture.
The prohibition applies across all intermediary service categories - mere conduit (Article 4), caching (Article 5), and hosting (Article 6). Even services that primarily facilitate communication without actively hosting content cannot be required to implement general monitoring of data transmission. This ensures the principle protects the entire internet intermediary ecosystem, from ISPs to hosting providers to social media platforms.
Recital 27 emphasizes that the prohibition should be interpreted broadly to prevent circumvention. Orders from judicial or administrative authorities must comply with this prohibition, meaning courts cannot use their enforcement powers to effectively impose general monitoring through specific-sounding orders that functionally require scanning all content. The Commission's 2023 guidance on Articles 8 and 9 emphasizes that orders must identify specific content with precision (exact URLs, hash values, specific identifiers) rather than describing categories of illegal content that would require general scanning to identify.
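To make the precision requirement concrete, the sketch below (hypothetical field names and sample values throughout) models an order that identifies one specific item by URL and cryptographic hash, so that compliance means matching only that item rather than classifying all content:

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class RemovalOrder:
    """Hypothetical order record: identifies ONE specific item, not a category."""
    issuing_authority: str
    url: str      # exact location of the identified item
    sha256: str   # cryptographic hash of the identified content

def matches_order(uploaded_bytes: bytes, order: RemovalOrder) -> bool:
    """Specific measure: flags only content identical to the identified
    item. Nothing here evaluates uploads against a general category of
    illegality -- that would be prohibited general monitoring."""
    return hashlib.sha256(uploaded_bytes).hexdigest() == order.sha256

order = RemovalOrder(
    issuing_authority="Example National Court",
    url="https://example.com/post/123",
    sha256=hashlib.sha256(b"the identified illegal item").hexdigest(),
)
print(matches_order(b"the identified illegal item", order))  # True
print(matches_order(b"an unrelated upload", order))          # False
```

The design point is that the order itself carries everything needed to recognize the identified item; the provider never has to build a general-purpose scanner to comply.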
Key Points
No general obligation to monitor all information intermediary services transmit or store
No obligation to actively seek facts or circumstances indicating illegal activity across all content
Continues and strengthens Article 15 of e-Commerce Directive 2000/31/EC
Protects fundamental rights: freedom of expression, privacy, data protection, and freedom to conduct business
CJEU cases Scarlet Extended (C-70/10) and SABAM v. Netlog (C-360/10) established constitutional basis
Specific monitoring obligations for particular identified content remain permissible (Article 8(2))
Distinction: general monitoring of all content = prohibited; specific measures for identified content = allowed
Applies to all intermediary service categories: mere conduit, caching, and hosting
Voluntary monitoring and content moderation remain explicitly permitted (Article 8(3) and Article 7)
Court orders must comply with the prohibition - authorities cannot circumvent it through specific-sounding but functionally general orders
Prevents technological and economic burdens that would be prohibitive, especially for smaller providers
2024 Swedish Administrative Court case confirmed no-general-monitoring principle remains enforceable
Practical Application
For Upload Filter Debates: During the Copyright Directive (Article 17) debates, significant controversy surrounded whether requiring platforms to implement upload filters would violate the prohibition on general monitoring. The final text addresses this in Article 17(8), which provides that the application of Article 17 shall not lead to any general monitoring obligation, even though Article 17(4) requires best efforts to ensure the unavailability of specific works for which rights holders have supplied the relevant and necessary information. This accommodation is narrow and specific to copyright; it does not authorize general content filtering for all types of illegal content. Under the DSA, Member States cannot require platforms to implement upload filters scanning all content for hate speech, defamation, privacy violations, or other illegality categories - such requirements would violate Article 8.
For Copyright Enforcement: Rights holders cannot demand that YouTube scan all uploaded videos against all copyrighted works in existence to prevent infringement. This would constitute general monitoring. Instead, rights holders can use notice-and-takedown procedures for specific works, participate in voluntary systems like Content ID (which YouTube offers but isn't legally required to provide), or obtain court orders requiring YouTube to prevent re-upload of specific identified infringing works. The distinction: requiring scanning of all uploads against all copyrights = prohibited general monitoring; requiring prevention of specific work's re-upload after identification = permissible specific monitoring.
For Court Orders Against Specific Content: A German court determines that a specific Facebook post contains illegal defamation. The court can order Facebook to: (1) remove that specific post; (2) prevent the exact same content from being re-uploaded by the same or other users; (3) implement reasonable measures to detect and block re-uploads of identical or equivalent content (same text, same image, etc.) - an approach the CJEU endorsed, within limits, in Glawischnig-Piesczek v Facebook Ireland (C-18/18, 2019). What the court cannot order: (4) scan all Facebook posts for potentially defamatory content generally; (5) implement AI systems evaluating all posts for defamation risk; (6) proactively identify and remove all potentially illegal defamatory speech. Orders 1-3 are specific monitoring (permitted); orders 4-6 are general monitoring (prohibited).
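The permitted/prohibited line in this scenario can be sketched as matching uploads only against the specifically identified item, with light normalization to catch trivially altered ("equivalent") re-uploads. Everything below (the normalization rule, names, sample text) is a hypothetical illustration, not a compliance recipe:

```python
import hashlib
import re

def normalize(text: str) -> str:
    """Collapse case, punctuation, and whitespace so trivially altered
    re-uploads of the same identified text still match."""
    return re.sub(r"[^a-z0-9]+", " ", text.lower()).strip()

# Normalized hash of the ONE post the court identified as defamatory
# (hypothetical content; in practice taken from the order itself).
identified = {
    hashlib.sha256(normalize("The Specific Defamatory Post!").encode()).hexdigest()
}

def screen_upload(text: str) -> str:
    """Specific measure: flags only re-uploads equivalent to the
    identified item. Note what is absent: no classifier scoring every
    post for 'defamation risk' -- that would be general monitoring."""
    digest = hashlib.sha256(normalize(text).encode()).hexdigest()
    return "blocked" if digest in identified else "allowed"

print(screen_upload("The Specific   Defamatory Post"))  # blocked
print(screen_upload("a completely different post"))     # allowed
```

How far "equivalent content" may stretch beyond near-identical matches is itself legally constrained; the broader the matching logic, the closer a measure drifts toward prohibited general monitoring.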
For National Law Enforcement Requests: French authorities identify a terrorist propaganda video spreading online. They can issue orders requiring platforms that hosted the specific video to remove it and prevent its re-upload (using video hash matching or equivalent). They cannot require all platforms to implement systems scanning all uploaded videos for potential terrorist content. The former is a specific measure tied to identified illegal content; the latter is general monitoring. However, under the Terrorist Content Online Regulation (separate from the DSA), one-hour removal obligations for terrorist content identified by authorities apply - but this is specific content identification by authorities, not a requirement for platforms to proactively scan everything.
For Platform Trust and Safety Operations: Twitter operates trust and safety teams that review reported content, investigate coordinated inauthentic behavior, and enforce platform rules. Article 8 doesn't prohibit these voluntary activities (protected by Article 7). However, if a regulator ordered Twitter to 'implement systems ensuring no illegal content ever appears,' this would violate Article 8. The distinction: Twitter choosing to moderate = legal and encouraged; requiring Twitter to guarantee prevention of all illegal content through comprehensive monitoring = prohibited general monitoring obligation.
For ISP Network Monitoring: Internet Service Providers (ISPs) face requests from law enforcement and rights holders to monitor network traffic for illegal file sharing, CSAM distribution, or other criminal activity. Article 8 prohibits imposing general obligations requiring ISPs to scan all network traffic. However, ISPs can be required to respond to specific information requests (Article 10), disclose subscriber information for identified illegal activity, and block access to specific illegal websites or content when ordered by competent authorities. The prohibition prevents 'scan everything' obligations while preserving targeted law enforcement capabilities.
For Regulatory Pressure and Voluntary Measures: Sometimes regulators apply pressure for platforms to 'do more' about illegal content without formally imposing obligations. For example, a Digital Services Coordinator (DSC) might strongly suggest platforms should implement proactive detection for CSAM or terrorist content. Platforms often comply voluntarily to demonstrate responsibility. Article 8 ensures such measures remain voluntary - platforms can implement them (protected by Article 7) but cannot be legally required to do so through regulatory coercion disguised as guidance. If voluntary measures become de facto mandatory through regulatory pressure, this effectively circumvents Article 8 and could be challenged.
For Cross-Border Complexity: A hosting provider operates across all EU Member States. Different Member States might have varying illegal content laws (defamation standards, hate speech definitions, historical denial laws). Article 8 prevents any Member State from requiring the provider to implement general monitoring ensuring compliance with all potentially applicable laws across all jurisdictions. Such a requirement would be technologically impossible and legally impermissible. Instead, authorities must identify specific illegal content and issue specific orders, which may have territorial scope limitations based on applicable law and fundamental rights considerations.
Real-World Example - Swedish Medical Products Case (2024): In one of the first DSA-based court decisions, Sweden's Administrative Court in Uppsala examined whether national medical products regulators imposed a general monitoring obligation on online intermediaries by requiring them to prevent illegal pharmaceutical sales. The court held there was no general monitoring obligation because the requirements were specific to identified illegal pharmaceutical products rather than requiring platforms to scan all listings for potential pharmaceutical law violations. This case illustrates Article 8's practical application: specific measures against identified violations are permissible; requirements to proactively scan everything for potential violations are not.