Providers of intermediary services shall not be deemed ineligible for the exemptions from liability referred to in Articles 4, 5 and 6 solely because they, in good faith and in a diligent manner, carry out voluntary own-initiative investigations into, or take other measures aimed at detecting, identifying and removing, or disabling access to, illegal content, or take the necessary measures to comply with the requirements of Union law and national law in compliance with Union law, including the requirements set out in this Regulation.
Voluntary own-initiative investigations and legal compliance
Understanding This Article
Article 7 of the Digital Services Act addresses one of the most critical challenges in online platform regulation: the 'Good Samaritan' problem. This provision establishes that intermediary service providers shall not lose their liability exemptions under Articles 4, 5, and 6 simply because they voluntarily undertake investigations, implement content moderation measures, or take proactive steps to detect, identify, remove, or disable access to illegal content.
The historical context of this provision is essential. Under the e-Commerce Directive 2000/31/EC, particularly Article 14, there existed significant legal uncertainty about whether platforms engaging in content moderation would lose their 'safe harbour' protections. Providers feared that actively monitoring content or removing illegal material might be interpreted as having 'actual knowledge' of all content, thereby eliminating liability exemptions and exposing them to unlimited liability for user-generated content. This created a perverse incentive: platforms that did nothing to combat illegal content maintained stronger legal protections than those trying to make their services safer.
Article 7 resolves this by explicitly codifying the 'good faith' principle. According to Recital 26 of the DSA, acting in good faith requires objectivity, non-discrimination, proportionality, due regard for the rights and legitimate interests of all parties involved, and providing necessary safeguards against unjustified removal of legal content. This means platforms can deploy automated content detection systems (like Microsoft's PhotoDNA for CSAM detection or hash-matching technologies), maintain human moderation teams reviewing millions of posts, implement AI-based filters for hate speech or terrorist content, and conduct investigations into potentially illegal activity - all without jeopardizing their fundamental liability protections.
The provision encompasses both voluntary investigations and measures taken to comply with legal requirements. This dual protection is crucial: it covers not only entirely discretionary safety measures but also steps taken to fulfill obligations under other EU or national laws (like the Terrorist Content Online Regulation or national laws requiring CSAM reporting). The article explicitly states that compliance with 'the requirements of Union law and national law in compliance with Union law, including the requirements set out in this Regulation' does not trigger loss of exemptions.
Critically, Article 7 works in tandem with Article 8's prohibition on general monitoring obligations. While Article 7 says platforms can voluntarily monitor without losing protections, Article 8 ensures they cannot be forced to implement general monitoring systems. This balance preserves platform autonomy and innovation in safety technologies while preventing mandated surveillance that would undermine fundamental rights to privacy and freedom of expression.
The relationship to CJEU case law is significant. In YouTube and Cyando (C-682/18 and C-683/18), the Court held that implementing technological measures to detect infringing content does not mean a provider plays an active role giving it knowledge and control over all content. Article 7 codifies and extends this principle, providing statutory certainty that voluntary detection measures are protected activities rather than liability-triggering conduct.
This provision incentivizes responsible platform behavior by removing legal barriers to safety innovation. Platforms can experiment with new content moderation technologies, invest in trust and safety teams, develop sophisticated detection algorithms, and implement proactive measures without fear that doing so will fundamentally alter their legal status. The result is a regulatory framework that encourages rather than penalizes efforts to combat illegal content, benefiting users, society, and platforms themselves by making online spaces safer while preserving the liability framework that enables digital services to function at scale.
Key Points
- Providers can proactively detect, investigate, and remove illegal content without losing liability exemptions under Articles 4, 5, and 6
- Voluntary content moderation measures (automated or human) don't create general liability for all platform content
- Good faith requirement includes objectivity, non-discrimination, proportionality, and due regard for user rights (Recital 26)
- Protects both voluntary safety measures and actions taken to comply with legal obligations under EU or national law
- Enables platforms to deploy AI content filters, hash-matching systems, and automated detection without forfeiting their liability exemptions
- Works together with Article 8's prohibition on general monitoring - platforms can voluntarily monitor but can't be forced to
- Codifies CJEU case law principles from YouTube and Cyando regarding technological detection measures
- Incentivizes platform safety innovation by removing legal barriers to proactive content moderation
- Explicitly covers compliance measures for other regulations (like Terrorist Content Online Regulation)
- Transparency about content moderation activities doesn't constitute admission of knowledge triggering liability
Practical Application
For Content Moderation Systems: Facebook can implement comprehensive AI-driven content moderation systems analyzing billions of posts for potential hate speech violations without these systems creating general liability for all user content. When Facebook's algorithms flag and remove a hate speech post, this voluntary action doesn't mean Facebook has 'actual knowledge' of every other post, maintaining Article 6 hosting protections. Similarly, Instagram's deployment of PhotoDNA technology to detect and report child sexual abuse material (CSAM) is explicitly protected - this proactive measure demonstrates responsible platform operation rather than assumption of liability for all images uploaded.
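The hash-matching approach mentioned above can be sketched in a few lines. This is an illustrative toy, not PhotoDNA: production systems use perceptual hashes that survive re-encoding, resizing, and cropping, whereas SHA-256 here only matches byte-identical files. The database entries and function names are hypothetical.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Exact-match fingerprint; real systems use perceptual hashing
    # so that re-encoded or cropped copies of an image still match.
    return hashlib.sha256(data).hexdigest()

# Hypothetical database of fingerprints of known illegal material,
# as would be supplied by a trusted clearinghouse.
known_illegal = {fingerprint(b"known-bad-sample")}

def screen_upload(data: bytes) -> str:
    # Only a database hit gives the provider knowledge of a specific
    # item; a miss implies nothing about the rest of the platform's
    # content - the targeted-knowledge logic Article 7 protects.
    return "remove" if fingerprint(data) in known_illegal else "publish"

print(screen_upload(b"known-bad-sample"))    # remove
print(screen_upload(b"ordinary-user-post"))  # publish
```

The point of the sketch is the asymmetry: a match triggers action on one specific item, while the scan itself never amounts to knowledge of everything uploaded.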
For Human Moderation Teams: YouTube employing thousands of content moderators who review flagged videos, make removal decisions, and enforce community standards does not thereby assume general liability for the billions of hours of video on the platform. These moderators can review content in good faith, make judgment calls about policy violations, and remove material they believe violates the terms of service, without each removal decision being treated as evidence that YouTube exercises knowledge and control over all content on the service.
For Automated Detection Tools: TikTok can implement automated systems detecting dangerous viral challenges (like the 'Blackout Challenge' that led to child deaths) without these detection systems eliminating hosting exemptions. The platform can proactively identify and remove videos showing dangerous activities based on algorithmic analysis. If the system misses some instances, TikTok doesn't automatically become liable for those it failed to detect - the voluntary effort to detect dangerous content is protected by Article 7, even if imperfect.
For Copyright Filtering Systems: YouTube's Content ID system, which scans uploaded videos against a database of copyrighted works provided by rights holders, represents exactly the type of voluntary proactive measure Article 7 protects. Content ID identifies potential copyright infringement before videos are published, offers rights holders options to block, monetize, or track content, and processes hundreds of thousands of videos daily. This sophisticated voluntary system doesn't give YouTube 'actual knowledge' of every infringing upload - it is a good faith effort to combat infringement while maintaining Article 6 protections. When Content ID identifies a match, YouTube acts on that specific information, but doesn't become liable for every infringing work the system fails to detect.
For Compliance Obligations: When platforms implement measures to comply with the Terrorist Content Online Regulation (TCO Regulation), these compliance activities don't trigger loss of DSA liability exemptions. A hosting provider required by competent authorities under the TCO Regulation to implement specific proactive measures against terrorist content retains its Article 6 protections for other content types. Article 7 explicitly protects 'measures to comply with the requirements of Union law,' ensuring compliance with one regulation doesn't inadvertently eliminate protections under another.
For Transparency and Safety Reporting: Platforms publishing transparency reports detailing content removal statistics (e.g., 'We removed 2 million hate speech posts this quarter, 95% detected proactively by automated systems') don't inadvertently admit liability-triggering knowledge through such transparency. These reports demonstrate compliance with Article 15 DSA transparency requirements and showcase responsible platform operation. Article 7 ensures this transparency doesn't backfire legally by being construed as admission of knowledge requiring liability for all similar content.
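A transparency-report figure like the one quoted ('95% detected proactively') is just an aggregation over per-removal records. A minimal sketch with hypothetical data:

```python
from collections import Counter

# Hypothetical per-removal log entries; real reports aggregate millions.
removals = [
    {"category": "hate_speech", "detection": "automated"},
    {"category": "hate_speech", "detection": "automated"},
    {"category": "hate_speech", "detection": "user_report"},
    {"category": "spam",        "detection": "automated"},
]

by_detection = Counter(r["detection"] for r in removals)
total = sum(by_detection.values())
proactive_rate = by_detection["automated"] / total

print(f"Removed {total} items; {proactive_rate:.0%} detected proactively")
# Removed 4 items; 75% detected proactively
```

Publishing such aggregates describes past, item-specific actions; under Article 7 it does not convert the summary into knowledge of content the systems never flagged.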
For Innovation in Safety Technologies: Startups can develop and deploy novel safety technologies without fear of liability consequences. A new social media platform implementing experimental AI systems for detecting harassment patterns, coordinated inauthentic behavior, or misinformation campaigns can iterate, test, and refine these systems. If version 1.0 has a high false positive rate, the platform can improve it without each error becoming evidence of negligent content management. Article 7's protection of good faith efforts enables technological innovation in platform safety.
Real-World Example - Reddit's Approach: Reddit maintains both automated systems and community-driven moderation (subreddit moderators). Reddit's automated systems flag potential policy violations (spam, brigading, ban evasion), while volunteer moderators govern individual communities. Reddit can implement site-wide automated detection for illegal content (CSAM, violent extremism) while community moderators handle localized content moderation. All these voluntary measures - from automated spam detection to moderator removal of rule-violating posts - are protected by Article 7. Reddit doesn't become liable for every post simply because it facilitates extensive content moderation; instead, these efforts demonstrate responsible platform operation while maintaining hosting protections for user-generated content it doesn't specifically know violates the law.