Article 18

Notification of suspicions of criminal offences

1. Where a provider of hosting services becomes aware of any information giving rise to a suspicion that a criminal offence involving a threat to the life or safety of a person or persons has taken place, is taking place or is likely to take place, it shall promptly inform the law enforcement or judicial authorities of the Member State or Member States concerned of its suspicion and provide all relevant information available.

2. Where the provider of hosting services cannot identify with reasonable certainty the Member State concerned, it shall inform the law enforcement authorities of the Member State in which it is established or where its legal representative resides or is established, or inform Europol, or both.

3. For the purposes of this Article, the Member State concerned shall be the Member State where the suspected criminal offence has taken place or is likely to take place, or the Member State where the suspected offender is located or the victim is located.

Understanding This Article

Article 18 establishes a critical public safety obligation for hosting providers: when they encounter content or information suggesting serious criminal activity threatening life or safety, they must promptly alert law enforcement authorities. This creates a direct channel for platforms to escalate the most dangerous content beyond mere removal to active law enforcement intervention.

The scope is deliberately narrow: not all illegal content triggers reporting obligations, only suspected criminal offences involving threats to life or safety. This includes terrorist plots, mass shooting threats, kidnapping plans, coordination of imminent violence, credible suicide threats requiring intervention, and similar grave dangers.

The obligation arises when providers 'become aware' - through content moderation, user reports, automated detection, or any other means. Once providers have information giving rise to suspicion (not proof, just reasonable suspicion), the reporting duty activates. This encourages platforms to act on danger signals without requiring complete certainty.

'Promptly inform' means immediate or near-immediate reporting - minutes to hours, not days. When lives are at stake, delay can be catastrophic. Providers must treat these situations with the urgency they demand, providing all relevant available information including user identities, IP addresses, timestamps, content copies, and any context that might help authorities intervene.

Paragraph 2 addresses jurisdictional uncertainty. If it's unclear which Member State is affected (transnational threats, unclear locations), providers notify their own establishment country's authorities or Europol (the EU's law enforcement agency for serious cross-border crime), or both. This prevents inaction due to jurisdictional confusion.
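The fallback rule in paragraph 2 can be sketched as a simple routing decision. This is a hypothetical illustration only; the function and label names are invented for clarity and are not drawn from the Regulation:

```python
from typing import List

def notification_recipients(member_states_concerned: List[str],
                            establishment_state: str) -> List[str]:
    """Sketch of the Article 18 routing logic.

    If the Member State(s) concerned can be identified, notify the
    authorities of each of them (paragraph 1). If not, fall back to
    the establishment (or legal representative's) state and Europol,
    or both (paragraph 2).
    """
    if member_states_concerned:
        return [f"law enforcement: {ms}" for ms in member_states_concerned]
    # Jurisdiction unclear: paragraph 2 fallback
    return [f"law enforcement: {establishment_state}", "Europol"]
```

The sketch notifies both the establishment state and Europol in the unclear case, reflecting the "or both" option and the commentary's point that over-notification is preferable to inaction.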

Importantly, this obligation doesn't replace normal content removal duties. Providers should both report to authorities and take immediate action to remove or restrict dangerous content. Article 18 adds a law enforcement notification layer on top of existing moderation obligations.

Key Points

  • Hosting providers must promptly notify law enforcement of suspected serious crimes threatening life/safety
  • Notification required when an offence has occurred, is occurring, or is likely to occur
  • Must provide all relevant available information to authorities
  • If Member State unclear, notify provider's establishment country or Europol
  • Applies to credible threats, planned violence, imminent attacks
  • Critical for preventing terrorism, mass violence, and life-threatening situations
  • Supplements content removal obligations with proactive law enforcement cooperation

Practical Application

For Terrorism Content: When Facebook discovers users coordinating a planned terrorist attack - sharing target locations, weapon acquisitions, attack timing - Facebook must immediately notify law enforcement in affected Member States. Facebook should provide: user identities, IP addresses, message logs, any intelligence about planned timing/location, and preserve evidence for investigation. Simultaneously, Facebook should remove the content and disable accounts.

For Mass Violence Threats: If YouTube identifies a video where someone credibly threatens to commit a school shooting, showing weapons and specific school locations, YouTube must promptly report to law enforcement in the Member State where the school is located. The report should include the video file, uploader information, timestamps, any comments providing additional context, and any linked accounts or associated content.

For Kidnapping Coordination: When Instagram moderators encounter posts suggesting active kidnapping or human trafficking (location sharing of victims, coordination between traffickers, evidence of coercion), Instagram must immediately alert authorities in the Member State where the suspected crime is occurring or where victims may be located, providing all available intelligence while preserving evidence.

For Imminent Suicide Intervention: If TikTok's systems identify content showing someone about to attempt suicide (a live-stream from a bridge, videos showing preparations, farewell messages with timing indications), TikTok should immediately notify emergency services and law enforcement in the relevant jurisdiction, providing location information, user identity, timing information, and any details enabling intervention. This goes beyond simple content removal: rapid response may save lives.

For Cross-Border Threats: When Twitter (X) discovers coordination of violence against EU Parliament facilities but can't determine which specific Member State's authorities have jurisdiction, Twitter notifies Belgian and French authorities (the Parliament has facilities in Brussels and Strasbourg), Europol, and potentially multiple Member State coordinators. Better to over-report than under-report when lives are at stake.

For Marketplace Dangers: If a marketplace platform like Amazon discovers sellers offering bomb-making materials or weapons to buyers apparently planning attacks (based on message exchanges or suspicious patterns), Amazon must report to law enforcement, providing transaction records, buyer/seller identities, shipping addresses, communications, and any intelligence about intended use.

For Child Endangerment: When Reddit moderators identify content suggesting imminent child abuse - predators sharing plans to meet minors, coordination of abuse, or evidence of ongoing abuse situations - Reddit must immediately alert law enforcement and child protection authorities in affected Member States, providing full user information, content evidence, and any data helping identify victims or perpetrators.

For Europol Reporting: When Telegram identifies coordination by a terrorist network spanning multiple EU Member States, making single-country reporting insufficient, Telegram reports to Europol with comprehensive intelligence: network member identities, communication patterns, discussed plans, timelines, locations, and any indicators of operational readiness. Europol can coordinate the international response.

Information to Provide: Complete reports should include: (1) specific content/evidence (copies, screenshots, URLs); (2) user/account information (names, IDs, IP addresses, locations); (3) timing (when content appeared, when discovered, activity timestamps); (4) context (related accounts, communication patterns, threat indicators); (5) preservation notice (evidence is preserved and available to authorities); (6) contact information for platform security team for follow-up.
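The six-part checklist above can be represented as a structured report payload. The following is a sketch only; the class and field names are illustrative and not mandated by the Regulation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Article18Report:
    """Hypothetical payload mirroring the six report elements above."""
    # (1) specific content/evidence
    evidence_urls: List[str]
    # (2) user/account information
    account_ids: List[str]
    ip_addresses: List[str]
    # (3) timing (ISO 8601 timestamps)
    content_posted_at: str
    discovered_at: str
    # (6) follow-up contact for the platform security team
    contact_email: str
    # (4) context
    related_accounts: List[str] = field(default_factory=list)
    threat_indicators: List[str] = field(default_factory=list)
    # (5) preservation notice
    evidence_preserved: bool = True

    def is_complete(self) -> bool:
        """Minimal completeness check before dispatching the report."""
        return bool(self.evidence_urls and self.account_ids
                    and self.discovered_at and self.contact_email)
```

A platform's trust-and-safety tooling could run `is_complete()` as a gate before transmission, ensuring no report goes out missing evidence, account data, or a follow-up contact.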

For False Positives: If a platform reports a suspected threat that turns out to be misinterpreted content (fiction, roleplay, or satire mistaken for a real threat), it is better to have reported and been wrong than to have failed to report a real danger. The 'suspicion' standard doesn't require certainty; reasonable grounds for concern suffice. Authorities can assess credibility; platforms should err on the side of reporting when life or safety might be at stake.

For Automated Detection: If YouTube's AI systems flag videos for potential terrorist content based on visual/audio analysis and human review confirms credible threats, YouTube reports to authorities. Automated detection alone doesn't trigger reporting; human assessment confirming genuine danger does. But automation can find needles in haystacks, identifying dangerous content that would otherwise go unnoticed.

For Small Hosting Providers: A small blog hosting service discovering a customer's blog containing credible threats of violence must still report to law enforcement, despite limited resources. A simple report - 'We host a blog at [URL] for user [identity] containing what appear to be threats to commit violence at [location] on [date]. Here is the content: [copy]. We are preserving evidence and available for follow-up at [contact].' - satisfies the obligation.