Article 14

Terms and conditions

1. Providers of intermediary services shall include information on any restrictions that they impose in relation to the use of their service in respect of information provided by the recipients of the service, in their terms and conditions. That information shall include information on any policies, procedures, measures and tools used for the purpose of content moderation, including algorithmic decision-making and human review. It shall be set out in clear, plain, intelligible, user-friendly and unambiguous language, and shall be publicly available in an easily accessible format.

2. Providers shall act in a diligent, objective and proportionate manner in applying and enforcing the restrictions referred to in paragraph 1, with due regard to the rights and legitimate interests of all parties involved, including the fundamental rights of the recipients of the service, such as the freedom of expression, freedom and pluralism of the media, and other fundamental rights and freedoms as enshrined in the Charter.

3. Providers of intermediary services shall publish their terms and conditions in an easily accessible, readable and machine-readable format.

4. Providers shall notify recipients of the service of any significant changes to their terms and conditions.

Understanding This Article

Article 14 requires all intermediary services to maintain transparent, accessible terms and conditions that clearly explain what content and behavior are permitted on their services. This fundamental obligation ensures users understand the rules before using a service and can assess whether moderation decisions comply with stated policies.

The terms must comprehensively describe content moderation practices: both what is prohibited (hate speech, violence, illegal content) and how enforcement works (automated filters, human review, appeal processes). Users have a right to know whether an algorithm might automatically remove their content or whether humans will review reports.

Paragraph 2 establishes critical standards for enforcement: providers must apply their terms 'diligently' (carefully and thoroughly), 'objectively' (without bias or discrimination), and 'proportionately' (responses match violation severity). Importantly, enforcement must respect fundamental rights - particularly freedom of expression. Providers can't use terms to suppress lawful speech or silence legitimate criticism.

The machine-readable format requirement enables automated analysis, supporting researchers, regulators, and advocacy groups in monitoring platform policies. This transparency facilitates accountability and comparative analysis across platforms.

Notification of changes prevents 'gotcha' enforcement where users violate new rules they didn't know about. Significant changes - like prohibiting previously-allowed content or introducing new moderation tools - require advance notice, giving users time to adapt.

Key Points

  • All providers must have clear, accessible terms and conditions
  • Must explain content restrictions and moderation policies
  • Must describe algorithmic and human review processes
  • Terms must be written in clear, plain, user-friendly language
  • Must apply terms diligently, objectively, and proportionately
  • Must respect fundamental rights including freedom of expression
  • Must notify users of significant changes to terms

Practical Application

For Social Media Platforms: Facebook's Community Standards, Instagram's Community Guidelines, and TikTok's Community Guidelines must clearly explain what content is prohibited (violence, hate speech, nudity, etc.), how violations are detected (automated systems, user reports), how decisions are made (AI screening, human review), and what happens upon violation (content removal, account suspension, strikes system).

For Content Moderation Details: YouTube must disclose that it uses AI to automatically screen uploads for copyright infringement, child safety issues, and violent extremism, but that humans review appeals and borderline cases. The terms should explain how the AI works at a high level (hash matching, machine learning models) without revealing exploitable technical details.
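As a rough illustration of the "high level without exploitable detail" disclosure described above, consider exact hash matching: comparing an upload's fingerprint against a list of previously flagged material. This is a minimal sketch only; real systems such as YouTube's Content ID rely on perceptual and machine-learning matching rather than exact cryptographic hashes, and the blocklist here is entirely hypothetical.

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of previously flagged files.
# An exact-hash match catches only verbatim re-uploads, which is why
# borderline cases and appeals still go to human review.
KNOWN_HASHES = {
    hashlib.sha256(b"previously flagged video bytes").hexdigest(),
}

def screen_upload(data: bytes) -> str:
    """Return a moderation decision for an uploaded file.

    A hash hit is queued for human review rather than auto-removed,
    reflecting the automated-screening-plus-human-appeal split that
    the terms must disclose.
    """
    digest = hashlib.sha256(data).hexdigest()
    if digest in KNOWN_HASHES:
        return "queued_for_human_review"
    return "published"

print(screen_upload(b"previously flagged video bytes"))  # queued_for_human_review
print(screen_upload(b"original content"))                # published
```

The point of the sketch is the disclosure boundary: terms can explain that uploads are fingerprinted and compared against known material without publishing the matching thresholds an evader could exploit.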

For Clarity Requirements: Terms cannot consist of vague statements such as 'we reserve the right to remove any content'; they must specifically identify prohibited categories. Instagram can't just say 'inappropriate content' - it must explain that nudity is generally prohibited except for breastfeeding, health contexts, and artistic expression, with clear examples.

For Proportionate Enforcement: If someone posts a single comment that marginally violates terms (e.g., mild profanity in an otherwise civil discussion), permanent account termination would be disproportionate. Warnings, temporary restrictions, or content removal alone would be more appropriate first responses. Platforms should maintain strike systems or graduated sanctions.
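The graduated-sanctions idea above can be sketched as a simple escalation ladder. The tiers, thresholds, and the rule that severe violations skip straight to removal are illustrative assumptions, not anything Article 14 prescribes:

```python
from dataclasses import dataclass

# Illustrative sanctions ladder, mildest first. Actual platforms
# define their own tiers; Article 14 only requires that enforcement
# be proportionate to the violation.
SANCTIONS = ["warning", "content_removal", "temporary_suspension", "permanent_ban"]

@dataclass
class Account:
    strikes: int = 0

def enforce(account: Account, severe: bool = False) -> str:
    """Apply the next proportionate sanction for a violation.

    Minor violations climb one rung per strike; a severe violation
    (e.g. clearly illegal content) jumps at least to removal.
    """
    account.strikes += 1
    tier = min(account.strikes - 1, len(SANCTIONS) - 1)
    if severe:
        tier = max(tier, SANCTIONS.index("content_removal"))
    return SANCTIONS[tier]

acct = Account()
print(enforce(acct))  # warning
print(enforce(acct))  # content_removal
```

A first-time, marginal violation thus draws a warning, and permanent bans are reached only after repeated or severe breaches - the proportionality Article 14(2) demands.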

For Fundamental Rights Balance: A hosting provider receiving complaints that a political blog criticizes government officials can't remove the blog just because criticism is harsh. If the content isn't illegal and doesn't violate clearly-stated terms (no threats, no defamation, no misinformation), removal would violate freedom of expression. Terms must balance safety with rights.

For Change Notifications: When Twitter (X) changes its policy to allow/prohibit certain political content, it must notify all users before enforcement begins. Email notifications, prominent in-app banners, or announcements suffice. Users shouldn't discover new rules only when penalized.

For Machine Readability: Terms should be available as structured data (JSON, XML) or use semantic HTML markup, enabling researchers to automatically extract and compare moderation policies across platforms. This supports academic research on platform governance and regulatory oversight.
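A minimal sketch of what "structured data" terms could look like, assuming a hypothetical JSON schema (the DSA does not prescribe any particular format, and the field names here are invented for illustration):

```python
import json

# Hypothetical machine-readable policy summary. The schema
# (provider/version/restrictions) is an assumption for this sketch,
# not a format mandated by the regulation.
terms = {
    "provider": "ExamplePlatform",
    "version": "2024-06-01",
    "restrictions": [
        {"category": "hate_speech",
         "detection": ["automated", "user_reports"],
         "review": "human_on_appeal",
         "sanctions": ["removal", "suspension"]},
        {"category": "copyright",
         "detection": ["hash_matching"],
         "review": "human_on_appeal",
         "sanctions": ["removal"]},
    ],
}

# Publishing terms in a form like this lets researchers and
# regulators automatically extract and diff policies across
# platforms and across versions.
machine_readable = json.dumps(terms, indent=2)
parsed = json.loads(machine_readable)
print(sorted(r["category"] for r in parsed["restrictions"]))  # ['copyright', 'hate_speech']
```

Versioned, structured publication also makes change notifications auditable: a diff between two versions shows exactly which restrictions were added or tightened.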

Practical Example: When Twitch updates its terms to prohibit gambling streams, it must: (1) write the new rule clearly in plain language; (2) explain how gambling streams will be detected; (3) describe penalties for violations; (4) notify streamers 30 days in advance; (5) apply the rule consistently to all streamers; (6) consider proportionate enforcement (warnings before bans); and (7) ensure the rule doesn't disproportionately impact fundamental rights.