Article 25

Online interface design and organisation

1. Providers of online platforms shall not design, organise or operate their online interfaces in a way that deceives or manipulates the recipients of their service or in a way that otherwise materially distorts or impairs the ability of the recipients of their service to make free and informed decisions.

2. The prohibition in paragraph 1 shall not apply to practices covered by Directive 2005/29/EC or Regulation (EU) 2016/679.

3. The Commission may issue guidelines on how paragraph 1 applies to specific practices, notably:

(a) giving more prominence to certain choices when asking the recipient of the service for a decision;

(b) repeatedly requesting that the recipient of the service make a choice where that choice has already been made, especially by presenting pop-ups that interfere with the user experience;

(c) making the procedure for terminating a service more difficult than subscribing to it.

Understanding This Article

Article 25 is THE DSA's general prohibition on 'dark patterns' - manipulative interface design practices that trick, coerce, or manipulate users into making decisions against their interests or without informed consent. This represents a fundamental shift: interface design is no longer purely a business decision but becomes subject to regulatory standards protecting user autonomy.

'Dark patterns' is the common term for design practices that exploit human psychology and behavioral biases to manipulate user behavior. Examples include hiding opt-out buttons, using confusing language, creating false urgency, making cancellation extremely difficult, or pre-selecting expensive options. Article 25 makes these practices illegal when they deceive/manipulate users or materially distort decision-making.

The prohibition covers three prohibited interface approaches: (1) designs that deceive users (making users believe something false about options, consequences, or choices); (2) designs that manipulate users (exploiting psychological vulnerabilities or behavioral biases to steer choices); (3) designs that materially distort or impair free and informed decision-making (even without clear deception/manipulation, if the interface undermines genuine choice).

'Materially distorts or impairs' is a crucial qualifier - minor design choices don't violate Article 25. The interface must SUBSTANTIALLY affect users' ability to make autonomous decisions. This prevents over-regulation of every design choice while catching genuinely manipulative practices.

'Free and informed decisions' is the core value protected. Users must be able to make choices without coercion, deception, or manipulation, with genuine understanding of what they're choosing. If an interface prevents users from making such decisions - hiding information, obscuring choices, exploiting psychological biases - it violates Article 25.

Paragraph 2 creates a carve-out for practices already covered by other EU laws - specifically the Unfair Commercial Practices Directive (UCPD) and GDPR. This prevents overlapping enforcement. If a dark pattern falls under consumer protection law or data protection law, those regimes apply instead of Article 25. However, platform-specific dark patterns not covered by those laws (like interface designs affecting content consumption, account management, or platform usage) fall under Article 25.

Unlike specific prohibitions (Articles 26-28 ban particular practices), Article 25 is a broad, principles-based rule. It doesn't enumerate every prohibited practice - it establishes a general standard against which any interface design can be judged. This flexibility enables enforcement against novel manipulative practices as they emerge, without need for legislative updates.

Key Points

  • Platforms cannot design interfaces that deceive or manipulate users
  • Cannot impair users' ability to make free and informed decisions
  • Covers all aspects of interface design, organisation, and operation
  • THE general anti-dark patterns rule for online platforms
  • Practices already covered by UCPD or GDPR are excluded (to avoid overlap)
  • Commission can issue guidance on specific manipulative practices
  • Enables enforcement against wide range of user-hostile design choices
  • Protects user autonomy and informed decision-making

Practical Application

For Subscription Cancellation: A streaming platform makes subscribing a single click ('Start Free Trial') but makes canceling require navigating through 5 menu levels, answering a questionnaire, dismissing 3 persuasive pop-ups trying to retain you, and finally finding a tiny 'confirm cancellation' link in light gray text. This violates Article 25 - the interface is designed to materially impair free decision-making by making cancellation unreasonably difficult compared to subscription. Compliant design: a cancellation option as prominent and accessible as the subscribe button.
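Compliance teams sometimes quantify this kind of asymmetry by counting user interactions in each flow. A minimal sketch of that idea, using the hypothetical streaming-platform flows above (the step lists and the flagging threshold are illustrative assumptions, not a legal test under Article 25):

```python
# Hedged sketch: count user interactions in the sign-up flow vs. the
# cancellation flow described in the text, then compare them.

subscribe_steps = ["click 'Start Free Trial'"]
cancel_steps = (
    [f"open menu level {i}" for i in range(1, 6)]        # 5 nested menu levels
    + ["complete exit questionnaire"]
    + [f"dismiss retention pop-up {i}" for i in range(1, 4)]  # 3 pop-ups
    + ["find low-contrast 'confirm cancellation' link"]
)

ratio = len(cancel_steps) / len(subscribe_steps)
print(f"cancellation: {len(cancel_steps)} steps; subscription: {len(subscribe_steps)} step")

# Assumed internal review threshold (not a regulatory standard): flag flows
# where exiting is markedly harder than entering.
if ratio > 2:
    print("flag for design review: asymmetric friction")
```

A step count is only a proxy - a single confusing screen can impair choice more than five clear ones - but it makes the subscribe/cancel symmetry idea auditable.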

For Cookie Consent: A website presents cookie consent with a huge green 'Accept All' button and a tiny gray 'Manage Preferences' link buried at the bottom. When users click 'Manage,' they face a complex matrix of toggles with pre-selected opt-ins and confusing categories. Rejecting all cookies requires 10+ clicks while accepting all takes one. This manipulates users toward accepting cookies. However, paragraph 2 excludes this scenario - GDPR already regulates cookie consent, so Article 25 doesn't apply. Enforcement happens under GDPR's consent requirements instead.

For Auto-Renewal: When users sign up for an Amazon Prime free trial, Amazon pre-selects the annual subscription ($139/year) over the monthly one ($14.99/month), presents the annual option more prominently with a 'Best Value!' badge, and uses smaller, less visible text for the monthly option. While arguably manipulative, this might not meet the 'materially distorts' threshold if both options are reasonably visible and users can freely choose. However, if Amazon hides that the subscription auto-renews after the trial, that is clear deception violating Article 25.

For Account Deletion: LinkedIn requires users to navigate through Settings > Account Preferences > Account > Close Account, scroll past 8 screens explaining the benefits of keeping the account, answer a survey, verify their email, and confirm deletion. This multi-step obstacle course materially impairs users' free decision to delete their accounts. Compliant design: a 'Delete Account' option reasonably accessible in account settings, with a single confirmation step and a clear explanation of consequences.

For Pre-Selected Options: TikTok's app installation pre-selects: 'Allow access to contacts,' 'Enable push notifications,' 'Share usage data for personalization.' Users must actively uncheck each to decline. While this nudges toward data sharing, is it 'manipulation' or just default settings? If TikTok makes unchecking difficult (tiny checkboxes, confusing labels), it likely violates Article 25. If defaults are clearly presented with easy opt-out, it's borderline - users can freely choose but are nudged toward sharing.

For False Urgency: Booking.com shows: 'Only 1 room left!' 'Booked 47 times today!' '3 people looking right now!' These create urgency pressuring users to book quickly without fully comparing options. If the claims are false (there are actually 10 rooms, or the 'people looking' figure is fabricated), this is clear deception violating Article 25. If they are true but presented to manipulate decision-making, it's a closer call - truthful information presented in a manipulative way.

For Confusing Language: When Instagram users try to make accounts private, the setting says: 'Allow public engagement to maximize reach and connections' (enabled by default) vs 'Limit profile to approved followers only' (disabled). The language frames private accounts negatively. If this framing materially impairs informed choice - users don't understand they're choosing public profile - it violates Article 25. Compliant: neutral language like 'Public Profile' vs 'Private Profile' with clear explanations.

For Forced Continuity: A gaming platform offers 'free coins' that automatically subscribe users to $9.99/month service after 3 days, disclosed only in tiny text users must scroll to see. This deceives users who think they're just getting free coins, not subscribing. Clear Article 25 violation - interface is designed to trick users into subscription they don't understand they're accepting.

For Hidden Costs: Ticketmaster shows concert tickets for '$50' prominently, but service fees, processing fees, and facility fees (adding $25) only appear at the final checkout step. Users invest time selecting tickets, enter payment info, then discover the higher price - creating pressure to proceed despite the higher cost. This materially distorts decision-making by hiding the true cost until users are psychologically committed. Compliant: display the full price including all fees upfront.
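The arithmetic of drip pricing can be made concrete. A minimal sketch using the hypothetical figures above (the split of the $25 across fee types is an illustrative assumption, not Ticketmaster's actual fee schedule):

```python
# Hedged sketch: the advertised base price vs. the all-in total a user
# only discovers at checkout. Fee names/amounts are illustrative.

base_price = 50.00
mandatory_fees = {"service": 15.00, "processing": 7.50, "facility": 2.50}

total = base_price + sum(mandatory_fees.values())

# Drip pricing: the user commits based on $50.00 and sees $75.00 only at
# the final step. The compliant presentation shows the all-in total first.
print(f"Advertised price: ${base_price:.2f}")
print(f"All-in total (show this upfront): ${total:.2f}")
```

The point is not the math but where it happens: computing the total on the first screen rather than the last removes the sunk-cost pressure the example describes.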

For Disguised Ads: Facebook shows sponsored posts that look identical to organic posts from friends, with only a tiny 'Sponsored' label that is easily missed. Users believe they're seeing friend recommendations when they're actually seeing paid advertisements - deceiving them about the nature of the content. Article 25 violation - though Article 26 also specifically addresses ad transparency, Article 25 provides a backup prohibition on deceptive presentation.

For Confirmshaming: When users try to unsubscribe from a newsletter, the site presents: [Big button] 'Keep me subscribed to amazing offers!' [Tiny link] 'No thanks, I hate saving money.' This manipulates users through guilt and shame. If it materially impairs free choice - users click 'keep subscribed' not because they want the emails but to avoid negative self-perception - it violates Article 25. Compliant: neutral options like 'Subscribe' / 'Unsubscribe' without emotional manipulation.

For Visual Hierarchy Manipulation: Airbnb's cancellation policy selection presents: ['Flexible' in a large, colorful box with a checkmark] ['Moderate' in a medium gray box] ['Strict' in small, plain text]. The visual hierarchy steers hosts toward 'Flexible' even if 'Strict' better serves their interests. If this materially impairs hosts' informed choice about cancellation policy, it's an Article 25 violation. Compliant: present all options with equal visual prominence, letting hosts choose based on substance rather than design manipulation.

For Commission Guidance: The Commission might issue guidance clarifying: '(1) Subscription and cancellation must have similar complexity and accessibility; (2) Default selections of paid options require explicit user confirmation; (3) True information can still violate Article 25 if presented manipulatively; (4) Visual design giving undue prominence to certain choices may constitute manipulation.' This guidance helps platforms understand compliance and enables consistent enforcement.