Article 39

Online interface design and organisation

Providers of very large online platforms and of very large online search engines shall not design, organise or operate their online interfaces in a way that deceives or manipulates the recipients of their service or in a way that otherwise materially distorts or impairs the ability of the recipients of their service to make free and informed decisions.

The conduct referred to in the first paragraph shall include, but not be limited to: (a) giving more prominence to certain choices when asking the recipient of the service for a decision; (b) repeatedly requesting that the recipient of the service make a choice where that choice has already been made, particularly by presenting pop-ups that interfere with the user experience; (c) making the procedure for terminating a service more difficult than subscribing to it.

Understanding This Article

Article 39 addresses 'dark patterns' - deceptive or manipulative interface design practices that undermine user autonomy and informed decision-making. The provision recognizes that interface design choices powerfully shape user behavior: defaults, visual hierarchy, interaction flows, language, and choice architecture can nudge, push, or coerce users into decisions misaligned with their interests. While some design influence is legitimate (and inevitable), Article 39 prohibits crossing the line into deception or manipulation that 'materially distorts or impairs' users' decision-making ability.

The prohibition operates at three levels: (1) interfaces must not 'deceive' users - creating false impressions, hiding information, misrepresenting choices; (2) interfaces must not 'manipulate' users - exploiting psychological vulnerabilities, creating artificial pressure, leveraging emotional triggers to override rational judgment; (3) interfaces must not 'materially distort or impair' users' ability to make 'free and informed decisions' - even without outright deception or manipulation, interfaces cannot be designed to substantially undermine user autonomy. This three-tier structure captures a spectrum of problematic practices, from blatant lies to subtler autonomy violations.

The specific examples in paragraph 2 illustrate common dark patterns: (a) 'Giving more prominence to certain choices' addresses visual hierarchy manipulation - when a platform presents the privacy-invasive option with a large, prominent button while the privacy-protective option appears as a small grey text link, the design architecture steers users toward the platform-preferred choice regardless of users' actual preferences. The problem isn't offering different choices, but designing the presentation to artificially advantage one option. (b) 'Repeatedly requesting a choice already made' targets nagging patterns - when a user declines personalized ads but the platform shows a popup requesting ad consent every session, persistence wears down user resistance. This is particularly problematic when popups 'interfere with the user experience' (blocking content, disrupting tasks), creating an artificial cost to maintaining the user's preferred choice. (c) 'Making termination more difficult than subscription' prohibits asymmetric friction - if subscribing requires two clicks but canceling requires finding an obscure cancellation page, navigating a multi-step process, calling customer service, waiting on hold, and enduring retention offers, the platform exploits 'hassle costs' to trap users in unwanted services.

Critically, the list is non-exhaustive ('include, but not be limited to') - the three examples illustrate principles but don't limit Article 39's scope. Other recognized dark patterns are also prohibited: confirmshaming (making the privacy-protective choice feel socially undesirable - 'No thanks, I don't care about staying connected with friends'), hidden information (placing critical details in fine print or behind multiple clicks), disguised ads (making ads look like content or interface elements), forced action (requiring an unrelated action, like sharing on social media, to access a feature), sneak into basket (adding unwanted items to a purchase), misdirection (visual design focusing attention away from important information), and trick questions (confusing double negatives or misleading language that makes 'yes' appear to mean 'no').

The materiality threshold ('materially distorts or impairs') acknowledges that all design involves choices that influence user behavior - placing the most-used feature prominently is reasonable design, not manipulation. Article 39 targets practices that substantially undermine decision-making capacity, not every design choice with behavioral effects. Guidance emerging from enforcement and audits will clarify the boundaries, but the principle is clear: designs should support informed choice, not subvert it.

Article 39 complements but extends beyond the GDPR's consent requirements. GDPR Article 7 requires that consent be 'freely given' and prohibits making a service conditional on unnecessary data processing, but it focuses specifically on personal data consent. Article 39 applies to all user decisions: privacy choices, acceptance of service terms, feature adoption, purchasing decisions, content engagement. A platform cannot defend a dark pattern by claiming 'the user consented' if the interface design vitiated meaningful consent. Article 39 also extends beyond consumer protection law's anti-fraud provisions by addressing manipulation and autonomy impairment short of outright deception.

Enforcement occurs primarily through Article 36 audits. Independent auditors examine platform interfaces, assessing: Are choice architectures neutral or manipulative? Do visual hierarchies support informed decisions? Are termination flows as simple as subscription? Do popups respect user choices? Auditors may conduct user testing, analyze interaction flows, review interface documentation, and compare against industry best practices. Platforms must demonstrate that their interface design processes include dark pattern assessments, user autonomy considerations, and regular reviews to prevent manipulative practices.

Key Points

  • VLOPs/VLOSEs must not design interfaces that deceive or manipulate users
  • Prohibition on materially distorting or impairing users' ability to make free and informed decisions
  • Specific examples: giving undue prominence to certain choices, repeatedly requesting already-made choices, making cancellation harder than subscription
  • List of prohibited practices is non-exhaustive ('include, but not be limited to')
  • Applies to all interface elements: choice architecture, visual design, interaction flows, terminology, defaults
  • Focuses on 'material' distortion - minor design choices allowed, but manipulative patterns prohibited
  • Complements GDPR's consent requirements with broader autonomy protection
  • Enforcement through Article 36 audits examining interface design practices

Practical Application

For Meta (Facebook - Cookie Consent Interface): Meta's cookie consent interface must comply with Article 39 alongside GDPR requirements. Past interfaces attracted criticism for dark patterns: the 'Accept All' button displayed prominently in blue while 'Manage Preferences' appeared as a grey text link, creating a visual hierarchy favoring the data-maximizing choice. Article 39 analysis: Does this prominence materially distort user choice? Audit examination: (1) Measure click-through rates - if 80% click the prominent button while only 20% find the alternative, the design's effectiveness demonstrates its manipulation potential. (2) Test alternative designs - if equally prominent buttons produce a different distribution (e.g., 50/50), the original design was distorting. (3) Consider user intent - evidence suggests most users prefer privacy when choices are clearly presented; an interface steering them otherwise undermines autonomy. Compliant redesign: Present 'Accept All' and 'Reject All' or 'Manage Preferences' with equal visual prominence (same size, color, position), neutral language (a 'Your Choice' heading rather than 'We value privacy' above Accept All), and clear information about the implications of each choice. Meta must demonstrate that the interface design was tested for neutrality rather than optimized for data collection consent rates.
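The audit's design-comparison step can be sketched as a two-proportion z-test on click-through data: did the original layout produce a significantly higher accept rate than an equal-prominence variant? The sample sizes and rates below are illustrative assumptions, not real audit figures.

```python
import math

def accept_rate_shift(clicks_a, total_a, clicks_b, total_b):
    """Two-proportion z-test: does design B's accept rate differ
    significantly from design A's? Returns (gap, z-score)."""
    p_a = clicks_a / total_a
    p_b = clicks_b / total_b
    pooled = (clicks_a + clicks_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return p_a - p_b, (p_a - p_b) / se

# Hypothetical sample: original design (80% Accept All) vs.
# equal-prominence redesign (50% Accept All), 1,000 users each
gap, z = accept_rate_shift(800, 1000, 500, 1000)
print(f"accept-rate gap: {gap:.0%}, z = {z:.1f}")  # |z| > 1.96 -> significant at 5%
```

A large accept-rate gap that disappears under a neutral design is the kind of quantitative evidence an auditor can cite for material distortion.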

For Amazon (Prime Subscription Cancellation): Amazon Prime illustrates the subscription/cancellation asymmetry problem. Subscribing to Prime: a single 'Start Free Trial' button on product pages, one-click enrollment, immediate access. Canceling Prime historically required: (1) finding account settings, (2) navigating to the Prime membership section, (3) clicking 'Update, cancel and more', (4) selecting 'End membership', (5) navigating through several pages asking 'Are you sure?', (6) declining offers for discounted continued membership, (7) clicking a final cancellation confirmation, and (8) sometimes accepting a future-dated cancellation as the default rather than immediate cancellation. Article 39 analysis: This asymmetry materially impairs the cancellation decision - hassle costs deter rational users from canceling even when they want to. Compliant approach: Cancellation complexity should not exceed subscription complexity. If Prime enrollment is single-button, cancellation should be accessible from account settings with a clear 'Cancel Prime Membership' button, a single confirmation dialog explaining the implications, and immediate cancellation (though Amazon can offer an option to delay if the user prefers). Acceptable: offering a discount to stay (a single offer, easily declined), explaining features that will be lost, confirming the user's choice. Prohibited: a multi-step maze, repeated nagging, hiding the cancellation button, requiring a phone call. Amazon must audit subscription/cancellation symmetry, demonstrating reasonable balance.
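The asymmetric-friction test above reduces, at its simplest, to a step count across the two flows. The flow definitions below are a hypothetical sketch based on the steps listed in the text, not Amazon's actual interface.

```python
# Hypothetical flow definitions for a symmetry audit; step names are illustrative.
subscribe_flow = ["click 'Start Free Trial'"]
cancel_flow = [
    "open account settings",
    "navigate to Prime membership section",
    "click 'Update, cancel and more'",
    "select 'End membership'",
    "confirm through 'Are you sure?' pages",
    "decline retention offers",
    "click final cancellation confirmation",
]

def symmetry_ratio(subscribe, cancel):
    """Ratio of cancellation steps to subscription steps; a large ratio
    signals the asymmetric friction Article 39(c) targets."""
    return len(cancel) / len(subscribe)

print(f"cancellation takes {symmetry_ratio(subscribe_flow, cancel_flow):.0f}x "
      "the steps of subscription")
```

Step counting is only a first-order proxy; a real audit would also weight each step's time cost and discoverability.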

For TikTok (For You Feed Algorithm Consent): TikTok's For You feed uses a personalized recommendation algorithm. Article 39 applies when TikTok asks users about algorithm preferences. Prohibited approaches: a pre-checked 'I want personalized content' checkbox with a small grey link to the alternative chronological feed (asymmetric prominence); a popup every session asking 'Switch to personalized feed for better experience? [Yes] [No]' after the user selected the chronological feed (nagging); framing the choice as 'Do you want exciting personalized videos [Yes, large button] or a boring chronological feed [No thanks, small text]?' (confirmshaming through manipulative language). Compliant approach: During onboarding, present the feed choice with equal prominence: 'Choose Your Feed Experience: [For You - Personalized] [Following - Chronological]' with neutral descriptions of each. Remember the user's choice indefinitely without re-asking. Make switching easy from settings (not buried). Article 40 provides an additional requirement for a non-personalized feed option, but Article 39 governs the interface design around that choice - the presentation must support an informed decision, not manipulate toward platform-preferred personalization.

For LinkedIn (Job Application Upsells): LinkedIn offers premium features during the job application process. Dark pattern concern: a user applying for a job encounters a 'Stand out to recruiters with Premium' popup requiring action - the user must either (1) subscribe to Premium, or (2) find a small 'X' button in the corner to decline (with text like 'Continue without standing out'). This presents a forced action during a task (applying for a job) with confirmshaming language. Article 39 analysis: The popup interferes with the user experience by interrupting the job application task. The language ('stand out' vs. 'continue without standing out') manipulates through loss framing and social comparison. Compliant approach: Offer Premium as an optional enhancement ('Try Premium for advanced features') with an equally prominent 'No Thanks, Continue Application' button and neutral language. Don't interrupt the application flow - place the offer after the application is submitted, or in a sidebar that doesn't block the interface. The language shouldn't imply that users without Premium are disadvantaged beyond the features' actual value. A user who declines shouldn't see the same offer repeatedly (respect the choice). LinkedIn must demonstrate that its upsell interface design balances business interests with user autonomy.

For Twitter/X (Verification Badge Purchase): Twitter/X's paid subscription for a verification badge involves interface design choices. Compliant: prominently offering the subscription with a clear description of benefits and an easy purchase flow. Potentially problematic: making non-subscribers' experience materially worse to pressure subscription (e.g., drastically limiting free features that previously existed, creating artificial friction for non-subscribers); repeatedly showing subscription popups after a user declined (nagging); using manipulative language suggesting non-subscribers' voices don't matter ('Subscribe to be heard' vs. 'Subscribe for additional features'). Article 39 doesn't prohibit offering premium subscriptions or using visual distinctions (verified badges), but it prohibits manipulative pressure tactics that materially distort a free user's decision about whether the subscription's value justifies its cost. If the platform degrades the free experience artificially (not based on actual costs), the manipulative design may violate Article 39 depending on the severity and materiality of the impairment to users' autonomous choice.

For YouTube (Premium Upsell Interface): YouTube offers a Premium subscription that removes ads. Article 39 considerations: Compliant: displaying the ad-supported free tier and ad-free Premium tier with a clear comparison, a reasonable frequency of Premium offers, and easy dismissal of offers. Potentially problematic: showing a Premium popup before every video that requires action to dismiss, materially impairing free users' experience to pressure subscription (repeatedly requesting a choice); hiding the dismissal button while the subscription button is prominent (asymmetric prominence); framing the free tier negatively ('Continue watching with annoying interruptions' rather than the neutral 'Continue with ad-supported viewing'). Article 39 doesn't prohibit business model differences between free and paid tiers, but it prohibits interface designs that manipulate users rather than letting them make an informed choice based on the actual value proposition. YouTube must ensure that upsell frequency, prominence, and framing support autonomous decision-making.
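The 'respect a decline' principle running through these examples can be sketched as a frequency-cap policy: once a user dismisses an upsell, the offer is suppressed for a cool-down period instead of reappearing every session. The 90-day window and class design are illustrative assumptions, not a legal threshold or any platform's actual logic.

```python
from datetime import datetime, timedelta

class UpsellPolicy:
    """Nagging-avoidance sketch: suppress a declined offer for a cool-down."""
    COOLDOWN = timedelta(days=90)  # illustrative window, not a regulatory value

    def __init__(self):
        self.declined_at = {}  # user_id -> datetime of most recent decline

    def record_decline(self, user_id, when):
        self.declined_at[user_id] = when

    def may_show(self, user_id, now):
        last = self.declined_at.get(user_id)
        return last is None or now - last >= self.COOLDOWN

policy = UpsellPolicy()
policy.record_decline("u1", datetime(2024, 1, 1))
print(policy.may_show("u1", datetime(2024, 1, 15)))  # False: respect the decline
print(policy.may_show("u1", datetime(2024, 6, 1)))   # True: cool-down elapsed
```

The design choice worth noting: the decline, not the offer, drives the state, so the cap cannot be reset by simply re-rendering the popup.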

For Gaming Platforms (In-App Purchase Design): VLOP gaming platforms designing in-app purchase interfaces must consider Article 39, especially for minors (this relates to Article 34's risk assessment of 'negative effects on minors'). Particularly problematic dark patterns: complexity obfuscation (pricing in game currency that hides real-money costs - selling '100 gems for €0.99' when an item costs 5,000 gems, forcing mental math); pressure tactics ('Limited time offer: 2 minutes remaining' with a countdown timer, creating artificial urgency); confirmshaming ('Real gamers upgrade' vs. 'Stay weak'); and making free progression extremely slow while paid progression is normal (artificial friction). For minor users especially, Article 39 requires: clear display of real-money costs, age-appropriate purchase interfaces, neutral language, removal of manipulative pressure tactics, and reasonable progression in free mode. A platform cannot defend dark patterns targeting minors by claiming 'it was their choice' when the interface design manipulated that choice. Article 36 audits should specifically examine in-app purchase interfaces for dark patterns, particularly those targeting or affecting minors.
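The currency-obfuscation example can be made concrete with the arithmetic the interface hides: translating the in-game price back into real money. The figures come from the example above; the function itself is an illustrative sketch.

```python
def real_cost(item_price_gems, gems_per_pack, pack_price_eur):
    """Real-money cost of an item priced in game currency,
    assuming whole currency packs must be purchased (ceiling division)."""
    packs_needed = -(-item_price_gems // gems_per_pack)
    return packs_needed * pack_price_eur

# '100 gems for €0.99' with an item priced at 5,000 gems
print(f"€{real_cost(5000, 100, 0.99):.2f}")  # -> €49.50, the figure the interface obscures
```

Displaying that real-money figure next to the gem price is the kind of clear cost disclosure the paragraph above calls for.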

For Auditors (Interface Design Testing): Independent auditors assessing Article 39 compliance conduct systematic interface analysis:

(1) Choice architecture review - document all significant user decision points (consent flows, subscription/cancellation, privacy settings, feature adoption); analyze visual prominence (button sizes, colors, positions); examine language (neutral vs. leading); test defaults (what happens if the user takes no action).

(2) User flow testing - navigate subscription and cancellation paths, measuring complexity symmetry; identify nagging patterns by tracking popup frequency after a user declines; document interface elements that interrupt user tasks.

(3) Comparative analysis - benchmark against industry best practices for neutral interface design; compare the platform's interface evolution (did dark patterns worsen over time?); test alternative designs to demonstrate whether the original design materially affected user choices.

(4) Documentation review - examine the platform's interface design process (are dark patterns proactively avoided?); review user research and A/B testing (is the platform optimizing for user autonomy or platform interests?); assess governance (who approves interface changes, and what criteria apply?).

(5) Reporting - state findings with specificity, e.g.: 'Cookie consent interface violates Article 39 - the 400px blue Accept All button vs. the 200px grey Manage Preferences link in tertiary position creates asymmetric prominence materially favoring the data-maximizing choice, demonstrated by an 82% click-through to Accept All that decreased to 41% in audit testing with an equal-prominence design.' Auditor recommendations drive interface redesigns toward compliance.
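The prominence review in step (1) can be partially automated by comparing measurable attributes of the options at each decision point. The data model and the 1.5x size threshold below are illustrative assumptions, not audit standards.

```python
from dataclasses import dataclass

@dataclass
class ChoiceOption:
    label: str
    width_px: int          # rendered button/link width
    primary_style: bool    # filled, colored button vs. plain text link

def asymmetric_prominence(a: ChoiceOption, b: ChoiceOption,
                          max_size_ratio: float = 1.5) -> bool:
    """Flag a decision point whose two options differ materially in
    size or styling; the threshold is an illustrative choice."""
    ratio = max(a.width_px, b.width_px) / min(a.width_px, b.width_px)
    return ratio > max_size_ratio or a.primary_style != b.primary_style

# The cookie-consent example from the audit finding above
accept = ChoiceOption("Accept All", 400, True)
manage = ChoiceOption("Manage Preferences", 200, False)
print(asymmetric_prominence(accept, manage))  # -> True: size and style both asymmetric
```

Such a check only screens for candidates; whether a flagged asymmetry 'materially distorts' choice still requires the behavioral testing described above.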