Chapter 3: Additional Obligations for Very Large Platforms
1. Providers of very large online platforms and of very large online search engines shall put in place reasonable, proportionate and effective mitigation measures, tailored to the specific systemic risks identified pursuant to Article 34, with particular consideration to the impacts of such measures on fundamental rights. The mitigation measures may include, where applicable, the following:
(a) adapting the design, features or functioning of their services, including the online interface;
(b) adapting their content moderation, recommender systems, advertising systems, terms and conditions or their decision-making processes;
(c) taking targeted measures aimed at limiting the display of, or expeditiously removing or disabling access to, certain specific content, such as illegal content, or content incompatible with their terms and conditions, including by means of expedited procedures;
(d) testing and adapting their algorithms, including recommender systems;
(e) taking measures aimed at preventing or limiting manipulative or exploitative use of their service;
(f) taking measures aimed at enhancing the user's awareness and understanding of manipulative techniques and exploitation;
(g) taking measures aimed at enhancing the user's ability to make an informed decision, enabling and facilitating user choice and control over exposure to information;
(h) reinforcing internal processes, resources, testing, documentation or supervision specifically dedicated to mitigating systemic risks;
(i) initiating or increasing cooperation with trusted flaggers appointed in accordance with Article 22, with other providers of online platforms and of online search engines, public authorities, and civil society organisations, in particular with a view to fostering the monitoring and development of codes of conduct as provided for in Article 45, and of crisis protocols as provided for in Article 48;
(j) taking measures aimed at ensuring a better understanding of, and to publicly explain, aspects of their functioning that contribute to systemic risks as identified pursuant to Article 34, in particular the recommender systems or content moderation systems.
2. The Commission may issue general guidelines on the application of paragraph 1, in accordance with the advisory procedure referred to in Article 88(1). When preparing those guidelines, the Commission shall organise public consultations.
Understanding This Article
Article 35 operationalizes Article 34's risk assessments: identifying risks is not enough; platforms must take concrete action to mitigate them. This is the 'teeth' of VLOP regulation: if Facebook's Article 34 assessment finds that News Feed amplifies polarization, Article 35 requires Facebook to actually DO something about it. The requirement for 'reasonable, proportionate and effective' measures creates a legal standard regulators can enforce.
Paragraph 1 mandates that platforms 'put in place' (implement, not merely plan) mitigation measures that are: (1) Reasonable - not requiring impossible or absurd interventions, but going beyond minimal effort; (2) Proportionate - scaled to the severity of the risk, not a sledgehammer for a nail; (3) Effective - actually reducing the identified risks, not just theater; (4) Tailored - specific to the platform's identified risks, not copy-paste generic measures. The requirement to consider 'impacts on fundamental rights' guards against overly aggressive mitigation that suppresses expression or discriminates.
Paragraph 1 lists ten example measures (not exhaustive; platforms can implement others):
(a) Adapting design, features, functioning, or interface - fundamental platform changes. If the assessment finds infinite scroll increases addictiveness and harms mental health, mitigation might add usage time limits or scroll stopping points. If autoplay drives harmful content consumption, disable it.
(b) Adapting content moderation, recommender systems, advertising, terms, or decision-making - operational changes. If the recommendation algorithm amplifies extremism, adjust it to promote diverse viewpoints. If ad targeting enables discrimination, restrict sensitive categories. If terms inadequately address harassment, strengthen the policies.
(c) Targeted measures limiting display of, or expeditiously removing, specific illegal or prohibited content - enhanced enforcement. If the assessment finds certain harmful content categories systematically evade moderation, implement specialized detection. Not general censorship, but targeted removal of genuinely harmful content.
(d) Testing and adapting algorithms, especially recommenders - the algorithmic accountability measure. If YouTube's recommendations create 'rabbit holes', test adjustments that promote diversity and reduce sensationalism. Implement A/B testing of algorithm variations to verify harm reduction.
(e) Preventing manipulative or exploitative use - anti-abuse measures. If platform features enable coordinated disinformation campaigns, implement detection and disruption. If interface dark patterns manipulate users, redesign for clarity.
(f) Enhancing user awareness of manipulation - user education. If the assessment finds users susceptible to deepfakes or fake accounts, implement literacy campaigns and labeling systems. Inform users about manipulation tactics.
(g) Enhancing user choice and control - empowerment measures. If the algorithm creates filter bubbles, give users the ability to see diverse content, adjust preferences, and understand why content is shown. Enable informed user agency rather than algorithmic determinism.
(h) Reinforcing internal processes, resources, testing, documentation, and supervision - institutional measures. If risk management is inadequate, hire more trust & safety staff, improve documentation, enhance testing, and strengthen oversight. Build organizational capacity for sustained risk mitigation.
(i) Increasing cooperation with trusted flaggers, other platforms, authorities, and civil society - collaborative measures. Participate in cross-platform information sharing, contribute to industry codes of conduct, and engage crisis protocols. Platforms cannot solve systemic problems alone.
(j) Better understanding and publicly explaining functioning that contributes to risks - transparency measures. Publish research on how algorithms work, share data with researchers, and explain ranking and recommendation logic publicly. Convert black-box systems into understandable processes.
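Measure (d)'s call for A/B testing of algorithm variations can be illustrated with a minimal sketch. Everything here is hypothetical: the harm_score metric, the log schema, and the variant names stand in for whatever a platform actually measures and deploys.

```python
import hashlib
from statistics import mean

def assign_variant(user_id, variants):
    """Deterministically bucket a user into one experiment arm via hashing."""
    digest = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return variants[digest % len(variants)]

def compare_variants(exposure_log, variants):
    """Average a per-session harm metric for each algorithm variant.

    Each log row is {"user_id": ..., "harm_score": ...}; harm_score is a
    hypothetical 0-1 measure of exposure to borderline content.
    """
    buckets = {v: [] for v in variants}
    for row in exposure_log:
        buckets[assign_variant(row["user_id"], variants)].append(row["harm_score"])
    return {v: mean(scores) if scores else None for v, scores in buckets.items()}

# Synthetic usage: two recommender variants, 100 logged sessions.
log = [{"user_id": f"u{i}", "harm_score": 0.1 * (i % 5)} for i in range(100)]
results = compare_variants(log, ["control", "diversified"])
print(results)  # mean harm score per arm; lower is better
```

In practice the comparison would use proper significance testing and guardrail metrics, but the core idea is exactly this: run the candidate algorithm on a user slice and check whether the harm indicator actually drops.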
The 'may include' phrasing means these ten measures are neither exclusive nor mandatory - platforms choose measures appropriate to their specific risks. LinkedIn, addressing employment discrimination risks, might focus on measures (b) and (d) (adapting and testing its algorithms for bias), (h) (hiring a bias-auditing team), and (j) (explaining job recommendation logic). TikTok, addressing minor safety, might emphasize (a) (design changes for child protection), (f) (educating teens about harmful trends), and (e) (preventing predatory behavior).
Paragraph 2 authorizes the Commission to issue general guidelines on applying Article 35, following public consultation. This lets the Commission clarify what constitutes 'reasonable, proportionate and effective' mitigation without prescribing specific measures. Guidelines might address: how to measure mitigation effectiveness, when measures are proportionate, how to balance risk reduction with rights protection, and what documentation regulators expect.
Article 35's power lies in the accountability flip: platforms can no longer simply document risks while continuing business as usual. If the assessment identifies a harm, the platform must act. If mitigation proves ineffective, the platform must strengthen it. Article 37 audits verify compliance, and Article 74 penalties sanction non-compliance. This creates an iterative cycle: assess risks → implement mitigation → audit effectiveness → adjust measures → reassess risks. Systemic risk management becomes an ongoing operational obligation, not a compliance checkbox.
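The iterative cycle can be sketched as a simple control loop. The assess, mitigate, and audit callables below are placeholders for a platform's real processes, and the severity numbers and threshold are invented for illustration.

```python
def risk_management_cycle(assess, mitigate, audit, threshold=0.2, max_rounds=5):
    """Run assess → mitigate → audit rounds until residual risk is acceptable.

    assess()        -> list of (risk_name, severity) pairs
    mitigate(risks) -> applies measures for the identified risks
    audit(risks)    -> dict of residual severity per risk after mitigation
    All three are placeholders for a platform's real processes.
    """
    for round_no in range(1, max_rounds + 1):
        risks = assess()
        mitigate(risks)
        residual = audit(risks)
        if all(sev <= threshold for sev in residual.values()):
            return round_no, residual  # risks brought under control
    return max_rounds, residual  # still too high: escalate the measures

# Toy illustration: each round of "mitigation" halves every severity.
state = {"polarization": 0.8, "minor_safety": 0.5}

def assess():
    return list(state.items())

def mitigate(risks):
    for name, _ in risks:
        state[name] /= 2

def audit(risks):
    return dict(state)

rounds, residual = risk_management_cycle(assess, mitigate, audit)
```

The point of the loop structure is the one Article 35 makes: mitigation is not a single action but a feedback process that only terminates when the audit shows the measures are effective.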
Key Points
VLOPs/VLOSEs must implement 'reasonable, proportionate and effective' mitigation for risks identified under Article 34
Measures must consider fundamental rights impacts
10 example measures: adapt design/features, modify algorithms, enhance content moderation, limit harmful content display, prevent manipulation, increase user awareness/control, strengthen internal processes, cooperate with trusted flaggers/authorities, explain systems publicly
Not prescriptive - platforms have flexibility to choose appropriate measures for their risks
Obligation is outcome-focused: measures must be effective, not just performative
Commission may issue guidelines on application
Mitigation must be tailored to platform's specific identified risks, not generic compliance
Creates accountability: platforms can't just identify risks, they must ACT to address them
Practical Application
For Facebook (Meta VLOP - Comprehensive Mitigation): Following an Article 34 risk assessment identifying News Feed polarization, ad targeting discrimination, and teen mental health impacts, Facebook must implement tailored mitigation:
1. Algorithm Adjustments (measures b, d): Modify the News Feed algorithm to reduce engagement optimization from divisive content, increase diversity injection showing users content outside echo chambers, test algorithm variants measuring polarization impacts, implement safeguards preventing radicalization pathways.
2. Advertising Restrictions (measure b): Prohibit or severely restrict micro-targeting for political ads, eliminate discriminatory ad targeting options (age/gender restrictions for housing/employment), require advertiser verification for sensitive categories, increase ad transparency.
3. Minor Protection (measures a, f, g): Implement default-private accounts for under-18s, restrict DM functionality for minors, add usage time warnings, create 'take a break' prompts after extended use, enhance parental controls, educate teens about harmful content, enable content sensitivity controls.
4. Content Moderation Enhancement (measures c, h): Increase investment in hate speech detection, improve appeals processes, hire more human reviewers for nuanced decisions, accelerate removal of coordinated inauthentic behavior.
5. Transparency (measure j): Publish regular reports on algorithmic recommendations, explain News Feed ranking factors, provide researchers API access to study platform impacts.
6. User Empowerment (measure g): Enable a chronological feed option, give users control over recommendation preferences, allow users to see why content is shown, implement 'why am I seeing this ad?' explanations.
Document all measures, measure effectiveness quarterly, adjust if insufficient. If polarization persists despite algorithm changes, implement more aggressive interventions.
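The 'diversity injection' idea can be sketched as slot-based interleaving: every k-th feed position is reserved for an item from outside the user's usual engagement cluster. The item labels and the value of every_k are illustrative assumptions, not any platform's actual mechanism.

```python
def inject_diversity(ranked_feed, diverse_pool, every_k=4):
    """Interleave out-of-cluster items into an engagement-ranked feed.

    ranked_feed  - items ordered by the engagement-optimized ranker
    diverse_pool - candidate items from outside the user's echo chamber
    every_k      - reserve every k-th slot for a diverse item (assumption)
    """
    feed = []
    ranked, diverse = iter(ranked_feed), iter(diverse_pool)
    position = 1
    while True:
        source = diverse if position % every_k == 0 else ranked
        item = next(source, None)
        if item is None:
            # One stream ran dry: fall back to the other, stop when both are empty.
            item = next(ranked if source is diverse else diverse, None)
            if item is None:
                break
        feed.append(item)
        position += 1
    return feed

# Toy usage: six ranked items, two diverse items, every third slot reserved.
feed = inject_diversity([f"r{i}" for i in range(6)], ["d1", "d2"], every_k=3)
print(feed)  # ['r0', 'r1', 'd1', 'r2', 'r3', 'd2', 'r4', 'r5']
```

A fixed-slot scheme like this is easy to audit (the diversity quota is directly observable in the output), which matters when effectiveness must be demonstrated to regulators.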
For YouTube (Google VLOP - Algorithm-Centric Mitigation): YouTube's Article 34 assessment identifies recommendation 'rabbit holes', extremism amplification, and child safety risks. Article 35 mitigation:
1. Recommendation Algorithm Redesign (measures b, d): Reduce the 'rabbit hole' effect by limiting progression toward increasingly extreme content, implement 'circuit breakers' preventing consecutive problematic recommendations, increase authoritative source promotion for sensitive topics (health, elections, breaking news), test algorithm variants prioritizing video quality over engagement for borderline content.
2. Child Protection (measures a, c, e, f): Enhance YouTube Kids content filtering, improve age verification for the main platform, prohibit personalized ads to minors, expedite removal of child exploitation content using specialized detection, educate families about child online safety.
3. Borderline Content Strategy (measure c): Create a 'borderline content' category (not illegal but harmful) and reduce recommendations without outright removal, preserving expression while limiting amplification.
4. Transparency & User Control (measures g, j): Enable a 'non-personalized recommendations' feed showing diverse content without user profiling, explain recommendation logic publicly, allow users to reset recommendation history, provide content controls for sensitive topics.
5. Monetization Reforms (measure b): Demonetize content violating policies faster, prevent algorithmic amplification of policy-violating content, reduce financial incentives for sensational borderline content.
6. Crisis Response (measure i): Participate in cross-platform information sharing during elections and emergencies, coordinate with authorities on breaking situations, implement rapid response protocols.
Continuously test algorithm changes measuring recommendation quality, user satisfaction, and harm indicators. If 'rabbit hole' effects persist, increase intervention aggressiveness.
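A recommendation 'circuit breaker' of the kind described above might cap how many borderline items can appear in a row. This is a minimal sketch; the is_borderline classifier and the max_streak cap are assumptions for illustration.

```python
def apply_circuit_breaker(candidates, is_borderline, max_streak=1):
    """Filter a ranked candidate list so at most `max_streak` borderline
    items appear consecutively.

    Skipped items are simply dropped here; a real system would more
    likely backfill from safer candidates instead.
    """
    output, streak = [], 0
    for item in candidates:
        if is_borderline(item):
            if streak >= max_streak:
                continue  # breaker trips: skip this recommendation
            streak += 1
        else:
            streak = 0  # a safe item resets the breaker
        output.append(item)
    return output

# Toy usage: 'B' marks borderline videos, 'S' safe ones.
recs = ["S1", "B1", "B2", "B3", "S2", "B4"]
safe_feed = apply_circuit_breaker(recs, lambda v: v.startswith("B"))
print(safe_feed)  # ['S1', 'B1', 'S2', 'B4']
```

The design choice mirrors the 'reduce without removing' strategy: borderline content stays available, but the breaker prevents the escalating chains that create rabbit holes.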
For TikTok (Chinese-Owned VLOP - Minor Safety Focus): TikTok's Article 34 assessment identifies For You algorithm risks for minors, data security concerns, and potential foreign influence. Article 35 mitigation:
1. Minor-Focused Algorithm (measures a, b, d): Create a separate For You algorithm for users under 18, reducing exposure to potentially harmful viral trends, limiting addictive design elements for minors, testing age-appropriate content recommendations, preventing dangerous challenge amplification.
2. Age Verification & Access Controls (measures a, e): Implement robust age verification at registration, create a restricted mode for minors limiting features, prohibit adult strangers from DMing minors, disable location sharing for under-18s, require parental consent for accounts under age thresholds.
3. Harmful Trend Detection (measures c, f): Build specialized systems detecting dangerous viral challenges (self-harm, risky behavior), expedite removal of trend-related content, proactively educate users when dangerous trends emerge, display warnings about harmful challenges.
4. Data Localization (measure a): Address EU data security concerns by localizing EU user data storage in EU data centers, limiting data transfers to China, implementing strict access controls, and documenting data flows transparently.
5. Content Moderation Transparency (measures h, j): Publish detailed reports on content removal, especially for EU users, explain how the For You algorithm works, demonstrate that the Chinese government does not influence EU content moderation, undergo independent audits of moderation processes.
6. Mental Health Safeguards (measures f, g): Implement usage time limits for teens, create breaks after extended viewing, filter content harmful to mental health (eating disorders, self-harm), provide mental health resources, enable more granular content controls.
Given geopolitical scrutiny, TikTok's mitigation must be more aggressive and transparent than that of domestic platforms, with regular third-party audits verifying Chinese government non-interference.
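The usage-time safeguards in measure 6 boil down to session accounting. A minimal sketch, assuming a tracker that decides when to surface a break prompt; the 60-minute prompt interval and 120-minute daily cap are invented thresholds, not TikTok's actual values.

```python
from dataclasses import dataclass

@dataclass
class SessionTracker:
    """Accumulate watch time and decide when to prompt a break.

    Thresholds are illustrative assumptions: a break prompt every
    `break_after_min` minutes, and a hard daily cap for minors.
    """
    is_minor: bool
    break_after_min: int = 60
    daily_cap_min: int = 120
    watched_min: int = 0
    prompts_shown: int = 0

    def record(self, minutes: int) -> str:
        self.watched_min += minutes
        if self.is_minor and self.watched_min >= self.daily_cap_min:
            return "cap_reached"        # minors: stop autoplay for the day
        if self.watched_min >= self.break_after_min * (self.prompts_shown + 1):
            self.prompts_shown += 1
            return "show_break_prompt"  # nudge: 'take a break?'
        return "ok"

teen = SessionTracker(is_minor=True)
print(teen.record(45))   # ok
print(teen.record(20))   # show_break_prompt (65 min watched)
print(teen.record(60))   # cap_reached (125 min >= 120 cap)
```

Keeping the policy in a small, testable component like this also serves measure 5: the thresholds and their enforcement can be documented and shown to auditors directly.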
For Twitter/X (Post-Musk Mitigation): Twitter/X's Article 34 assessment identifies risks from staff reductions, policy changes, and verification system changes. Article 35 mitigation:
1. Moderation Capacity Restoration (measure h): If the assessment finds moderation staff cuts increased harmful content prevalence, rehire sufficient moderators or improve automated systems to compensate for human reviewer reductions. Cost-cutting cannot be allowed to increase risk.
2. Verification System Reform (measure b): If 'Twitter Blue' paid verification enables impersonation and fraud, implement stronger verification requirements, distinguish 'paid subscriber' from 'verified identity', restore prior verification for public figures and officials, prevent verified accounts from serial violations.
3. Policy Enforcement Consistency (measures b, c): If policy changes increased hate speech and harassment, strengthen enforcement, reverse problematic policy relaxations, expedite removal of coordinated harassment campaigns, protect vulnerable users from sustained abuse.
4. Misinformation Mitigation (measures f, j): If community notes are insufficient to combat misinformation, supplement them with additional measures such as reduced amplification of misleading content, clearer labeling, increased authoritative source promotion during crises, and user education about information literacy.
5. Algorithmic Timeline Controls (measures d, g): Provide users a clear choice between algorithmic and chronological timelines, explain algorithmic ranking factors, test algorithm variants reducing polarization and outrage amplification, enable user customization of timeline preferences.
6. Transparency About Changes (measure j): Publish detailed transparency reports showing pre- and post-ownership changes in content moderation metrics, harmful content prevalence, and policy enforcement statistics, demonstrating DSA compliance despite operational changes.
Twitter/X's unique challenge: demonstrating that ownership changes and a 'free speech absolutist' philosophy don't conflict with DSA obligations. If the assessment shows increased risks from recent changes, Article 35 may require reversing some of them.
For LinkedIn (Professional Platform Mitigation): LinkedIn's Article 34 assessment identifies job algorithm bias risks, employment discrimination potential, and professional misinformation. Article 35 mitigation:
1. Algorithm Bias Testing (measures b, d, h): Regularly audit job recommendation algorithms for age/gender/race bias, test algorithm variants for fairness, document the bias testing methodology, hire diversity and fairness experts, implement algorithmic fairness safeguards preventing discriminatory job visibility.
2. Recruiter Tool Restrictions (measure b): Prohibit or restrict discriminatory search filters, prevent employers from targeting job ads by protected characteristics, implement warnings when recruiter searches appear discriminatory, audit recruiter tool usage for discrimination patterns.
3. Credential Verification (measures e, j): Enhance verification of professional credentials, degrees, and employment history, implement fraud detection for fake qualifications, partner with universities and companies for credential authentication, increase transparency about verification processes.
4. Professional Harassment Prevention (measure c): Implement specialized detection for professional harassment (unwanted recruiting, inappropriate messages, workplace bullying spillover), expedite removal of harassing content, protect users reporting workplace issues, prevent retaliation coordination.
5. Employment Scam Detection (measures c, e): Build systems detecting fake job postings, work-from-home scams, and pyramid schemes disguised as opportunities, remove fraudulent listings expeditiously, warn users about common employment fraud.
6. Mental Health Considerations (measures f, g): Address 'comparison anxiety' from curated professional success by diversifying feed content, adding context about behind-the-scenes struggles, enabling users to customize the professional/personal content balance.
LinkedIn's mitigation focuses more on labor market fairness and professional integrity than on viral content moderation.
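The bias audit in measure 1 can be sketched as an exposure-parity check: compare how often each demographic group's eligible members are actually shown a job posting, and flag ratios below the 'four-fifths' rule of thumb. The group labels, log schema, and 0.8 threshold are illustrative assumptions.

```python
from collections import defaultdict

def exposure_rates(impression_log):
    """Per-group rate at which eligible candidates were shown a posting.

    Log rows look like {"group": ..., "shown": bool} - one row per
    eligible candidate (hypothetical schema for illustration).
    """
    shown, total = defaultdict(int), defaultdict(int)
    for row in impression_log:
        total[row["group"]] += 1
        shown[row["group"]] += int(row["shown"])
    return {g: shown[g] / total[g] for g in total}

def parity_flags(rates, threshold=0.8):
    """Flag groups whose exposure falls below `threshold` times the
    best-served group's rate (the 'four-fifths' rule of thumb)."""
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

# Synthetic audit: candidates over 40 see the posting half as often.
log = (
    [{"group": "under_40", "shown": True}] * 80
    + [{"group": "under_40", "shown": False}] * 20
    + [{"group": "over_40", "shown": True}] * 50
    + [{"group": "over_40", "shown": False}] * 50
)
rates = exposure_rates(log)   # {'under_40': 0.8, 'over_40': 0.5}
print(parity_flags(rates))    # {'under_40': False, 'over_40': True}
```

An audit like this also feeds measure (j): the rates and flags are exactly the kind of documented, reproducible evidence a regulator or third-party auditor would expect to see.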