Chapter 3 | Additional Obligations for Very Large Platforms
1. Providers of very large online platforms and of very large online search engines shall identify, analyse and assess any systemic risks in the Union stemming from the design, functioning or use, including manipulative or exploitative use, of their services or related technological systems, or from the specific characteristics of the content disseminated on their services, in particular:
(a) the dissemination of illegal content through their services;
(b) any actual or foreseeable negative effects for the exercise of fundamental rights, in particular the fundamental rights to respect for private and family life, freedom of expression and information, the prohibition of discrimination and the rights of the child;
(c) any actual or foreseeable negative effects on civic discourse and electoral processes, and public security;
(d) any actual or foreseeable negative effects in relation to gender-based violence, the protection of public health and minors and serious negative consequences to the person's physical and mental well-being.
2. When conducting the risk assessments pursuant to paragraph 1, providers of very large online platforms and of very large online search engines shall take into account, in particular, how the following factors influence the systemic risks referred to in that paragraph:
(a) the content moderation systems of the provider, including algorithmic decision-making and content recommendation systems;
(b) the terms and conditions of use;
(c) systems for selecting and presenting advertisements, where applicable;
(d) data-related practices of the provider.
3. Providers of very large online platforms and of very large online search engines shall carry out the risk assessments referred to in paragraph 1 at least once a year, and in any event prior to deploying any new functionality that may have a critical impact on the systemic risks identified pursuant to paragraph 1.
Understanding This Article
Article 34 establishes the foundational VLOP obligation: mandatory systemic risk assessment. Unlike traditional platform moderation, which focuses on individual pieces of content, Article 34 requires VLOPs to analyze how their services' design, algorithms, and scale create society-wide risks. This represents a fundamental shift from 'is this specific post illegal?' to 'does our recommendation algorithm systematically amplify health misinformation?' It is the difference between removing individual hate speech posts and analyzing whether the platform's design incentivizes engagement through divisive content.
Paragraph 1 mandates the identification, analysis, and assessment of 'systemic risks in the Union' arising from service design, functioning, use (including manipulative or exploitative use), related technological systems, or the characteristics of the content disseminated. Four specific risk categories must be assessed:
(a) Illegal content dissemination - not merely the presence of illegal content (every platform has some), but the systemic factors enabling its spread. Does the algorithm amplify copyright-infringing content because it is engaging? Do platform features enable coordinated child exploitation networks?
(b) Fundamental rights impacts - particularly privacy, freedom of expression, non-discrimination, and children's rights. Does content moderation disproportionately silence minority voices? Do recommendation algorithms produce discriminatory outcomes? Does data collection undermine privacy rights?
(c) Civic discourse, electoral processes, and public security impacts - the 'democracy' risks. Do algorithms create polarization bubbles? Does the platform enable foreign election interference? Do viral misinformation patterns undermine public security?
(d) Gender-based violence, public health, minor protection, and physical/mental well-being - the 'harm' risks. Does the platform enable systematic harassment of women? Do recommendation algorithms push eating-disorder content to vulnerable teens? Does misinformation undermine vaccine uptake?
The 'in particular' phrasing means these four categories are not exhaustive - platforms must also assess any other systemic risks relevant to their services. LinkedIn might assess risks to labor market fairness; dating apps might assess risks of intimate image abuse; gaming platforms might assess risks of predatory monetization targeting minors.
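This open-ended category list can be sketched as a simple checklist structure. This is an illustrative model only; every name in it (RiskCategory, RiskAssessmentChecklist, the example platform) is hypothetical rather than anything prescribed by the Regulation:

```python
from dataclasses import dataclass, field
from enum import Enum

# The four risk categories named in Article 34(1)(a)-(d), condensed.
class RiskCategory(Enum):
    ILLEGAL_CONTENT = "dissemination of illegal content"
    FUNDAMENTAL_RIGHTS = "negative effects on fundamental rights"
    CIVIC_DISCOURSE = "civic discourse, elections, public security"
    HARM = "gender-based violence, public health, minors, well-being"

@dataclass
class RiskAssessmentChecklist:
    platform: str
    # 'in particular' means the four categories are a floor, not a ceiling:
    # platform-specific risks can be appended beyond them.
    categories: list = field(
        default_factory=lambda: [c.value for c in RiskCategory])

    def add_platform_specific(self, risk: str) -> None:
        self.categories.append(risk)

# Hypothetical professional-network platform adding a risk beyond the four.
checklist = RiskAssessmentChecklist("example-professional-network")
checklist.add_platform_specific("labour market fairness")
print(len(checklist.categories))  # 5
```

The point of the structure is simply that the mandatory categories are pre-populated while platform-specific risks are additive, mirroring the 'in particular' wording.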
Paragraph 2 requires assessing how specific platform factors influence the systemic risks:
(a) Content moderation systems, including algorithmic decision-making and recommendations - the core focus. How do recommendation algorithms affect risk? If YouTube's algorithm recommends progressively more extreme content to keep viewers engaged, how does this amplify radicalization risk? If TikTok's For You page optimizes for engagement through provocative content, how does this affect minors' mental health?
(b) Terms and conditions - do the platform's own rules create risks? If the terms ban political speech, does this undermine civic discourse? If the terms tolerate certain harassment, does this enable gender-based violence?
(c) Advertising systems - how do ad targeting and delivery create risks? Does micro-targeted political advertising enable manipulation? Does advertising to minors exploit their vulnerabilities?
(d) Data practices - do collection, use, and retention practices create risks? Does extensive behavioral profiling enable manipulation? Do data breaches create security risks?
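Because paragraph 2 asks how each factor influences each systemic risk, an assessment is naturally a category-by-factor grid of findings. A minimal sketch, with hypothetical labels and an invented example finding:

```python
from itertools import product

# Article 34(1) risk categories (rows) x Article 34(2) factors (columns).
CATEGORIES = ["illegal content", "fundamental rights",
              "civic discourse", "harm"]
FACTORS = ["moderation & recommender systems", "terms and conditions",
           "advertising systems", "data practices"]

# Each cell holds a finding on how that factor influences that risk;
# None marks a cell not yet assessed.
grid = {(c, f): None for c, f in product(CATEGORIES, FACTORS)}

# Invented example finding, purely illustrative.
grid[("civic discourse", "advertising systems")] = (
    "micro-targeted political ads may enable manipulation")

unassessed = sum(v is None for v in grid.values())
print(f"{unassessed} of {len(grid)} cells still to assess")  # 15 of 16
```

The grid framing makes completeness checkable: an assessment that leaves cells empty has not yet considered every factor against every risk.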
Paragraph 3 establishes timing: assessments must occur 'at least once a year' and 'prior to deploying any new functionality that may have a critical impact' on the identified systemic risks. The annual cycle ensures ongoing monitoring as the platform, its users, and the risks evolve. The pre-deployment requirement prevents platforms from launching risky features first and assessing the harm later. If TikTok wants to add livestreaming (a significant new functionality), it must assess the systemic risks (e.g., live child safety risks, real-time harassment) before launch, not after problems emerge.
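The two timing rules - an annual cycle plus a pre-deployment check for features with potential critical impact - can be sketched as a small gating function. All names are hypothetical, and the 365-day cycle is one assumed reading of 'at least once a year':

```python
from datetime import date, timedelta

# Assumed reading of 'at least once a year' as a 365-day cycle.
ANNUAL_CYCLE = timedelta(days=365)

def assessment_current(last_assessment: date, today: date) -> bool:
    """True while the annual cycle has not lapsed."""
    return today - last_assessment <= ANNUAL_CYCLE

def may_deploy(critical_impact: bool, pre_deployment_assessed: bool) -> bool:
    """A feature with potential critical impact on systemic risks
    needs its own assessment before launch, not after."""
    return pre_deployment_assessed if critical_impact else True

# e.g. adding livestreaming: critical impact, so assessment precedes launch
print(may_deploy(critical_impact=True, pre_deployment_assessed=False))  # False
print(assessment_current(date(2024, 1, 1), date(2024, 6, 1)))           # True
```

The gate encodes the Article's ordering: the assessment is a precondition of deployment, not a follow-up to it.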
Article 34's power lies in its paradigm shift: from 'illegal content whack-a-mole' to systemic risk management. Facebook cannot just remove individual hate posts; it must assess whether the News Feed algorithm systematically amplifies divisive content. YouTube cannot just age-restrict individual videos; it must analyze whether its recommendations push harmful content to minors. Platforms must think like public health officials analyzing disease vectors, not just emergency room doctors treating individual patients.
Key Points
VLOPs/VLOSEs must conduct annual systemic risk assessments
Must analyze four risk categories: illegal content dissemination; fundamental rights; civic discourse, elections and public security; gender-based violence, public health, minor protection and well-being
Consider factors: content moderation systems (including algorithms), terms of use, advertising systems, data practices
Assessment required before deploying new features with critical systemic impact
Focuses on systemic risks (society-wide impacts), not individual content decisions
Must consider 'manipulative or exploitative use' of platform features
Establishes proactive risk management rather than reactive enforcement
Forms basis for Article 35 risk mitigation measures
Practical Application
For Facebook (Meta VLOP - Comprehensive Assessment): Facebook's annual risk assessment must analyze: (1) Illegal Content Risks: Do the News Feed and Groups algorithms amplify illegal content (copyright infringement, child exploitation imagery, terrorist content) because such content drives engagement? Do platform features enable coordinated illegal activity networks? (2) Fundamental Rights: Does content moderation disproportionately affect certain demographic groups or political viewpoints? Do ad targeting practices enable discriminatory outcomes (housing ads excluding minorities, job ads excluding women)? Does extensive data collection undermine privacy rights? (3) Civic Discourse & Elections: Does the News Feed algorithm create echo chambers and polarization? Do micro-targeted political ads enable manipulation? Can foreign actors use the platform for election interference? How do fact-checking and content reduction measures affect democratic discourse? (4) Harm Risks: Does algorithmic amplification of divisive content contribute to gender-based harassment? Do Groups enable body-shaming communities that harm mental health? How does the platform affect teen well-being? The assessment must consider: algorithmic recommendation systems (News Feed, Reels, Suggested Groups), content moderation (automated detection, human review, appeals), terms of use, ad targeting systems, and data collection practices. Document the findings in a formal risk assessment report, update it annually and before launching major features (e.g., a new AI-powered recommendation system), and use the findings to inform Article 35 risk mitigation.
For YouTube (Google VLOP - Algorithm-Focused Assessment): YouTube's recommendation algorithm drives 70%+ of watch time, making it the primary systemic risk vector. The risk assessment must analyze: (1) Illegal Content: Does the recommendation algorithm amplify copyright-infringing content, child exploitation, or terrorism because such content is engaging? Do monetization systems incentivize illegal uploads? (2) Fundamental Rights: How do recommendations and content policies affect free expression? Does algorithmic amplification discriminate against certain voices? (3) Civic Discourse: Does the 'rabbit hole' effect push users toward increasingly extreme political content? How do recommendations affect the spread of election-related misinformation? (4) Harm Risks: Do recommendations push harmful content (suicide, eating disorders, self-harm) to minors? How do beauty and fitness recommendations affect body image? How does the platform affect child mental health? Specific assessment areas: recommendation algorithm mechanics (engagement optimization, diversity injection, harmful content suppression); age verification and minor protection systems; monetization incentives; comments section dynamics; livestream risks. Conduct the formal assessment annually with a cross-functional team (ML engineers, policy, legal, trust & safety), and assess systemic risk impact before launching features such as improved recommendation models or new content formats.
For TikTok (Chinese-Owned VLOP - Unique Risk Profile): TikTok's young user base and Chinese ownership create distinct risk assessment requirements: (1) Illegal Content: Does the For You algorithm amplify dangerous challenges, child exploitation, or circumvention of age restrictions? (2) Fundamental Rights: How do content policies and moderation affect free expression? Does the Chinese government influence content decisions? How does extensive data collection affect privacy, especially for youth? (3) Civic Discourse: Can the platform be used for foreign influence operations, given its Chinese ownership? How do recommendations affect political discourse in EU Member States? (4) Harm Risks: Does algorithmic amplification of viral trends push harmful content (dangerous challenges, eating disorders, bullying) to minors? How does the app's addictive design affect teen mental health? Does the platform enable predatory behavior toward minors? The assessment must address: the For You algorithm's impact on minors; data flows to China and potential national security implications; content moderation consistency across geographies; age verification effectiveness; livestream and private messaging risks for minors. Given the geopolitical concerns, the assessment should address foreign influence and data security more extensively than a domestic platform's would, and be updated before launching features affecting minors or data handling.
For Twitter/X (Post-Musk Ownership - Transition Risks): The Twitter/X risk assessment must address the post-2022 ownership changes: (1) Illegal Content: How did moderation staff reductions affect the prevalence of illegal content? Do 'Twitter Blue' verification changes enable impersonation and fraud? (2) Fundamental Rights: How do policy changes (reduced hate speech enforcement, account reinstatements) affect minorities? Does paid verification undermine information integrity? (3) Civic Discourse & Elections: Do the content policy changes affect misinformation spread? How does the algorithmic timeline affect political discourse? Can reinstated accounts be used for coordinated manipulation? (4) Harm Risks: How do the policy changes affect harassment patterns, particularly against women and minorities? The assessment must candidly evaluate the impact of the ownership changes, staff reductions, and policy shifts. Consider: the algorithmic versus the chronological timeline; Twitter Blue paid verification impacts; the sufficiency of Community Notes fact-checking; the effects of reduced moderation capacity; bot detection after the API changes. Document the risks arising from rapid platform changes. If the assessment identifies increased risks from recent changes, Article 35 requires mitigation measures (potentially reversing some of those changes).
For LinkedIn (Professional Network VLOP - Labor Market Risks): LinkedIn's professional focus creates unique risk assessment areas: (1) Illegal Content: Do job postings enable employment scams? Does the platform enable professional credential fraud? (2) Fundamental Rights: Do job recommendation algorithms create discriminatory outcomes (age, gender, or race bias in job visibility)? Does the platform enable employment discrimination? How does data collection affect professional privacy? (3) Civic Discourse: How does the platform affect professional discourse on political and social issues? (4) Harm Risks: Does the platform enable professional harassment? Do its features create mental health impacts (comparison anxiety, imposter syndrome)? Does the platform enable corporate espionage or data misuse? Assess: job recommendation algorithms for bias; recruiter tools for discriminatory use; data practices affecting professional privacy; verification of credentials and companies; anti-harassment measures in a professional context. LinkedIn's assessment is less focused on viral misinformation (limited on a professional platform) and more on labor market fairness, professional integrity, and data security. A unique consideration: the platform's role in employment markets creates fairness obligations beyond those of typical social media.
For Amazon (Marketplace VLOP - Product Safety & Market Risks): Amazon, as a product marketplace, faces different systemic risks than social platforms: (1) Illegal Content: Do recommendation and search algorithms surface illegal products (counterfeits, dangerous goods, prohibited items)? Do marketplace features enable systematic violation of product regulations? (2) Fundamental Rights: Do pricing algorithms create discriminatory outcomes? Does the platform surveil workers in ways that undermine their rights? (3) Civic Discourse: Less relevant for a marketplace, but consider: do product reviews enable coordinated manipulation affecting public discourse about brands? (4) Harm Risks: Do recommendation algorithms prioritize dangerous products? Does the marketplace enable the systematic sale of unsafe goods to vulnerable populations? Assess: product recommendation and search algorithms; seller verification effectiveness; product compliance checking; review manipulation and fake reviews; pricing algorithm fairness; worker monitoring practices. Amazon's risk assessment focuses more on consumer safety, market integrity, and fair competition than a typical social media platform's.