Editor’s Note: Europe’s DSA enforcement is widening from content moderation into product design—and TikTok is now the test case. The Commission’s preliminary view treats infinite scroll, autoplay, push notifications, and highly personalized recommendations as potential systemic risks that must be assessed, mitigated, and proven with documentation. For cybersecurity, information governance, and eDiscovery teams, the implications are immediate: engagement telemetry, risk assessments, design-review records, and A/B testing artifacts are increasingly likely to become compliance evidence—and discovery targets—whenever regulators question how a platform shapes user behavior.


Content Assessment: EU's Preliminary DSA Findings Put TikTok's Engagement Design in the Regulatory Crosshairs

Information - 92%
Insight - 92%
Relevance - 91%
Objectivity - 90%
Authority - 92%

91%

Excellent

A short percentage-based assessment of the qualitative benefit of the recent article from ComplexDiscovery OÜ titled, "EU's Preliminary DSA Findings Put TikTok's Engagement Design in the Regulatory Crosshairs."


Industry News – Data Privacy and Protection Beat

EU’s Preliminary DSA Findings Put TikTok’s Engagement Design in the Regulatory Crosshairs

ComplexDiscovery Staff

As Brussels moves to enforce the Digital Services Act (DSA) against TikTok’s “addictive design,” the European Commission’s preliminary findings are drawing attention well beyond one social media app. In its February 6, 2026 announcement, the Commission takes the preliminary view that TikTok’s combination of infinite scroll, autoplay, push notifications, and a highly personalized recommender system may create systemic risks to users’ physical and mental well-being, particularly for minors and vulnerable adults. Regulators contend that continuously serving new videos can encourage compulsive consumption loops and keep users in a low-friction “autopilot” pattern—an argument they tie to research on habit-forming interfaces and prolonged screen time.

European Commission Executive Vice-President for Tech Sovereignty, Security and Democracy Henna Virkkunen framed the stakes in direct terms, stating that social media addiction can harm the developing minds of children and teens, and that the DSA holds platforms responsible for the effects they may have on their users. For cybersecurity, information governance, and eDiscovery professionals who routinely confront the downstream impact of digital behavior, this is less a story about one social app and more an early blueprint for how authorities may scrutinize engagement-driven design across the broader digital ecosystem.

From engagement engine to test case

The current preliminary findings sit within a broader DSA enforcement arc that began when the Commission opened formal proceedings against TikTok on February 19, 2024. That initial investigation focused on systemic risks such as the “rabbit hole effect” of the platform’s recommendation systems, age-inappropriate experiences linked to weak age assurance, and the duty to ensure a high level of privacy, safety, and security for minors. Since then, regulators have flagged concerns about TikTok’s advertising repository and transparency obligations. In May 2025, the Commission published preliminary findings of non-compliance on advertising transparency, and in December 2025, it accepted TikTok’s binding commitments to resolve those concerns. Separately, in October 2025, the Commission issued preliminary findings that both TikTok and Meta failed to grant researchers adequate access to public data under the DSA’s transparency obligations—a requirement designed to enable independent scrutiny of platform risks.

In its latest preliminary view, the Commission says TikTok did not adequately assess how certain design elements could affect users’ health and well-being, and it questions whether the company sufficiently acted on indicators it associates with compulsive use. The Commission points to behavioral signals it links to problematic engagement, including reported patterns of late-night use among minors and high-frequency app openings, alongside other internal metrics. A Commission spokesperson provided additional context during a press briefing, stating that TikTok is the most-used platform after midnight among children aged 13 to 18 in the EU, and that 7% of children between 12 and 15 spend four to five hours daily on the app. Coverage of the Commission’s announcement similarly reports concern about heavy late-night use by children and teens in the EU, reinforcing regulators’ view that certain design choices may be amplifying risky engagement patterns.

For large platforms, the signal is that collecting rich telemetry may not be enough; regulators appear to expect that such data feeds into documented risk assessments, mitigations, and decision-making, and they are willing to examine internal documentation and models to see whether that happened in practice. For organizations that manage large-scale user data, it is a useful reminder to maintain audit-ready records of how behavioral insights inform product decisions, not just marketing or growth tactics.

TikTok, for its part, forcefully disputes the EU’s characterization. A TikTok spokesperson told Euronews that the Commission’s preliminary findings “present a categorically false and entirely meritless depiction of our platform” and indicated that the company intends to “challenge these findings through every means available to us.” The company pointed to existing safeguards, including daily screen-time limits, sleep-hours reminders that prompt users to close the app for the night, screen-time break features, an in-app screen-time dashboard, and “well-being missions” that reward users with badges for sticking to usage limits. TikTok also argues there is no scientific consensus on the impact of screen time, and that its platform offers tools to help users make their own decisions about how much time to spend on it. That response underscores that the proceedings remain contested and that the final outcome—both in terms of legal findings and required changes—has yet to be determined.

Risk mitigation, friction, and perception

One of the Commission’s sharpest criticisms is that certain safety features, as implemented, may be too easy to bypass in practice. Daily screen-time limits and parental controls exist, but the Commission found that warnings can be dismissed with minimal friction and that parental tools demand additional time and technical familiarity that many households lack. From a governance standpoint, that raises a familiar question: when does a control become so effortless to override that it functions as largely symbolic?

For teams designing high-risk digital services, a practical step is to pressure-test whether key safeguards introduce meaningful friction at the moments that matter: late at night, after extended use, or when minors attempt to change settings. That might mean requiring stronger re-authentication before disabling time limits, randomly varying break prompts to avoid habituation, or providing parents with digestible summaries of usage patterns rather than expecting them to dig into dashboards. The Commission suggests that, if its preliminary concerns are confirmed, TikTok could be required to make more fundamental design changes in Europe, including limiting or disabling infinite scroll over time, enforcing effective screen-time breaks (especially overnight), and adjusting its recommendation system to reduce compulsive behavior. Even if your organization is far removed from consumer social media, building and documenting this kind of friction can demonstrate to regulators that safety is woven into the experience rather than bolted on.
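As an illustration of that kind of friction design, the Python sketch below gates the disabling of a screen-time limit behind re-authentication for minors, late-night sessions, or long sessions, and jitters break-prompt timing to resist habituation. All thresholds, hours, and function names are hypothetical assumptions for illustration, not drawn from TikTok's actual implementation or the Commission's findings.

```python
import random
from datetime import datetime, time

# Hypothetical policy values -- illustrative only, not a regulatory standard.
LATE_NIGHT_START = time(22, 0)
LATE_NIGHT_END = time(6, 0)

def is_late_night(now: datetime) -> bool:
    """True between 22:00 and 06:00, the window regulators flagged for minors."""
    t = now.time()
    return t >= LATE_NIGHT_START or t < LATE_NIGHT_END

def disabling_limit_requires_reauth(age: int, now: datetime,
                                    session_minutes: int) -> bool:
    """Add friction exactly when risk is highest: minors, late-night use,
    or long sessions must re-authenticate before turning off a time limit."""
    return age < 18 or is_late_night(now) or session_minutes >= 120

def next_break_prompt_minutes(base_interval: int = 45, jitter: int = 10) -> int:
    """Randomly vary the break-prompt interval so users do not habituate
    to dismissing a prompt that always arrives at the same moment."""
    return base_interval + random.randint(-jitter, jitter)
```

The design choice worth noting is that friction is conditional: the safeguard stays lightweight for low-risk sessions but hardens at the moments the Commission highlighted, such as overnight use by minors.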

Enforcement perimeter and financial stakes

The DSA gives the Commission wide latitude to respond if preliminary findings are confirmed after TikTok exercises its right of defense. TikTok can review the case file and respond in writing, and the European Board for Digital Services will be consulted before any final non-compliance decision. If the Commission ultimately concludes that the app’s design breaches the DSA, it can impose fines of up to 6% of TikTok’s global annual turnover and potentially order changes to product features or impose ongoing monitoring obligations. No timeline has been given for a final decision.

This enforcement posture is designed to reach boardrooms, not just trust-and-safety teams. For companies operating large platforms—whether public-facing or enterprise-oriented—the TikTok case illustrates how design choices around engagement and attention can become headline regulatory issues alongside more familiar concerns like data breaches or illegal content. If you are responsible for cyber or governance functions, an immediate internal action is to ensure that systemic-risk assessments, design reviews, and decision logs are being treated as formal compliance artifacts, not just internal memos. Those materials are exactly what regulators, plaintiffs’ counsel, or discovery teams are likely to request in the next wave of platform investigations.
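One lightweight way to treat decision logs as formal compliance artifacts is to make them tamper-evident. The Python sketch below hash-chains each design decision to the previous ledger entry so later edits are detectable during an audit; the field names and schema are illustrative assumptions, not a regulatory requirement.

```python
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass
class DesignDecision:
    feature: str          # e.g. "autoplay", "break prompts" (hypothetical)
    risk_considered: str  # which systemic risk was weighed
    evidence: list        # telemetry or studies consulted
    outcome: str          # what was decided, and why
    approver: str

def log_decision(ledger: list, decision: DesignDecision) -> str:
    """Append a decision record whose hash covers the previous entry's hash,
    so altering any earlier record breaks the chain."""
    prev = ledger[-1]["hash"] if ledger else ""
    record = {"ts": datetime.now(timezone.utc).isoformat(),
              "decision": asdict(decision), "prev": prev}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    ledger.append(record)
    return record["hash"]
```

In practice the same chaining idea is usually delegated to an append-only store or a GRC platform; the point of the sketch is only that a decision log intended as evidence should be verifiable, not merely retained.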

Why this matters beyond social media

For cybersecurity professionals, the TikTok investigation sits squarely in the broader conversation about digital resilience and user exposure. From a cybersecurity perspective, extended late-night use can plausibly increase human-factor risk—potentially raising susceptibility to phishing, oversharing, and rapid propagation of misleading or harmful content—though the degree of impact will vary by context and user population. Framing addictive design strictly as a youth well-being issue risks underestimating its potential impact on threat surfaces and human-factor risk.

Information governance leaders will recognize the DSA’s logic: know your systemic risks, understand how your systems influence behavior, and maintain auditable evidence that you are mitigating those risks. The Commission’s focus on access to public data for researchers, robustness of ad repositories, and quality of risk assessments resembles the documentation and transparency expectations already familiar in records management and compliance programs. Pulling TikTok-style cases into internal training or tabletop exercises can help governance teams explain why behavioral data, log retention, and cross-functional review processes matter long before a regulator comes calling.

For eDiscovery practitioners, the proceedings are a preview of the kinds of digital evidence that may define future disputes over algorithms and dark patterns. Risk reports, A/B tests on prompts and breaks, internal debates over friction versus engagement, and correspondence with regulators are all potential discovery targets in DSA-related litigation or regulatory follow-on actions. A practical move now is to work with legal, product, and security colleagues to identify where these materials live, how they are retained, and how legal holds would be triggered if your organization faced a TikTok-style inquiry.
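A starting point for that mapping exercise can be as simple as a structured inventory. The Python sketch below records where DSA-relevant artifacts might live and derives which sources and custodians a legal hold would sweep in; the systems, retention periods, custodian teams, and keyword matching are hypothetical stand-ins for a real matter-scoping workflow.

```python
# Hypothetical data map -- systems, retention, and teams are examples only.
DATA_MAP = {
    "risk_assessments": {"system": "GRC platform", "retention_years": 7,
                         "custodians": ["trust-safety", "legal"]},
    "ab_test_results": {"system": "experimentation db", "retention_years": 3,
                        "custodians": ["product", "data-science"]},
    "design_reviews": {"system": "wiki + ticketing", "retention_years": 5,
                       "custodians": ["product", "engineering"]},
    "regulator_correspondence": {"system": "legal DMS", "retention_years": 10,
                                 "custodians": ["legal"]},
}

def hold_scope(matter_keywords: set) -> dict:
    """Return the data sources and custodian teams a legal hold would cover
    for a matter described by simple keywords (a stand-in for real scoping)."""
    relevant = {name: meta for name, meta in DATA_MAP.items()
                if any(kw in name for kw in matter_keywords)}
    custodians = sorted({c for meta in relevant.values()
                         for c in meta["custodians"]})
    return {"sources": sorted(relevant), "custodians": custodians}
```

Even a toy inventory like this surfaces the operational questions that matter in a TikTok-style inquiry: whether A/B testing artifacts outlive their short default retention, and which teams must receive hold notices on day one.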

As TikTok prepares its defense and the Commission advances its investigation, the case is becoming a global reference point for how regulators translate concern about “addictive design” into concrete obligations, evidence demands, and potential sanctions. For organizations building or relying on attention-driven services, the question may be closer to home: if an authority opened your risk assessments, logs, and design decisions tomorrow, would they find a platform that treats user well-being as a core design requirement—or one where that question was deferred until a regulator raised it?


Assisted by GAI and LLM Technologies


Source: ComplexDiscovery OÜ

ComplexDiscovery’s mission is to enable clarity for complex decisions by providing independent, data‑driven reporting, research, and commentary that make digital risk, legal technology, and regulatory change more legible for practitioners, policymakers, and business leaders.

 

Have a Request?

If you have questions about our information or offerings, please let us know, and we will make responding to you a priority.

ComplexDiscovery OÜ is an independent digital publication and research organization based in Tallinn, Estonia. ComplexDiscovery covers cybersecurity, data privacy, regulatory compliance, and eDiscovery, with reporting that connects legal and business technology developments—including high-growth startup trends—to international business, policy, and global security dynamics. Focusing on technology and risk issues shaped by cross-border regulation and geopolitical complexity, ComplexDiscovery delivers editorial coverage, original analysis, and curated briefings for a global audience of legal, compliance, security, and technology professionals. Learn more at ComplexDiscovery.com.

 

Generative Artificial Intelligence and Large Language Model Use

ComplexDiscovery OÜ recognizes the value of GAI and LLM tools in streamlining content creation processes and enhancing the overall quality of its research, writing, and editing efforts. To this end, ComplexDiscovery OÜ regularly employs GAI tools, including ChatGPT, Claude, Gemini, Grammarly, Midjourney, and Perplexity, to assist, augment, and accelerate the development and publication of both new and revised content in published posts and pages, a practice initiated in late 2022.

ComplexDiscovery also provides a ChatGPT-powered AI article assistant for its users. This feature leverages LLM capabilities to generate relevant and valuable insights related to specific page and post content published on ComplexDiscovery.com. By offering this AI-driven service, ComplexDiscovery OÜ aims to create a more interactive and engaging experience for its users, while highlighting the importance of responsible and ethical use of GAI and LLM technologies.