Campaign Paper: Enforceable User Rights Against Addictive Design
This is the longer-form policy case behind Give Me Back My Sleep. It is designed to make the campaign legible to journalists, researchers, policymakers, and contributors who want to pressure-test the argument.
Executive Summary
This paper follows the same core structure used across the website:
- Baseline ask: the EU should use existing law, especially the Digital Services Act, more aggressively against addictive and manipulative design.
- Stronger ask: EU law should create enforceable user rights to activate binding protections against addictive design.
The stronger ask should include:
- account-level lockouts;
- scheduled no-access windows;
- meaningful feature-level controls over engagement mechanics such as autoplay, infinite scroll, recommender-driven feeds, and manipulative notifications;
- anti-circumvention duties so platforms cannot reduce those protections to cosmetic settings.
This is not a call to ban social media. It is a call to give users legally protected tools to limit or disable high-risk engagement systems when they choose to do so.
Why This Is A Serious Regulatory Issue
The case does not depend on proving that every form of social media use causes every form of mental harm.
The stronger and more defensible argument is:
- some digital products are deliberately designed to maximise attention and repeated use;
- certain design choices predictably weaken user self-control and encourage compulsive patterns of behaviour;
- the resulting harms can include sleep disruption, impaired concentration, compulsive use, reduced self-control, and worsened mental-health outcomes for at least some user groups;
- the scale of the market and the asymmetry between product design and individual restraint justify legal intervention.
That is already enough to support a harm-reduction and user-protection approach.
What Counts As Addictive Design
The campaign should define addictive design narrowly enough to be enforceable.
The target is not “any product people use a lot.” The target is engagement design that predictably pushes people toward compulsive use, weakens stopping cues, or interferes with self-imposed limits. The European Parliament’s 2023 report and the Commission’s TikTok findings already point to concrete examples.
The most relevant features include:
- infinite scroll / endless feeds, which remove stopping cues and make “just one more item” frictionless;
- autoplay, which turns one intentional action into a chain of passive consumption;
- highly personalised recommender systems, which increase compulsion by continuously adapting the next reward;
- push notifications, especially when designed to trigger re-entry rather than communicate something genuinely necessary;
- streaks, badges, and variable rewards, which reward repetition and exploit loss aversion;
- read-pressure features such as read receipts or social-pressure mechanics that make disengagement feel costly;
- cosmetic wellbeing settings, where a platform appears to offer control but makes those controls too easy to dismiss or reverse.
That is the level of specificity the proposal should keep.
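This specificity is workable partly because the relevant design differences are concrete, not atmospheric. As a hedged illustration (all names below are invented, not drawn from any platform's actual code), compare a feed that preserves a stopping cue with one that removes it:

```typescript
// Illustrative contrast only; names are hypothetical, not any platform's code.

// A paginated feed preserves a stopping cue: nothing more loads until
// the user takes a deliberate action.
function onNextPageClick(feed: { loadMore(): void }) {
  feed.loadMore(); // explicit, intentional act
}

// An infinite feed removes that cue: content loads as a side effect of
// passive scrolling, so "just one more item" never requires a decision.
function onScroll(feed: { loadMore(): void }, nearBottom: boolean) {
  if (nearBottom) feed.loadMore(); // no decision point, no stopping cue
}
```

The regulatory target is the second pattern: a design decision made once by the platform that removes the user's decision points thereafter.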
The Two-Track Proposal
The proposal is strongest when it clearly separates what can be pursued now from what likely needs new legislation.
Track One: Stronger Enforcement Under Existing Law
The Commission should use existing powers aggressively where very large platforms:
- rely on addictive engagement mechanics without adequate risk assessment;
- fail to mitigate harms linked to compulsive use;
- offer user controls that are ineffective in practice;
- use recommender and notification systems in ways that predictably undermine user self-control.
In practical terms, the Commission should be prepared to require:
- more effective screen-time break systems;
- night-time protections;
- safer defaults for high-risk engagement features;
- meaningful user controls over recommendations and notifications;
- friction against compulsive re-entry patterns.
Track Two: A New Enforceable User Right
The more ambitious proposal is that EU law should create an explicit user right to activate binding protections against addictive design.
That right should include four elements, sketched concretely after this list:
- full self-exclusion from a platform for a chosen period;
- scheduled lockout windows, such as no access during the night or during work or study hours;
- feature-level controls over autoplay, endless feeds, recommendations, read-pressure features, push notifications, streak systems, and similar engagement mechanics;
- effective anti-circumvention protection, so these controls cannot be neutralised by manipulative prompts or cosmetic workarounds.
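To make these four elements concrete, here is a minimal sketch of how a binding protection setting could be represented in machine-readable, auditable form. Every name and field below is a hypothetical illustration, not an existing standard, legal text, or platform API; the point is only that each element can be specified precisely enough to audit.

```typescript
// Hypothetical illustration only: none of these names come from the DSA
// or any existing platform. The point is that each element of the proposed
// right can be expressed as a precise, binding setting rather than a
// reversible "wellbeing" preference.

interface ProtectionSettings {
  // Element 1: full self-exclusion for a chosen period.
  selfExclusion?: {
    startsAt: Date;
    endsAt: Date; // the platform may not shorten this once set
  };
  // Element 2: scheduled lockout windows, e.g. no access at night.
  lockoutWindows: Array<{
    days: ("mon" | "tue" | "wed" | "thu" | "fri" | "sat" | "sun")[];
    from: string;  // e.g. "23:00" local time
    until: string; // e.g. "07:00" local time
  }>;
  // Element 3: feature-level controls over named engagement mechanics.
  disabledFeatures: Set<
    "autoplay" | "infiniteScroll" | "personalisedFeed" |
    "readReceipts" | "pushNotifications" | "streaks"
  >;
  // Element 4: anti-circumvention. Re-enabling requires a cooling-off
  // delay, and prompts nudging the user to relax limits are never allowed;
  // the literal type `false` means the schema itself forbids "allowed".
  coolingOffHours: number;
  reEnablePromptsAllowed: false;
}
```

The essential design choice is that anti-circumvention lives inside the data model itself: a compliant platform cannot treat these settings as ordinary preferences to be dismissed, reversed, or prompted away.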
The Existing EU Foundation
The campaign is not starting from zero. The existing EU framework already contains important legal hooks.
1. The Digital Services Act
The Digital Services Act is already relevant because it addresses interface manipulation, recommender systems, systemic risks, and mitigation duties.
The most relevant anchors are:
- Article 25: bans certain deceptive or manipulative interface design practices.
- Article 27: requires transparency around recommender systems and at least some user-facing options.
- Article 34: requires very large platforms to assess systemic risks.
- Article 35: requires reasonable, proportionate and effective mitigation measures.
These provisions matter because addictive design is not just a product preference issue. It can be framed as an interface-design and systemic-risk issue, especially where recommender systems, autoplay, infinite scroll, or persistent notifications undermine user control. The European Commission is already moving in this direction: in its preliminary findings against TikTok under the DSA, it focused on design choices including autoplay, endless scroll, push notifications, and highly personalised recommendation.
2. European Parliament Position
The European Parliament has already gone further than the current campaign website. In 2023, Parliament called for stronger action against addictive design and, in its formal report A9-0340/2023, explicitly discussed:
- a digital “right not to be disturbed”;
- regulation of practices such as infinite scroll and automatic play;
- ethical and fair digital products by design;
- stronger protection for children and other vulnerable groups.
That is politically important because it shows this campaign is not introducing a fringe idea. It is sharpening and operationalising a direction already recognised inside the EU institutions.
3. Digital Fairness Act Track
The Commission is already working on the broader Digital Fairness Act agenda. That matters because some of the stronger asks in this paper may fit better through consumer and unfair-commercial-practice reform than through the DSA alone.
Why Existing Law Is Still Not Enough
The DSA creates pressure points, but it does not yet clearly give users a direct, enforceable right to self-exclusion or binding lockout.
That gap matters.
Today, many platforms offer “wellbeing” tools that are:
- easy to dismiss;
- easy to reverse;
- hidden in settings;
- dependent on repeated user effort;
- undermined by recommender logic or substitute engagement features.
That is not the same as a real right to impose limits on oneself.
The central legal gap is therefore this:
- the EU recognises harms linked to manipulative and addictive design;
- but users still lack a robust, enforceable mechanism to force platforms to respect self-imposed limits.
Why The Evidence Is Strong Enough To Act
The campaign does not need to prove that every user is harmed in the same way or that every causal pathway is settled beyond dispute.
What it does need to show is:
- there is a credible mechanism;
- there is population-scale exposure;
- there is growing evidence of association and, in some areas, experimental or longitudinal support;
- existing product-side mitigations are weak.
That threshold is already met.
For example:
- a 2024 systematic review with meta-analyses found that problematic social media use was positively associated with depression, anxiety, and sleep problems, and negatively associated with wellbeing in young people;
- a 2024 cross-national study of adolescents in 40 countries found that intense and problematic social media use was linked to sleep difficulties;
- a meta-analysis on social networking site use and loneliness found a positive association between use and loneliness;
- a meta-analysis of longitudinal studies on loneliness and problematic media use supports a reinforcing relationship rather than a simple one-way story.
That does not justify sweeping causal claims. It does justify precautionary, user-protective intervention.
Why This Is Proportionate
One of the easiest industry attacks is to frame the proposal as paternalistic or unrealistic.
That attack is weaker if the proposal emphasises the following:
- these protections are user-activated, not blanket prohibitions;
- adults remain free to use platforms if they choose;
- the law would not ban social media or recommender systems as such;
- the law would only require firms to respect a user’s own decision to impose limits;
- the proposal is therefore a form of user autonomy protection, not a ban on digital services.
This is one of the proposal’s strongest features. It does not demand one mandatory experience for everyone. It demands that platforms stop overriding the expressed will of users who want stronger limits.
The Mental-Health And Public-Health Case
The campaign should not overclaim that social media, on its own, directly causes depression in every case.
The stronger evidence posture is:
- heavy or problematic use is associated with worse outcomes in at least some populations, including sleep difficulties and worse mental-health outcomes in large reviews, such as a 2024 systematic review with meta-analyses and a 2024 review of adolescent sleep impacts;
- the evidence base is strongest when looking at specific mechanisms, vulnerability, and harmful use patterns rather than simple total screen time, which is also the posture taken in the U.S. Surgeon General’s advisory on social media and youth mental health;
- social connection, sleep, mood, anxiety, and self-control are all legitimate public-health concerns relevant to platform design.
That still leaves a strong case for regulation.
The public-health argument should therefore focus on:
- sleep disruption;
- compulsive use and loss of control;
- mental wellbeing;
- vulnerability of minors and vulnerable adults;
- social isolation / weakened social connection;
- consumer autonomy and freedom from manipulative design.
Why Precedents Matter
The proposal should not argue that social media is identical to tobacco, alcohol, or gambling.
It should argue something narrower and more credible:
- regulators already intervene when products or commercial environments are designed in ways that predictably drive harmful, difficult-to-resist behaviour;
- regulators already accept that warnings, restrictions, self-exclusion, safer defaults, and product-design safeguards can be justified even when ordinary consumer choice still exists;
- the law already recognises that “just use more self-discipline” is not always an adequate policy answer.
The most useful parallel is gambling self-exclusion, because it shows that:
- a person can ask a provider to block them from a harmful product or service, as seen in existing regulatory systems such as the Pennsylvania Gaming Control Board’s self-exclusion tools;
- the provider then has legal duties to respect that choice;
- the point is harm reduction and self-protection, not prohibition for everyone else.
The parallels to tobacco and alcohol are still useful, but mainly as examples of a broader principle:
- once a commercial environment is known to create large-scale, foreseeable harm, product design and access conditions can become legitimate subjects of regulation.
What Platforms Will Say
Any serious paper should anticipate the main objections.
“Users can already log off”
This misses the point. The problem is not whether users are physically capable of closing an app. The problem is that platforms are built to weaken restraint and maximise re-entry, making “choice” a poor proxy for meaningful control.
“This would ban innovation”
No. The proposal does not ban innovation. It requires effective user protections and prevents firms from undermining self-imposed limits.
“Current wellbeing tools are enough”
The Commission’s 2026 preliminary findings on TikTok point the other way. The Commission specifically argued that existing screen-time and parental-control tools may be too easy to dismiss and too weak to mitigate the risks created by addictive design.
“Causation is unproven”
Perfect causal certainty is not the right threshold. EU digital and consumer regulation often acts where risks are credible, large-scale, and difficult for users to manage individually. The case is stronger still when there is a plausible mechanism and mounting evidence of harm.
“This is too broad to enforce”
That is exactly why the proposal must define the target clearly and focus on:
- user-activated lockout and self-exclusion;
- specified engagement features;
- clear anti-circumvention obligations;
- auditable compliance requirements (illustrated in the sketch below).
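To illustrate what “auditable compliance requirements” could mean in practice, here is a minimal sketch, assuming a regulator can compare a platform's own session logs against a user's active lockout windows. All type and function names are hypothetical; no existing audit standard or schema is implied.

```typescript
// Hypothetical compliance check: given a user's lockout windows and the
// platform's session logs, flag any access that occurred while a lockout
// was active. Field names are illustrative, not a real reporting schema.

interface SessionLog { userId: string; start: Date; end: Date; }
interface Lockout    { userId: string; from: Date; until: Date; }

function findViolations(sessions: SessionLog[], lockouts: Lockout[]): SessionLog[] {
  return sessions.filter(s =>
    lockouts.some(l =>
      l.userId === s.userId &&
      s.start < l.until && s.end > l.from // session overlaps the lockout
    )
  );
}
```

The design point is that the obligation becomes checkable: a violation is a concrete record in the platform's own data, not a dispute about intent.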
What Evidence And Input The Campaign Still Needs
The proposal gets stronger when it is supported by evidence that is concrete, varied, and difficult to dismiss.
The campaign should actively collect:
- stories of sleep loss, compulsive use, loss of self-control, and harm to work, study, or family life;
- examples of existing platform “wellbeing” tools that are ineffective in practice;
- legal analysis identifying where the DSA already helps and where it does not;
- public-health and psychology evidence on compulsive use, sleep disruption, loneliness, and mental-health risks;
- examples of regulated self-exclusion systems in gambling or similar sectors;
- country-specific contacts, NGOs, journalists, and policy actors who can help test and strengthen the proposal.
If you want to strengthen this proposal, the most useful contribution buckets are:
- Stories: first-person accounts with concrete harms and specific product features.
- Legal review: which existing provisions are usable now, and where the argument is still weak.
- Evidence: studies, parliamentary material, public-health reports, enforcement decisions.
- Design analysis: examples of the mechanics platforms use to weaken self-control.
- Country organising: people who can help identify contacts, translate arguments, and build support in different EU member states.