The Limits to Digital Consent

Read the full report

  • Considerable work has been done by designers, activists, and aligned policymakers to translate data governance policies into comprehensible means of obtaining consent. Still, the question remains: how well does this work account for the broader socio-technical power structures inherent in all people-centric data collection?
  • In partnership with the New Design Congress, and with support from Reset, our team conducted a series of interviews with advocates for individuals and communities whose lives are often dramatically affected by data surveillance, in an effort to assess today’s ethical consent paradigms and determine whether they actually achieve the outcomes they seek.
  • Our joint report reveals that ongoing attempts to incorporate informed consent into data-driven systems likely fall short of their stated goals due to a host of issues, including an insufficient understanding of the power dynamics inherent in organizational politics, second- and third-order effects, network complexity at scale, and political accountability.

Obtaining informed consent is an essential goal for designers. But what is “informed consent” in the context of digital power imbalances and dark patterns designed to convince people to hand over their data unconsciously? Many platform designers and policymakers have begun to focus on transparency, ethics, and trust in order to build ethical digital consent systems that communicate their intent and obligations to users. Yet this kind of communication and initial design framework is only a beginning. Whether a platform seeks to deploy private analytics or develop research partnerships, the reality is that data collection and analysis come with an astonishingly high degree of ever-evolving risk. Given the complexity of the systems and tools involved, and an often undefined timeline of data stewardship, it is extremely difficult to account for and prevent weaponised design.

Knowing this, Simply Secure partnered with the New Design Congress to examine consent paradigms in complex data systems. Through a series of in-depth interviews with advocates for individuals and communities whose lives are often dramatically affected by data surveillance, we set out to determine the extent to which these systems achieve their desired outcomes. In particular, we wanted to know: do they truly inform people of the impacts of data collection? And if so, how are changes (in the life of the system as well as in the life of the consenting person) accounted for?

Report Findings

To critically assess whether digitally facilitated consent actually represents a person’s informed understanding of the implications of that consent, we combined in-depth qualitative interviews with a review of the historical and current issues surrounding data collection. Our report, The Limits to Digital Consent: Understanding the risks of ethical consent and data collection for underrepresented communities, reveals that today’s attempts to incorporate informed consent into data-driven systems likely fall short of their stated goals due to a host of issues, including an insufficient understanding of the power dynamics inherent in organizational politics, network complexity at scale, political accountability, and the second- and third-order effects of local-first data strategies.

Six key findings emerged from our analysis:

  1. The consent model for tech is outdated. Today’s digital consent frameworks are conceptually derived from institutional or academic ethics frameworks, yet digital platforms pose unique challenges that these frameworks are neither equipped nor designed to address.
  2. Local-first data storage is not inherently safer for people. The common assumption that “local is better” does help mitigate issues of data abuse and company control, but it fails to consider the implications beyond these immediate relationships, and how assigning data ownership to an individual can amplify power discrepancies between that person and other, unrelated parties.
  3. Data creation, including the potential for data creation, is silencing. The accelerating rate and scope of data collection, combined with communities’ increasing awareness of the dangers posed by such datasets, produces a chilling effect on those who wish to speak up and self-advocate but hesitate to do so because of the added risk.
  4. Everyone — not just members of underrepresented communities — is at risk. Current data collection can, and almost assuredly will, be used to feed future technologies whose risks are impossible to understand, predict, or mitigate at present. Time and again, even well-intentioned or purportedly neutral datasets have produced disastrous impacts.
  5. Ethical platform designers must consider themselves as the potential bad actor. While those who demand design justice and ethical platform development are well-intentioned in their pursuit of empowering people, many recent dangerous technological advances started out just as well-intentioned before enabling systemic abuse or worse. It is simply not possible for even the best-intentioned designer to foresee changes in partnerships or leadership through which carelessness or ulterior motives may poison a platform’s original values.
  6. People are overwhelmed by both the potential for harm and the indifference of decision-makers. Across the globe, communities and individuals are paralyzed by the scale and potential of harm, as well as by the real or perceived resistance of institutions (both technology companies and legislators) to tackling consent-related problems.

Looking Forward

Data accumulation exerts great power over a person’s agency, their relationships, and the communities within which they operate, and the associated harms reach almost every human being. Practitioners must therefore examine the systemic shortcomings of digital consent and commit to ongoing iteration of consent and data governance within their platforms. Platform designers and policymakers can no longer assume that collection is safe, and they must work together to design data storage systems accordingly (including on-device local storage).

We believe designers should consider unexpected threat scenarios as part of their design process, leveraging tools such as Personas Non Grata, Anxiety Games, Usable Security Audits, and data risk analysis. Doing so allows practitioners to look at the whole socio-technical system and take on a broader understanding of the risks of data accumulation.

Read the full report

Credits

Project Leads: Cade Diehm (The New Design Congress), Kelsey Smith, Ame Elliott, Georgia Bullen

With support from Reset, an initiative engaged in programmatic work on technology and democracy. Reset seeks to change the way the internet enables the spread of news and information so that it serves the public good over corporate and political interests – ensuring tech companies once again work for democracy rather than against it.
