We are constantly generating data – through our smart devices, from our interactions with those around us and as a by-product of our participation on the internet. Each separate element of data that we generate is, of itself, innocuous and may even be irrelevant from a privacy perspective. But when combined with other similarly individual elements of data, it is capable of being transformed into sensitive personal information. As each of these individual data points is layered one on top of the other, patterns and trends emerge from the stacked data, creating profiles of a person that are unique to that individual.
Our online personality exists at the interstices of these various layers of data. Businesses are building increasingly accurate personal profiles of us in order to be able to deliver to us products and services that we like and would appreciate receiving. They have devoted considerable effort to aggregating details of our habits, our likes and dislikes and other distinguishing features that make us who we are. They collect every piece of non-personal information that they can from their interactions with us, operating under the premise that it is better to have more data than less.
When individual elements of non-personal data are combined, unique profiles of the individuals they relate to can emerge – patterns that reveal insights from data that was otherwise completely unremarkable. Computers are being designed to process these sorts of datasets, enhancing their ability to build detailed snapshots of us that are unique and deeply sensitive. Machine learning algorithms have been designed to process vast volumes of data and arrive at inferences that no human would have come to. As a result, information that was originally non-personal is rapidly being transformed into sensitive personal information. Given the way that privacy laws are designed, as long as no personal information is being collected, there is no legal requirement to seek consent. As a result, there are no legal fetters on the use or processing of this data.
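To make this concrete, here is a minimal, hypothetical sketch of the idea. The two datasets, their column names and their values are all invented for illustration; the only point is that joining two individually unremarkable tables can be enough to link records and assemble a richer profile than either table could support on its own.

```python
# A toy sketch (hypothetical data and column names) of how two individually
# innocuous datasets can be combined into something far more revealing.
import pandas as pd

# Dataset A: anonymised fitness-app check-ins (no names, no IDs)
checkins = pd.DataFrame({
    "postcode":   ["110001", "110001", "400050", "400050"],
    "gym_slot":   ["06:00",  "19:00",  "06:00",  "19:00"],
    "birth_year": [1985,      1991,     1985,     1991],
})

# Dataset B: a loyalty-card dump, also "non-personal" on its own
loyalty = pd.DataFrame({
    "postcode":        ["110001", "400050"],
    "birth_year":      [1991,      1985],
    "favourite_store": ["BookBarn", "FitFuel"],
})

# Layering the two datasets: the combination of postcode and birth year
# is enough to link records that neither dataset could single out alone.
profile = checkins.merge(loyalty, on=["postcode", "birth_year"])
print(profile)
# Each matched row is now a small profile - roughly where someone lives,
# when they exercise and where they shop - sensitive in aggregate even
# though every input field was unremarkable on its own.
```

Real-world re-identification works on the same principle, only with far more layers and far larger datasets than this toy example.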
To summarise, for all that consent has served us well over the years, here are three reasons why it is no longer a feasible means of safeguarding privacy:
1. Fatigue: Consent worked as originally conceptualised because there were limited reasons to collect data and few alternative uses to which it could be put. It was relatively easy for a data subject to appreciate the consequences of providing consent. This is no longer the case. Data is collected, processed and used in more ways than we can comprehend. We consent to this extensive data collection by signing standard form contracts so complex that they are difficult to assess. This, combined with the sheer number of contracts we end up signing, leads to consent fatigue and diminished consent: we end up agreeing to terms and providing consent without actually understanding what we are consenting to.
2. Interconnection: Modern databases are designed to be interoperable – to interact with other datasets in new and different ways. This allows us to layer multiple datasets in combinations that generate new insights but which, at the same time, create privacy implications that no one can truly understand. Given how hard it is to understand the implications of agreeing to a single privacy policy, appreciating the consequences of allowing these various databases to interconnect is beyond the ability of the consent construct.
3. Transformation: Machine learning algorithms can take elements of non-personal data and make connections between them by spotting patterns and building complex personal profiles, transforming them in the process into deeply personal, often sensitive, data. Since there is no need to seek prior consent to collect or process non-personal data, relying on consent as our only protection against privacy violations is ineffective against the harms that can result from the use of these algorithms.
The world is currently suffering from a deep and pervasive data asymmetry. Data subjects have no idea what is being done to their data, where it is being stored and what processes are being applied to it. All that information lies in the hands of the controllers who not only collect as much data as they can, but process it in so many different ways that it has become impossible for us to truly understand what effect that processing is going to have on us. And still our legal system expects the data subject to be able to determine what needs to be done to safeguard his own privacy.
This hardly seems appropriate. How can a data subject be held to the consent he has provided when it is impossible for him to fully understand the implications of giving it?