The economics of privacy in the digital world
The experience of recent years has shown that economic efficiency cannot be prioritized over privacy rights
Our relationship with companies like Google, Facebook and Amazon is at once sweet and sour. These services are made irresistible by their personalized user experience: the algorithms at the heart of these companies guide you to what is most likely to interest you. But the world is now waking up to the harsh fact that the same process also raises privacy concerns as it encroaches on hitherto intimate areas of our lives.
Information capture sits at the heart of important parts of the digital economy. The transaction in online services is radically different from what we usually encounter: We voluntarily pay in personal data rather than cash. This unique contract creates several complications as far as privacy goes—though this newspaper strongly believes that commercial logic can never be enough to undercut citizen privacy in the digital world.
The digital exchange—personal information for free access to platforms—means that "free" services come at the cost of privacy. Hal Varian, now chief economist at Google, argued in 1996 that customers are better off sharing information about themselves with marketers because it makes life easier: junk email and unsolicited phone calls that annoy consumers become less of a nuisance when companies can target them better through data analysis.
However, the potential for personal data to be abused—for discrimination, manipulation and censorship—is a huge cause for concern. The new world of large server farms plus algorithms that sift through data to seek patterns can sometimes make us the victims of targeted manipulation—though to claim people are gullible is an assessment often based on poor respect for the intelligence of the average individual.
So why do people share their data with the digital behemoths? The simple answer is that they choose to do so. In some cases, consent to collect information is presumed, and the degree of privacy a user enjoys is a function of self-help: you can disable some surveillance if you can figure out how. In other cases, customers explicitly agree to privacy policies that essentially define the control they don't have. Whether or not to consent is a complicated question, but users succumb to instant gratification, undervaluing their privacy in the process.
The potential for individuals' personal data to be used against them is the defining feature of the contemporary privacy debate. The puzzle starts with how companies come to hold an ongoing right over private data in the first place. Why is the default rule that the company has the right over user data, which the user can opt out of, rather than the user owning the data and the company having to solicit an opt-in? Economists such as Richard Posner have defended this arrangement on utilitarian grounds. Since businesses value the data more, imposing onerous "opt-in" rules is a significant transaction cost. This could jeopardize the ability of digital companies to provide services, and significantly degrade the user experience. The efficient solution, on this view, is to award initial ownership of the data to the business, but let users opt out if they want to.
The experience of the past decade and change has shown that this argument has several flaws. For one, consent is meaningless unless it is informed consent, and the structure of digital services and apps today means that, for the average user, it often is not. Admittedly, the extent to which fully informed consent would change user choices is debatable—but a granular opt-in model instead of the default opt-out would offer greater protection regardless.
For another, looking at the issue more broadly, the characterization of personal data as property clouds its essential nature. If you sell your car, the buyer cannot legitimately influence your life after the transaction concludes. Personal data, by contrast, can be used to manipulate people in ways they do not recognize at the time of sharing it. Current systems—designed to facilitate a one-time transfer of personal information to the digital company, with no subsequent say for the individual in how the collected data is used—do not take this into account.
In this context, the framing of intellectual property rights is a good example of an encumbrance to trade that works for everyone. It provides the necessary incentives to the producers, and balances progress with the public distribution of intellectual goods. The same technologies that enable distributed rights management could enable privacy protection that travels with the data.
Traditionally, private property has been the main barrier to privacy invasion. As monitoring and recording capabilities are embedded in our surroundings, there is a need to redefine the private spaces that will not be infringed. Governments and businesses should start by adopting privacy-by-design principles in their data-accumulation practices. Governments and supreme courts all over the world will have to rethink their stand on citizens' privacy and control over their data, and on the meaning of words such as "property" and "consent" in relation to personal-data sharing. The drive to accumulate data alone cannot dictate the public debate on privacy.
Can digital companies continue to innovate if rigorous data privacy laws are put in place? Tell us at email@example.com