If Facebook users can learn one important lesson from the Cambridge Analytica incident, where the data of nearly 50 million Facebook users was allegedly used deceptively to manipulate the US elections, it is this: There’s no such thing as a free lunch when it comes to sharing personal data like images, posts and preferences (likes, dislikes, etc.) on social networking sites.
Though these sites and apps are purportedly free because they do not charge users, it is a no-brainer that they get their return on investment (RoI) from the mountains of personal data that can be mined with the help of sophisticated algorithms—both to enhance user experience and sell relevant advertisements.
There is a lesson for companies that mine this data too—that they should not shirk their responsibility of protecting users’ personal data from third-party apps.
Further, even though the Facebook-Cambridge Analytica issue pertains to 2014 (the social networking site’s architecture has undergone substantial changes since then) and the 2016 US elections, it does raise questions about how companies capture, analyse, indefinitely store, and share data with data brokers, marketers, and social media companies. That this data can be misused by third-party apps to influence elections (as the Facebook issue demonstrates), or by governments to identify, spy on and arrest those who are inimical to the interests of nation states, only makes the issue of data protection more critical.
On the individual front, gullible users willingly share their personal data with these sites without understanding the consequences. However, even knowledgeable users face a conundrum when signing up for such sites and apps.
For instance, whenever you download an app, it tells you all that it is capable of. You have to tap the agree button if you want to avail of its services.
Consider this. The Facebook app, among other things, tells you that it can directly call phone numbers; read your phone status and identity; read your text messages (SMS or MMS); take pictures and videos; record audio; record your approximate (network-based) and precise (GPS and network-based) location; modify your contacts; read call logs; read your contacts; add or modify calendar events and send emails to guests without the owners’ knowledge; read calendar events, including confidential information; read, modify or even delete the contents of your memory card; and add or remove accounts.
The same is the case for most of the thousands of apps available on Google Inc.’s Play store and Apple Inc.’s App Store, including the Twitter, WhatsApp, Truecaller, Skype, YouTube, Google+ and Gmail apps.
Some of you may still wonder what the fuss is all about. After all, your smartphone and mobile apps can make you a smart and efficient employee with all the information they collect as a trade-off, similar to how websites and e-commerce sites provide better services with the help of cookies, small pieces of data stored by your browser that track your online behaviour and predict your next move with great accuracy.
Besides, ad networks may gather the information apps collect, including your location data, and may combine it with the kind of information you provide when you register for a service or buy something online to send you targeted ads that may be relevant to someone with your preferences and in your location.
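The tracking described above can be sketched in a few lines. This is a hypothetical illustration using Python’s standard library, not any site’s actual code: a server issues a long-lived cookie carrying a random visitor ID, and every subsequent request from the browser returns that ID, letting the site, and any ad network the ID is shared with, link separate page views to one person.

```python
from http.cookies import SimpleCookie
import uuid

def set_tracking_cookie() -> str:
    """On a first visit, assign the browser a random, persistent visitor ID
    (hypothetical sketch of how a tracking cookie is issued)."""
    cookie = SimpleCookie()
    cookie["visitor_id"] = str(uuid.uuid4())
    cookie["visitor_id"]["max-age"] = 60 * 60 * 24 * 365  # persists for a year
    return cookie.output(header="Set-Cookie:")

def read_visitor_id(cookie_header: str) -> str:
    """On every later request, the browser sends the same ID back,
    so page views can be tied to the same visitor."""
    cookie = SimpleCookie()
    cookie.load(cookie_header)
    return cookie["visitor_id"].value
```

The point of the sketch is that no name or email is needed: a single random identifier, replayed by the browser on every request, is enough to build a behavioural profile over time.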
However, as a 2013 Carnegie Mellon University (CMU) study noted, even if “no user of the app should be surprised by any of this (collection of data)", the problem arises when users do not pay much attention to these warnings, or lack the context to understand what resources are reasonable for an app to use.
For instance, you will not be able to access most of the features of any such site, or any app for that matter, without parting with your personal data. Second, the stated privacy policies of these sites run into scores of pages. So even if you were to manage to read a few pages, you would invariably end up signing on the dotted line simply because you see value in being on that site or using that specific app, even if that means compromising your privacy to an extent.
Individuals, for instance, have the option of limiting the use of third-party apps on a site like Facebook. However, this is easier said than done, what with your data being shared by countless other apps in the years that you have been online.
For instance, you may choose to delete Facebook, but that is not a viable solution for most people, given that there are about 250 million Facebook users in India. However, you can certainly take control of your privacy settings, though disabling all platform apps (like FarmVille, Twitter, or Instagram) will mean that you will not be able to log into other sites using your Facebook login.
Herein, though, lies the trade-off of the “privacy by design" as opposed to “consent" model.
Privacy by design effectively means that privacy principles such as preventing harm, transparency, choice, etc., are built into the architecture of the product itself.
Thus, businesses need to build privacy and its related principles into the product from the outset, not as an afterthought. Further, given that privacy by design presumes that the user is central to the entire system, meaningful consent, and a real ability to withdraw that consent, is another fundamental premise.
Further, in many cases, such as Aadhaar, where the matter is sub judice, quasi-government bodies will consistently pressure you to sign up, failing which you will have to run to the courts and queue up for justice. So you may end up signing up for these services, either because you feel helpless to fight the state or simply do not have the time to fight the system. India desperately needs a separate Privacy Act. The Right to Privacy, as enshrined in the Constitution, does not suffice when it comes to information security.
India, on its part, also lacks a comprehensive policy on data protection or online security: the Information Technology Act (amended in 2008) and the rules notified under it in 2011 are not adequate.
The Electronic Frontier Foundation advocates that “tech companies can and should do more to protect users, including giving users far more control over what data is collected and how that data is used. That starts with meaningful transparency and allowing truly independent researchers—with no bottom line or corporate interest—access to work with, black-box test, and audit their systems".
The fact is that data protection cannot be an individual’s lone responsibility. Companies and governments have to proactively create a framework that enables this.
Globally, the European Union (EU) is the most stringent when it comes to data protection. After four years of preparation and debate, the General Data Protection Regulation (GDPR) was finally approved by the EU Parliament on 14 April 2016. The enforcement date is 25 May 2018, and companies that do not comply with this law may face heavy fines. GDPR replaces the Data Protection Directive (95/46/EC) and “was designed to harmonize data privacy laws across Europe, to protect and empower all EU citizens’ data privacy and to reshape the way organizations across the region approach data privacy", according to the GDPR portal.
Countries like the US, Canada, Australia and Thailand have similar—but not as stringent—laws. What about India?
In a 31 December 2017 white paper published by an expert committee appointed by the Indian government, the IT (Reasonable Security Practices and Sensitive Personal Data or Information) Rules, 2011 (SPDI Rules), which were issued under Section 43A of the IT Act, were found to be inadequate when it came to data protection. “It is thus necessary to make a comprehensive law to adequately protect personal data in all its dimensions and to ensure an effective enforcement machinery for the same," the paper notes.
The positive fallout of the Facebook data compromise is that the Indian government, too, is firming up its long-term strategy to secure data of citizens, especially those using social media, according to a 23 March report in Mint.
That said, as sophisticated algorithms increasingly enhance user experience and the bottom line of firms, users must not let their guard down since these very algorithms can enable, as the EFF puts it, “unparalleled invasions of privacy".