Tech sector needs a dose of humility
For those in the technology industry, it’s time to stop perpetuating the myth of technology’s benevolence
There’s a video floating around the internet in which Reid Hoffman, the co-founder of social networking site LinkedIn, speaks about the promise of social media. In it, Hoffman casually dismisses concerns around privacy as “old people issues” and argues that the younger generation values transparency and connectivity over data privacy safeguards. The talk is from 2010, when social networks were becoming mainstream. But Hoffman’s indifference to the misuse of personal data is representative of much of the tech industry even today. Simply put, as product builders and technologists, we are wired to be optimistic about the benefits of our creations.
This optimism has held the industry in good stead. Large established technology firms are frequently upended by younger start-ups. Creative destruction, the central tenet of Joseph Schumpeter’s vision of efficient capitalism, is associated more with the technology industry than with any other sector.
In the past decade, the tech sector has spread its tentacles to all areas of the economy. From retail and automotive to finance and healthcare, every possible industry is being disrupted by technology-driven solutions. As a measure of technology’s growth, consider the 10 largest companies by market capitalization: in 2008, Microsoft was the only technology company on the list; now there are seven.
Yet even as technology firms grow in size, the sector continues to operate with a start-up mindset, most notably the ethos of “move fast and break things”. This motto, made famous during Facebook’s early days, accepts mistakes as normal when innovating in a highly competitive environment. When applications were primarily limited to hardware and software, the fallout from these mistakes was fairly contained. But with technology products now touching every aspect of our lives, adherence to this philosophy has damaging consequences.
The recent controversy involving Facebook’s user data is perhaps the most visible of these missteps. The ease with which Facebook’s platform was used to manipulate voters should force a rethink among developers and technology leaders on how the sector operates. For one, we need to stop being casual about user data. Most applications collect user data for two reasons. The first is to offer more personalized solutions, much like Netflix’s recommendation engine. The second is to use this data to target ads that you are more likely to click.
While most users have a vague notion that their usage data is being tracked, few take the time to read through the mind-numbing legalese of terms of service. Fewer still are aware that their activity is linked across services through browser cookies and cross-device tracking. By obscuring details on the type of data collected and how it is used, the tech sector is eroding people’s trust in these services. Services should instead be designed so that consent to data sharing is simpler, with explicit opt-ins.
As technologists, we also need to temper our expectations of Artificial Intelligence (AI). Over the coming years, AI will be integrated into most services in some form. However, these machines are only as intelligent as the data they are trained on. If not properly trained, AI systems have the potential to mimic, and indeed magnify, the biases that exist in our society today. To understand just how damaging these systems can be, consider a few recent examples. In 2015, Google’s photo app mistakenly classified African-American faces as gorillas. One explanation for the appalling error was that the system was probably trained on images that were overwhelmingly white. A year later, Microsoft’s chatbot Tay had to be shut down in less than a day after users trained it to spew racist comments.
A machine’s biases can also have life-altering consequences. China has proposed a social credit system in which every citizen is assigned a score based on their economic and social behaviour. In some of the pilots so far, information as varied as online shopping patterns, the nature of one’s social networks and the types of content shared is used in calculating a person’s “social score”. China may be an extreme example, but as courts, banks and other institutions use intelligent machines to make decisions about your life, the dangers of machines mimicking the cultural biases embedded in our society can be catastrophic. The broader the scope of AI, the more sceptical we, as developers of these programs, should be of the results. Nor should the underlying algorithms be opaque.
Moderating our faith in data and algorithms is necessary but not sufficient. The tech sector is now too large to operate outside the purview of regulation. If banks are tightly regulated because they are deemed too big to fail, the influence that some technology companies exert over our lives is simply too large to ignore. Laws like the European Union’s (EU’s) General Data Protection Regulation (GDPR) are a step in the right direction. By explicitly specifying requirements on data protection and privacy for all individuals in the EU, the GDPR forces tech companies to give consumers control over their own data.
Regulation will slow down innovation, but the tech sector hasn’t shown itself capable of operating without adult supervision. And as much as the industry likes to think of itself as an outsider hacking at the system, it’s perhaps time to accept that it is the system.
Shailesh Chitnis is head of product at Compile Inc., a data intelligence company.
Comments are welcome at firstname.lastname@example.org