4 min read · Updated: 28 Jan 2019, 10:26 PM IST · Siddharth Pai
While some are hell-bent on hurting robots, it seems as if others want to humanize them completely
Kai-Fu Lee, a Taiwanese-born former executive at Apple, Google and Microsoft, now runs a venture capital firm that has funded more than 140 artificial intelligence (AI) startups, many of them in China. Extending the analogy that “data is the new oil”, he claims that China is the “Saudi Arabia” of data, given its billion or so people who are online and who have very few privacy rights. In a recent interview with CBS, Lee said he expected that advances in AI could displace about 40% of the world’s jobs within about 15 years. In a refrain repeated by many, he added that it is not just blue-collar work that is displaceable, but many white-collar jobs as well.
Lee is more circumspect when it comes to something he terms “Artificial General Intelligence”, or AGI. When asked by CBS when we will know whether a machine can actually think like a human, he responded that the bar keeps rising: basic repetitive tasks are being conquered by large-scale “Big Data” solutions that sift through ever-increasing amounts of data to find usable patterns, whether for self-driving cars or for other applications. A few years ago, he says, one would have said: “If a machine can drive a car by itself, that’s intelligence.” Now, Lee says, that is not enough. When it comes to AGI, however, he says it will not be possible within the next 30 years, and maybe never.
Lee says this is because he believes in the sanctity of the human soul, and because there is much about humankind that we do not understand. He says love and compassion are not explainable in terms of neural networks and computational algorithms, and he currently sees no way of solving for them. True enough. Joseph Campbell, famous for his work in comparative religion, once said: “I have bought this wonderful machine—a computer. Now, I am rather an authority on gods, so I identified the machine—it seems to me to be an Old Testament god with a lot of rules and no mercy.”
Love and compassion seem absent in computer-algorithm-run marketplace organizations such as Uber, Ola and others. Drivers rarely see a human being from the company; their entire contact with the firm is limited to the app on their smartphones that links them to the taxi-hailing business. They are, in effect, working for robots—or algorithms. These algorithms are ruthlessly efficient and devoid of feeling. If the numbers don’t add up, you’re kicked out.
Despite this, a new survey presented at the recent World Economic Forum in Davos found that most workers remain optimistic that at least some aspects of their jobs are too advanced and complex for robots to replicate. About 80% of respondents believed that only another human being could perform most or all of their job. Responses varied by country but, nonetheless, people’s overall outlook towards advanced technology remains sanguine.
In my opinion, this may be because humans the world over are only vaguely aware of technology’s incessant march, much as they are only vaguely aware of the risks of unbridled capitalism. We accept both almost automatically, in return for the supposed conveniences that they bring. Most importantly, we have no way to “individuate” a technological advance such as the algorithmic middle manager that dispatches rides at taxi-hailing marketplaces, that is, no way to pin it on a single person or group of persons.
Interestingly, however, things begin to change when technological advances find expression in “individuated” humanoid robots. When such individuation occurs, it would appear that love and compassion are not the only characteristics of the human soul: cruelty also begins to show itself as an integral part of the human condition.
Evidently, once people can focus their vague fears about technology onto a specific focal point that resembles a human being in some way, their behaviour begins to change in unpredictable ways. This twist appears to be playing itself out in the way human beings sometimes treat humanoid robots, which are built to resemble us.
A New York Times article last week discussed how people lash out violently at humanoid robots. It claims that such violence is a global phenomenon and, to make this point, cites instances of a beheaded robot in Philadelphia and a battered robot in Osaka, as well as a teaching robot in Moscow that was beaten to a pulp with a baseball bat while it pleaded for help. The paper quotes Agnieszka Wykowska, a cognitive neuroscientist at the Italian Institute of Technology and the editor-in-chief of the International Journal of Social Robotics, who says that human antagonism towards robots often resembles the ways in which humans hurt each other. The abuse of humanoid robots, she said, might stem from the tribal psychology of “insiders and outsiders”, where insiders violently repel infiltration attempts by outsiders. Maybe, deep down, we are more like Campbell’s “Old Testament god” than we realize.
While some people are hell-bent on hurting robots, it seems as if others want to humanize them completely. Saudi Arabia granted citizenship to a robot some months ago. Other people want to marry robots. One wonders what matrimonial ad columns, or allegations of spousal battery, might look like 40 years from now.
Siddharth Pai is founder of Siana Capital, a venture fund management company focused on tech