Fashionably wrong on technology
Although I recently joined a think tank, I do not share the worry of Arthur Schlesinger, the adviser to President John F. Kennedy. As Luigi Zingales recounts in A Capitalism For The People, Schlesinger recalled staying silent during cabinet discussions on the Bay of Pigs invasion because he feared being cut out of the inner circle. That fear precludes honest discussion today on many issues that will affect society for a long time to come.
There is a recent interview of Arvind Subramanian, the chief economic adviser to the government of India, in the Financial Times. In the course of the interview, Subramanian remarks: “ ‘Hyper-globalisation is dead, long live globalisation,’ is how I like to put it. If you look crudely at the post-war period, 80 per cent of globalisation is driven by technology, 20 per cent by policy. And that 80 per cent, you can’t stop.” It can be stopped and there is a good case for it.
At a very philosophical level, many things in the world are processes over which humans have little or no control. We are mere cogs in the wheel. But modern societies and governments are organized on the principle that humans are in charge. They choose and decide. Technology is deployed and advanced by political, commercial and scientific leaders making choices; it does not advance by itself. Some technologies have been shelved and some abandoned because their negative externalities were judged to vastly exceed their private, or even public, benefits.
Several examples will help. The decision by President Richard Nixon to open up to China was a choice. The decision to admit China into the World Trade Organization even before it became a “market economy” was a choice. The decision to repeal the Glass-Steagall Act was a choice. All of them had consequences. Financial and technological innovations amplified the consequences greatly. Some of the decisions were made without awareness of their fallout on communities, on families and on society. Only economic and commercial considerations, at the aggregate level, were the decisive factors.
Bill Gates was right to propose taxing robots. Robots themselves do not pay taxes, obviously, but the companies behind them do. The tax could even be punitive enough to stop some of the research on, and advances in, the technology. That might be a good thing. It is being careful about consequences. It is being honest and humble about forces one is about to unleash, forces about which one has no idea and over which one has no control. It would be a recognition of human limitations.
A recent Bloomberg article screamed, “High-tech hubs were among the five metropolitan statistical areas where the gap between the highest- and lowest-income households expanded the most: two in California, San Francisco and San Jose, as well as Austin and Seattle.” Predictably, Larry Summers has objected to Gates’ proposal. Political correctness prevents many from admitting their inability to comprehend the present and the future, especially with respect to such obviously disruptive developments. Tyler Cowen’s article is another example: he sides with the “correct consensus” despite himself advancing all the useful and important arguments against precisely such a stance. Perhaps Cowen, Subramanian and Summers should read an article that appeared in Quartz last month, headlined “No one is prepared to stop the robot onslaught. So what will we do when it arrives?”
The article notes, “In February, the European Union (EU) did consider rules that, while not stopping the robots, would have the force of discouraging automation by compelling companies to pay compensating taxes and social security payments for jobs that their robots wipe out. But, EU parliamentary members balked even at this, adopting much milder language that exacts no retribution on the robots or the companies that use them. A pivotal dynamic in the vote seemed to be a reluctance on the part of the deputies to expose themselves to possible ridicule as Luddites.” That is the problem.
Andrew Feenberg, who teaches the philosophy of technology at Simon Fraser University in Vancouver, says: “Doing trade deals and robotics without consideration of the people displaced is insane. The backlash is understandable. Societies do have choices with respect to technology.” Phillip Rogaway of the computer science department at the University of California, Davis, agrees with him: “Unbridled technological optimism undermines the basic need for social responsibility.”
Pictures of scores of retail outlets closing in downtown New York tell a sorry tale. The bill for converting vibrant streets into ghost towns will have to be paid, and unfortunately, it won’t be only Jeff Bezos who pays it. It is a classic case of market, economic and social failure. Eleven signatories, most of them Nobel laureates, signed the Russell-Einstein manifesto in 1955. Its final passage rings true in the era of no real intelligence: “We appeal, as human beings, to human beings: Remember your humanity, and forget the rest. If you can do so, the way lies open to a new Paradise; if you cannot, there lies before you the risk of universal death.” Universal Basic Income is no match for “universal death”. In sum, it may be fashionable, but it is wrong, to say that technology cannot be stopped.
V. Anantha Nageswaran is senior adjunct fellow (geoeconomics studies) at Gateway House: Indian Council on Global Relations, Mumbai. These are his personal views.
Read Anantha’s Mint columns at www.livemint.com/baretalk. Comments are welcome at firstname.lastname@example.org