Artificial vs human intelligence: An existential race is underway
Summary
- Whether AI exterminates human civilization will depend on the extent to which we allow such technology to enslave us
In our joint piece in May 2023 (‘How should humans respond to advancing artificial intelligence?’), we explored the power and reach of AI. At the same time, we expressed concern about its impact on human creativity, about decision-making left unregulated by human conscience, and about whether the human time saved by AI adoption would be put to productive use.
We also said that human beings were survivors and would probably outlive the predicted AI-doomsday scenarios. But the release of the movie Oppenheimer brought back thoughts of an ‘extinction risk’. Its director Christopher Nolan said in a Financial Times interview that AI researchers felt they were facing their own Oppenheimer moment. Just as the physicist swung between his desire to advance theoretical physics by building a practical prototype and his fear of the bomb’s potential to ‘destroy the world’, AI proponents are excited by its prospects but shudder at its latent ability to wipe out human civilization.
Risk management principles hold that even an exceptionally low probability of a large cataclysmic event must be taken seriously, since the expected value of the loss remains significant. Extinction risk, therefore, demands serious consideration even if optimists assign it a low probability of occurrence.
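As a purely illustrative calculation (the figures here are assumptions, not estimates), the logic is that of expected value: if p is the probability of a civilizational catastrophe and L the loss should it occur, then

\[
\text{Expected loss} = p \times L
\]

Even with p put at, say, one in a thousand, an L valued at anything approaching the whole of humanity's future keeps the product far too large to dismiss.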
While implausible but possible risks to our existence loom in the distance, the proximate risks are here for us to address. For India, one of these has to do with employment. The country’s booming service exports depend extensively on cheap labour for low-tech jobs such as software code testing, image and content creation, interpreting lab reports, etc. AI has begun taking over these jobs. Of course, AI will also give rise to new jobs. For example, generative AI has created a new field of ‘prompt engineering’, which refers to the age-old skill of asking the right questions to get the right answers. But will that be enough?
Déjà vu, eh? Andrew Haldane, former chief economist at the Bank of England, has documented (bit.ly/3OLfyYb) that previous industrial revolutions did indeed affect blue-collar workers, who did not always benefit, fully or immediately, from technology adoption by companies. Periods of technological transition were often lengthy as well as painful.
Now project these effects on a global scale and troubling possibilities emerge for developing countries. Adopting AI to replace repetitive and lower-end jobs is a serious hurdle to economic convergence between the developed and developing world, just as the covid pandemic has been and climate change continues to be. AI might be a boon for countries with fast-ageing populations and falling productivity; indeed, with anti-immigrant attitudes hardening in several advanced economies, AI looks like a godsend for them. But AI adoption would substantially reduce their demand for international workers and outsourced work, with ripple effects on developing countries. Knowledge workers in developing countries could be rendered redundant both locally and globally, a double whammy, and the resulting risk of socio-economic dislocation is non-trivial. That said, employment in a large country ultimately boils down to numbers: jobs displaced versus new jobs created. The time horizon, too, matters in a democratic polity like ours.
The short run will throw up issues that must be tackled for the global good, lest they spawn other social problems. Over the long run, driven by the human instinct for survival, we may be able to adapt to the negative externalities of technology while leveraging its benefits for the common good. The question then is: how long is the short run?
If employment is one massive concern, another is narrative control and the perpetuation of existing biases. Asked how India could leverage its demographic dividend, OpenAI’s generative AI ‘chatbot’ ChatGPT responded that many young people in India were unable to find jobs, leading to social unrest. At its current level of sophistication, ChatGPT produces drivel that at best recapitulates much of the pointless negative English-language commentary on India in the global media. Disconnected from reality, and lacking the lens of an objective human with the wherewithal to think across time and space, the chatbot does not acknowledge alternative accounts of progress in India, be it on literacy, school enrolment, social mores, or employability and skilling.
This is the biggest risk of AI-generated intellectual content. It is going to perpetuate a predominant narrative—the globally dominant view. Its ‘search engine’ may be set to present a particular slant, as is already documented in the case of ChatGPT, which is said to carry the left-leaning biases of its makers (bit.ly/3s15xx6).
AI is learning at such a rapid pace that it could soon run out of real-world data to rely on and be left to learn from its own generated content. Generating synthetic data to create ‘synthetic intelligence’ is amusing at best and alarming at worst. If the supremacy of the view of ‘the one who holds the code’ weren’t enough, there is ample evidence that AI often lies (‘hallucinates’ is the euphemism used by the AI industry). The process by which AI generates new content is based on related text in ‘latent spaces’, so it can throw up related but wrong information (bit.ly/43VUJgL). Dwell on the implications of all this and one realizes that we risk the homogenization of entrenched social, political and economic narratives, with the added problem of potential prevarication over data and facts that users may take as true.
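To make the ‘related but wrong’ point concrete, here is a deliberately simplified sketch; it is not the actual mechanism of ChatGPT or any other model, and the statements and vectors are invented for illustration. It ranks a few statements by how close they sit to a query in a toy latent space, showing that a factually wrong statement can score almost as high as the correct one.

```python
# Hypothetical illustration: "related" is not the same as "right" in a latent space.
# We hand-assign toy vectors to a few statements and answer a query by cosine
# similarity; nothing in the similarity score distinguishes true from false.
import numpy as np

statements = {
    "The Eiffel Tower is in Paris.":  np.array([0.90, 0.10, 0.00]),
    "The Eiffel Tower is in Berlin.": np.array([0.88, 0.12, 0.05]),  # related but wrong
    "Cricket is popular in India.":   np.array([0.00, 0.20, 0.95]),
}

# Pretend embedding of the query "Where is the Eiffel Tower?"
query_vec = np.array([0.89, 0.11, 0.02])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank statements purely by closeness to the query in the toy latent space.
ranked = sorted(statements, key=lambda s: cosine(statements[s], query_vec), reverse=True)
for s in ranked:
    print(f"{cosine(statements[s], query_vec):.3f}  {s}")

# Both Eiffel Tower statements score nearly identically (~0.999), which is one
# intuition for why outputs can be topically related yet factually wrong.
```

The point of the sketch is only this: proximity in a latent space captures relatedness, not truth, so a system that generates from such proximity can sound plausible while being wrong.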
In sum, whether AI exterminates human civilization will depend on who we are as a species and to what extent we allow it to enslave us.
These are the authors’ personal views.
V. Anantha Nageswaran & Aparajita Tripathi are, respectively, chief economic advisor, Government of India; and consultant, ministry of finance.