Automation will create new needs, new jobs, says Luciano Floridi
What will the world of technology look like 30 years from now? Megatech: Technology In 2050 tries to tackle this question. Edited by The Economist’s executive editor Daniel Franklin, the book is a collection of essays by eminent personalities like Frank Wilczek, Alastair Reynolds, Nancy Kress and Melinda Gates—each one of whom tells their version of the future. An essay by Luciano Floridi, professor of philosophy and ethics of information at the University of Oxford in the UK, talks about Artificial Intelligence (AI). In “The Ethics Of Artificial Intelligence”, he says the threat of monstrous machines dominating humanity is imaginary, but the risk of humanity misusing its machines is real. In an email interview, Prof. Floridi talks about how real, or not, the threat of AI is. Edited excerpts:
Is AI a threat to human jobs?
Yes, in the simple and yet important sense that AI applications are now challenging white-collar jobs everywhere. However, we need to remember that many other jobs are going to be in great demand. Let me point out some evidence. The automotive industry is one of the most heavily (and earliest) automated sectors, and yet automotive jobs in the US have grown since 2009 and are almost back to their 2007 level. In Germany, the demand for engineers is higher than the supply. The same holds true in the UK.
And a report by the World Bank estimates that by 2030 the world will need 80 million healthcare workers, double the number in 2013.
Clearly, things are more complicated. Automation will create new needs and new jobs, and make uneconomical jobs economical. This does not mean that millions of people will not feel the impact of AI. Society needs to intervene to alleviate this radical transition.
Where does AI score over humans besides storing and analysing huge amounts of data?
AI scores over humans not just in obviously data-based jobs, like accountancy, but also in any job that can be transformed into tasks performed by handling data. Driving a shuttle bus in an airport is a good example. The more we devise ways of translating activities that would require intelligence of a human performer into tasks that require no intelligence, only the right sort of data, sophisticated algorithms and engineering artefacts like robot arms, the more such jobs will be replaced by AI solutions.
In a world where even our spending patterns are dictated (or anticipated) by the Web, are we giving away too much information about ourselves to smart technologies?
Whether it is too much or too little is a personal question, and I would argue that the real problems lie one step before and one step after: whether we do this consciously or not, and what society allows people to do with the collected data. Sharing personal information may be a good or a terrible idea; giving it to smart technologies may facilitate and improve our lives, or make us subject to manipulation and even discrimination. We should be aware of our choices on the one hand, and society should protect them, to avoid abuses, on the other. The question in the middle, namely how much information is given away, becomes secondary.
Are there any privacy issues related to AI at the workplace?
Privacy is one of the defining issues of our time. AI will increase its significance, because the more we live a connected life, the more AI will be able to fill the gaps in our profiles, monitor our behaviour, and predict our choices. The trend seems to be unstoppable, technologically. It is the policies and strategies driving it that can be shaped; that is, the point is not what can be done (feasibility) with AI and personal information, but what may (legality) and should (ethics) be done. On the legal and ethical side, we should ensure that the capabilities developed by AI will be at the service of people. In the workplace, this means protecting the privacy of employees, even over and above mere compliance. The possibilities of monitoring and profiling people will increase; it is how we handle them that will make the difference.