Bengaluru: Kay Firth-Butterfield, Head of Artificial Intelligence and Machine Learning at the World Economic Forum (WEF), has worked for the past three decades as a barrister, mediator, arbitrator, business owner, professor and judge in the UK. She is also co-founder of the Consortium for Law and Policy of Artificial Intelligence (AI) and Robotics at the Robert E. Strauss Center, University of Texas. In an interview, she shares her thoughts on why ethics should be part of AI design and why WEF believes that AI will create more jobs than it will displace. Edited excerpts:

There are multiple reports that talk about advances in automation and AI impacting thousands of jobs and there are an equal number of reports that say the scare is unwarranted since AI will also create new jobs. Please share your perspective.

Our colleagues at the Forum have reported that the prognosis for jobs is good. More jobs will be created than are lost from the AI revolution. The challenge for us is to ensure those jobs are open to all and so re-skilling will be necessary. Also, the job creation might not run in sync with the job loss and that will be an area companies and countries will need to work on.

The world is going through a workplace revolution that will bring a seismic shift in the way humans work alongside machines and algorithms, according to new research by WEF. By 2025, more than half of all current workplace tasks will be performed by machines, as opposed to 29% today. Such a transformation will have a profound effect on the global labour force. However, in terms of the overall number of new jobs the outlook is positive, with 133 million new jobs expected to be created by 2022 compared with 75 million that will be displaced. Based on a survey of chief human resources officers and top strategy executives from companies across 12 industries and 20 developed and emerging economies (which collectively account for 70% of global GDP), the report finds that 54% of employees of large companies would need significant re-skilling and up-skilling in order to fully harness the growth opportunities offered by the Fourth Industrial Revolution.

At the same time, just over half of the companies surveyed said they planned to reskill only those employees in key roles, while only one third planned to reskill at-risk workers. While nearly 50% of all companies expect their full-time workforce to shrink by 2022 as a result of automation, almost 40% expect to expand their workforce generally, and more than a quarter expect automation to create new roles in their enterprise. Within the set of companies surveyed, respondents predicted a decline of 984,000 jobs and a gain of 1.74 million jobs between now and 2022.

Extrapolating these trends across those employed by large firms in the non-agricultural workforce of the 20 economies covered by the report suggests that 75 million jobs may be displaced by a shift in the division of labour between humans, machines and algorithms, while 133 million new roles may emerge that are more adapted to this new division of labour.

Machine Learning is making algorithms more opaque and powerful. As an example, despite initiatives like Explainable AI, most unsupervised deep learning algorithms remain black boxes. How is the WEF working towards building a consensus to make AI more transparent?

All of our projects consider and include the big ethical issues of AI – safety, privacy, accountability, transparency, and bias. For example, we have just created guidelines with the UK government and a multi-stakeholder group around procurement of AI solutions by government. These guidelines, released by the UK in September 2019, build upon the UK’s data ethics framework but go further. They require procurement officers to think about issues such as the responsible design, development and use of AI, including transparency. A multi-stakeholder team worked with the UK government on these guidelines. It was important to get voices from business, government, startups and NGOs to provide their perspectives.

Where, in your perspective, does India fit when it comes to AI implementation? Would you consider the Indian government and NITI Aayog's AI policy steps in the right direction?

India is focusing on leveraging AI to solve its most pressing societal challenges in sectors like healthcare, agriculture and smart cities. The government's AI policy takes steps in the right direction. We need to focus more on building research and development capacity in AI, and also build a multi-stakeholder model engaging the public and private sectors and academia together to solve the biggest societal challenges in India.

Compromising data and privacy are two key issues that come to the fore when talking about AI. Are companies and governments doing enough to address these issues?

Governments and businesses all agree they could do more to address the challenges of the fourth industrial revolution. The good news is that many are taking a proactive approach. I mentioned the procurement work with the UK above; we are about to scale it to the UAE, Bahrain, Colombia and more. Most governments we talk to are interested in this project, which allows the development of an AI economy whilst letting the government tell companies, ‘this is what we expect from you’. We have just released a white paper which helps countries to create a National AI Strategy, which we believe is essential to success in the use of AI by countries. There has been massive interest from companies in our Empowering AI Leadership project. In this work we have created a toolkit for the use of boards, so that they can correctly help the C-suite to use AI for the good of the company and minimise harm to the company and its customers.

You believe that to realise the full potential of AI, we must regulate it differently. How should individuals, companies and governments go about this task?

It is often said that Silicon Valley moves fast and breaks things. Whilst this may result in a small number of highly successful companies and entrepreneurs, it doesn’t necessarily help society. In September 2019, Brad Smith, co-chair of the Forum’s Global AI Council, suggested that this was not always a good thing. Another saying is that regulation impedes innovation. I would say, only if the regulation is poorly thought through or executed. On the other hand, regulation is there to protect the public. Thus, for a successful 21st century, and beyond, we need to find a way to accurately balance the need of governments to protect their citizens against their desire to help their citizens using AI.

In this context, what's your take on initiatives like OpenAI?

It is important that we all share learnings in AI. AI should be used for the benefit of all and not just a privileged few. Thus, helping everyone to be able to use AI is very worthwhile, provided they are also given ethical training – for example, through the work of AI4All, a non-profit which helps girls from disadvantaged backgrounds to learn to use AI.

Do you believe AI will ever become sentient?

I think that very many people have different answers to this question. The truth is that we are all guessing when we answer it. What is probably better said is that because some of the world’s leading AI scientists believe this is true, it is all the more important that we put foundational governance measures in place now. If we try to leap over that step and simply start using AI to solve our difficult problems, we might be setting ourselves up for more difficult problems by not acting now. Professor Russell has been doing important thinking in this area.

How does your background in law help you deal with AI scientists who mostly have a different world view, or believe that they must primarily focus on the technology while policy is dealt with separately by other academics and governments?

Four large parts of the job of a lawyer are negotiation, persuasion, being able to see the point of view of others, and helping parties come together and understand each other. In the work which we do, I certainly find myself using all of those skills. I think it is important to also note that many universities are now beginning to teach the social impact of AI to their scientists, and so it is now less a matter of two perspectives than it might have been in 2014 when I started working in this area. At the Forum we have a project to help professors of AI from around the world come together and share curricula on the social impact and ethics of AI, so that they can include such classes and courses for their students. It is important to think about the ethical implications of AI, especially when you are designing it. The more diverse the set of voices creating AI solutions and tools, the better off we will be. AI should reflect the diverse society that we have.
