Powering artificial intelligence sensibly
People were unnerved when AlphaGo, a computer programme built by Alphabet Inc.-owned Artificial Intelligence (AI) firm DeepMind, beat Go champion Lee Sedol in March 2016. Here’s some more grist to the AI mill.
In a paper published in the journal Nature on 18 October, DeepMind said AlphaGo’s new version, AlphaGo Zero, is now so powerful that it does not need to train on human amateur and professional games to learn how to play the ancient Chinese game of Go. Further, the new version taught itself entirely from scratch, and has gone on to defeat AlphaGo, until now the world’s strongest player of the game.
AlphaGo Zero, according to the recently published paper, uses a new form of reinforcement learning to become “its own teacher”. Reinforcement learning is a training method that needs no human-labelled examples; the system learns by trial and error, guided by rewards and punishments.
The system begins with a neural network (loosely modelled on the brain, hence the name) that knows nothing about the game of Go. It then plays games against itself, by combining this neural network with a powerful search algorithm. The neural network is tuned and updated to predict moves as well as the eventual winner of the games. This updated neural network is then recombined with the search algorithm to create a new, stronger version of AlphaGo Zero, and the process begins again.
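The loop described above can be sketched in miniature. The snippet below, a purely illustrative sketch and in no way DeepMind’s actual system, stands in a simple value table for the neural network, a one-move greedy lookahead for the powerful search algorithm, and a tiny take-away game (players alternately remove 1–3 stones; whoever takes the last stone wins) for Go. The essential shape survives: play games against yourself, nudge the value estimates towards the actual results, and reuse the updated estimates in the next round of games.

```python
import random

def legal_moves(stones):
    # A player may remove 1, 2 or 3 stones, but no more than remain.
    return [m for m in (1, 2, 3) if m <= stones]

def self_play_train(episodes=2000, alpha=0.1, epsilon=0.2, start=7):
    # value[s] estimates the outcome for the player about to move with
    # s stones left: +1 means "I should win", -1 means "I should lose".
    value = {0: -1.0}  # facing 0 stones means the opponent just won
    for _ in range(episodes):
        history = []  # states seen by the player to move, in order
        stones = start
        while stones > 0:
            history.append(stones)
            moves = legal_moves(stones)
            if random.random() < epsilon:
                move = random.choice(moves)  # occasional exploration
            else:
                # Greedy "search": leave the opponent the worst position.
                move = min(moves, key=lambda m: value.get(stones - m, 0.0))
            stones -= move
        # The player who took the last stone won; walk back through the
        # game, nudging each visited state towards the actual result,
        # flipping the sign each ply because the mover alternates.
        outcome = 1.0
        for s in reversed(history):
            v = value.get(s, 0.0)
            value[s] = v + alpha * (outcome - v)
            outcome = -outcome
        # The updated table is reused in the next game: the loop repeats.
    return value

random.seed(0)
v = self_play_train()
```

With 7 stones the first player can always win (take 3, leaving a multiple of 4), and after a couple of thousand self-play games the table discovers this on its own: positions that are multiples of 4 acquire negative values, the rest positive.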
DeepMind hopes that such achievements will bring it closer to its mission of pushing the boundaries of AI, “developing programs that can learn to solve any complex problem without needing to be taught how”.
AI is broadly defined as the effort to replicate human intelligence in machines. Machines are undoubtedly becoming smarter by the day, with advances in machine-learning and deep-learning algorithms, the humongous amounts of data on which these algorithms can be trained, and a phenomenal increase in computing power.
This has, understandably, given rise to the fear that automation and AI will take away our jobs and eventually become more intelligent than human beings. In his 2005 book The Singularity Is Near, American author and futurist Ray Kurzweil predicted, among many other things, that AI will surpass humans, the smartest and most capable life forms on the planet. By 2099, he forecast, machines would have attained equal legal status with humans.
AI has no such superpower. Not yet, at least. However, machines are indeed becoming more intelligent with narrow AI (handling specialized tasks). Google’s AutoML system, for instance, recently produced a series of machine-learning codes that proved more efficient than those made by the researchers themselves.
In June, two AI chatbots developed by researchers at Facebook Artificial Intelligence Research (FAIR), with the aim of negotiating with humans, began talking with each other in a language of their own. Consequently, Facebook shut down the programme; some media reports concluded that this was how sinister AI would look once it became super-intelligent.
However, the scaremongering was unwarranted, according to a 31 July article on the technology website Gizmodo. It turns out that the bots were not incentivized enough to “...communicate according to human-comprehensible rules of the English language”, prompting them to talk among themselves in a manner that seemed “creepy”. Since this defeated the purpose of the experiment, which was to have the AI bots talk to humans and not to each other, the FAIR researchers aborted the programme.
The fact, though, is that a third of people worldwide are now worried about losing their jobs to automation, according to a PricewaterhouseCoopers (PwC) survey. The report acknowledges that while some sectors and roles, and even entire sections of the workforce, will lose out, “others will be created”.
We may take comfort in the fact that companies like Amazon, Apple, Google/DeepMind, Facebook, IBM and Microsoft have founded the Partnership on AI to Benefit People and Society (Partnership on AI), a global not-for-profit organization. The aim, among other things, is to study and formulate best practices on the development, testing and fielding of AI technologies, besides advancing the public’s understanding of AI.
Further, the PwC report says that in an automated world we will still need human workers. The skills needed for the future are not just about science and technology, the report states, adding that human skills like creativity, leadership and empathy will be in demand.
“The secret for a bright future (for individuals) seems to...lie in flexibility and in the ability to reinvent yourself,” says the report. The fact is that individuals, companies and governments will have to understand what AI can and cannot do, as I have argued in my earlier column, and sensibly reskill themselves to face the future.
Cutting Edge is a monthly column that explores the melding of science and technology.