Algorithms that can teach themselves without relying on training data will address our concerns around privacy and historical bias
Though we have witnessed significant improvements in artificial intelligence over the past few years, these gains have been achieved by feeding algorithms vast databases of existing information and guiding them to infer patterns from the data that can be applied to future decision making. Consequently, none of these so-called “artificial intelligences” is capable of learning tabula rasa (from a clean slate); instead, each learns how to solve problems based on how humans have solved them before. Humans, on the other hand, can learn from a completely clean slate—acquiring knowledge from experience and perception without depending on inherited knowledge or memory. This has long seemed a uniquely human condition, one that we believed machines could not emulate.
The trouble is that this puts significant constraints on how artificial intelligence can be used. Anyone who wants to apply artificial intelligence to a new area not only needs the human expertise required to program these algorithms but must also obtain a large enough database of relevant information with which to train these neural networks. These are not trivial constraints, and it has long been the goal of AI research to bypass training altogether by developing algorithms that can learn from experience in a rule-based environment.
An article in the journal Nature last week offered initial evidence that we have finally managed to develop the world’s first tabula rasa algorithm.
The paper was published by DeepMind, the same team that had built AlphaGo—the computer program that defeated the human world champion at Go. In its original version, the AlphaGo neural network had been fed thousands of amateur and professional games which it analysed to detect patterns in the ways in which humans played and won.
Using this knowledge and the full extent of its vast and perfect memory, AlphaGo was able to defeat Lee Sedol, the 18-times world Go Champion. The latest version (called AlphaGo Zero) doesn’t use training data. Instead, it uses a novel form of reinforcement learning that operates using methods epistemologically similar to human learning.
AlphaGo Zero started out as a neural network with no prior knowledge of the game. It was made to play against itself, constantly tuning and updating its logic, each iteration incrementally improving the quality of its gameplay while producing a more responsive version of the neural network. Using this self-learning technique, the artificial intelligence is no longer constrained by the limits of human intelligence or the volume of data collected from past games; instead, it learns, tabula rasa, from the strongest Go player in the world—itself.
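The self-play idea described above can be illustrated with a toy sketch. This is not DeepMind’s method—AlphaGo Zero pairs self-play with deep networks and Monte Carlo tree search—but a minimal tabular example on the simple game of Nim, showing how an agent with no prior knowledge can improve purely by playing against itself.

```python
import random

# Toy self-play reinforcement learning on Nim (illustrative only, not
# AlphaGo Zero's actual algorithm). One pile of stones; players alternate
# taking 1-3 stones; whoever takes the last stone wins.

PILE, ACTIONS = 10, (1, 2, 3)
Q = {}  # Q[(stones_left, action)] -> estimated value for the player moving

def choose(stones, epsilon):
    """Epsilon-greedy move for the player about to act."""
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < epsilon:
        return random.choice(legal)
    return max(legal, key=lambda a: Q.get((stones, a), 0.0))

def self_play_episode(alpha=0.5, epsilon=0.1):
    """Both sides share one Q-table: the agent literally plays itself."""
    stones, history = PILE, []
    while stones > 0:
        a = choose(stones, epsilon)
        history.append((stones, a))
        stones -= a
    # The player who made the last move wins; walking backwards through
    # the game, the outcome alternates between +1 and -1 for each mover.
    reward = 1.0
    for state, action in reversed(history):
        old = Q.get((state, action), 0.0)
        Q[(state, action)] = old + alpha * (reward - old)
        reward = -reward

random.seed(0)
for _ in range(20000):
    self_play_episode()

# Game theory says the winning first move from 10 stones is to take 2,
# leaving the opponent a multiple of 4; self-play tends to discover this.
best = max(ACTIONS, key=lambda a: Q.get((PILE, a), 0.0))
print(best)
```

The key point mirrors the column’s argument: no game records, human strategies or historical data enter the loop—only the rules of the game and the outcomes of the agent’s own play.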
Within just three days, AlphaGo Zero achieved enough mastery in the game to be able to comprehensively defeat the earlier version of AlphaGo by 100 games to 0. By learning from scratch and constantly playing against itself, it had managed to accumulate thousands of years of human knowledge within a few days and in the process had uncovered a number of unconventional strategies and novel techniques never before seen in the history of Go.
This new machine learning technique is promising for the ways in which it could be applied to other structured problems such as protein folding, genomic research and the search for revolutionary new materials. But far more interesting are its implications for law and regulation.
The biggest concern with modern artificial intelligence is that we need historical data to train the algorithm. These data sets either contain data that could be viewed as personal, or data that could, through the operation of the algorithm, be transformed into sensitive personal information. This makes the collection and use of data sets for training purposes quite challenging under current data protection law, which does not allow data to be used for any purpose without the prior consent of the data subject.
In addition, there is a concern that training data is rife with the sorts of human biases that are unavoidable in historical data. When algorithms are trained on such data, those biases get transferred into the algorithm, perpetuating and even exaggerating bias in data-driven decision making. This defeats the very purpose of having incorruptible machines take decisions instead of fallible humans.
With tabula rasa algorithms, there will be no need to use historical data, significantly reducing both these concerns and making it possible to implement algorithmic decision making without worrying about privacy or the risk that human bias will seep into these new machine learning algorithms. We might, finally, be at the threshold of true artificial general intelligence. And it’s not as scary as we thought it would be.
Rahul Matthan is a partner at Trilegal. Ex Machina is a column on technology, law and everything in between. His Twitter handle is @matthan