Artificial Intelligence or Intelligence Augmentation. What’s in a name?
Even as we try to wrap our heads around the idea of Artificial Intelligence, or AI, and understand its impact on our lives, our businesses and jobs, some experts suggest we may be barking up the wrong tree. The answers to our questions, they believe, may lie in a concept called Intelligence Augmentation, or IA.
One of these experts, Murali Doraiswamy, a professor at Duke University, US, wrote in an opinion piece for the World Economic Forum in January that IA uses machine-learning technologies that are similar to AI, but instead of replacing humans, IA seeks to assist them.
This characteristic, insists Prof. Doraiswamy, may ensure that IA will make more “progress and headlines” than AI. He adds that combining machine learning with the existing power of the human brain can help us get the best of both worlds.
He has a point. On 27 June 2016, the White House Office of Science and Technology Policy requested information on how to utilize AI for the public good. While AI technologies offer “great promise for creating new and innovative products, growing the economy, and advancing national priorities in areas such as education, mental and physical health, addressing climate change, and more...”, the White House said, they simultaneously carry “risks and present complex policy challenges”.
International Business Machines Corp. (IBM), in its response, argued that it was guided by the term “augmented intelligence” rather than “artificial intelligence”.
IBM calls this approach “cognitive computing”, defining it as a comprehensive set of capabilities based on technologies such as machine learning, reasoning and decision technologies; language, speech and vision technologies; human-interface technologies; distributed and high-performance computing; and new computing architectures and devices. IBM believes that, when purposefully integrated, these capabilities can solve a wide range of practical problems, boost productivity and foster new discoveries across many industries.
Ginni Rometty, IBM chairman, president and chief executive officer, insists that cognitive computing is “much more” than AI. This is not a distinction that most of us would notice or even care about. IBM understands AI very well, having developed Watson, the supercomputing system that beat champions of the TV quiz show Jeopardy! in 2011.
Rometty insists that while machine learning is good for deciphering patterns, cognitive computing is more comprehensive because it can “reason” over all structured and unstructured data and deal with “grey areas” to help make judgements and decisions.
Consider the example of a data-driven, machine-learning algorithm that can sift through a patient’s medical data and predict an illness. Deep learning, which uses Artificial Neural Networks (ANNs) loosely inspired by the human brain, can be used to map those inputs to predictions. Under a cognitive computing approach, doctors would also inspect all the medical records, the previous tests that a patient has undergone, images and data from wearables, and “reason” over that data to reach conclusions. In other words, cognitive computing keeps a human in the loop, which makes it “augmented intelligence” rather than “artificial intelligence”.
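The input-to-prediction mapping described above can be sketched in miniature. Everything in this example is invented for illustration: the feature names, weights and bias are hypothetical, and a real diagnostic model would learn the weights of many such units, stacked in layers, from thousands of patient records rather than having them set by hand.

```python
import math

def sigmoid(x):
    """Squash any real number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def risk_score(features, weights, bias):
    # A single artificial "neuron": a weighted sum of inputs followed by
    # a non-linearity. This is the basic building block of an ANN.
    z = sum(features[name] * weights[name] for name in weights) + bias
    return sigmoid(z)

# Hypothetical, pre-normalised patient features (not real clinical data).
patient = {"age": 0.7, "blood_pressure": 0.9, "resting_heart_rate": 0.4}

# Hand-picked illustrative weights; a real model would learn these.
weights = {"age": 1.2, "blood_pressure": 2.0, "resting_heart_rate": 0.8}

score = risk_score(patient, weights, bias=-1.5)
print(f"Predicted illness risk: {score:.3f}")
```

The score is only a pattern-matching output; the “augmented intelligence” argument is that a doctor, not the model, should reason over this number alongside the patient’s full history before reaching a conclusion.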
The question, of course, is whether we are splitting hairs. AI is a branch of computer science that draws on linguistics and robotics, and relies on disciplines such as mathematics, psychology, philosophy and neuroscience. Besides, all the algorithms are trained by humans to interpret data sets. Evidently, the human element is “in the loop”, so to speak.
Keeping humans in the loop may help allay fears that AI technologies will lead to job losses and domination by intelligent robots.
By involving humans, researchers hope to be able to establish trust in AI technologies. In its response to the White House note, IBM points out that we need to trust AI first if we want to reap its benefits. That trust will be earned “through experience”—just the same way we have learnt to trust that an ATM will register a deposit, or that a car will stop moving when the brake is pressed.
Trust, IBM suggests, will also require a system of best practices that can guide the safe and ethical management of AI. Such a system will include alignment with social norms and values and will hold algorithms accountable for their behaviour, compliance with existing legislation and policy, and protection of privacy and personal information. IBM says it is in the process of developing this system in collaboration with its partners, university researchers and competitors.
There are two other noteworthy developments in this field.
One is the Brain-Computer Interface (BCI), which connects the brain with an external computing device to augment or repair human cognition. According to Scientific American, Facebook is working on a device, strapped to a user’s head, that would decode the words the user is thinking and allow them to type with a BCI.
The second is the concept of a neural lace—a term coined by the late Scottish writer Iain M. Banks. Essentially a science-fiction concept describing a machine interface woven into the brain, it now needs to be taken seriously, because it is being backed by none other than Elon Musk, founder, chief executive officer and chief technology officer of SpaceX and co-founder, CEO and product architect of Tesla Inc. It was only this March that Musk—who believes AI may overpower us some day—confirmed that he is developing technology that may merge human brains and computers. His company is named, aptly, Neuralink.
In the debate over what is more important—IA or AI—it is important to realize that humans are not really being left behind. As computers become more and more intelligent, humans will evolve in parallel with the help of embedded intelligent chips and BCIs.