Should we fear artificial intelligence?
Today's AI machines are no match for super-intelligent ones like Skynet and the androids and cyborgs of sci-fi movies, but that may not be the case for long
If you are a fan of sci-fi movies like I, Robot, The Terminator or Universal Soldier, you may have asked yourself: “Will machines outsmart humans?” With rapid advancements in artificial intelligence (AI), the short answer is, “Yes”.
So will these intelligent machines rule over humans? The answer to that question is less straightforward, because much depends on how you approach the issue.
The late John McCarthy, an American computer and cognitive scientist who coined the term AI in 1955, defined it as “the science and engineering of making intelligent machines”. AI takes an interdisciplinary approach: besides computer science, it draws on mathematics, psychology, linguistics, philosophy and neuroscience.
Currently, real AI systems are no match for the super-intelligent machines like Skynet, androids and cyborgs that we see in sci-fi movies. However, that may not be the case for long.
Some of the most exciting advances in AI have come from convolutional neural networks: large virtual networks of simple information-processing units, loosely modelled on the anatomy of the human brain.
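To make the idea concrete, here is a minimal, purely illustrative Python sketch (not related to MIT's actual chip design) of the “simple information-processing units” such networks are built from, and of how a convolutional layer reuses one small set of weights across an input:

```python
def neuron(inputs, weights, bias=0.0):
    """One unit: a weighted sum of its inputs followed by a ReLU
    non-linearity, which passes positive signals and blocks negatives."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, total)

def conv1d(signal, kernel, bias=0.0):
    """A convolutional layer slides the same small weight vector (the
    kernel) across every position of the input. This weight sharing is
    what makes convolutional networks efficient on images and signals."""
    k = len(kernel)
    return [neuron(signal[i:i + k], kernel, bias)
            for i in range(len(signal) - k + 1)]

# An edge-detecting kernel responds only where the signal jumps upward.
print(conv1d([0, 0, 1, 1, 0], [-1.0, 1.0]))  # -> [0.0, 1.0, 0.0, 0.0]
```

Real networks stack millions of such units in layers and learn the weights from data; the sketch above only shows the basic computation each unit performs.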
According to a 3 February press release, researchers from the Massachusetts Institute of Technology (MIT) presented a new chip at the International Solid-State Circuits Conference in San Francisco, US. Designed specifically to implement neural networks, the chip is 10 times as efficient as a mobile GPU (graphics processing unit), enabling devices to run powerful AI algorithms locally rather than uploading data for remote processing. The researchers used the chip, dubbed “Eyeriss”, to implement a neural network that performs an image-recognition task, but they believe it could also help usher in the Internet of Things, where smart sensors allow machines to talk to each other.
AI experts also predict that intelligent and semi-intelligent autonomous systems such as self-driving cars and autonomous drones “will march into our society” in the next two to three years, according to a 6 February briefing at the 2016 American Association for the Advancement of Science Annual Meeting.
With more than a billion dollars spent last year on AI research (as opposed to the “AI Winter” period of reduced funding), the experts also forecast that AI advances may threaten jobs and uncover a range of legal, regulatory and ethical issues.
While Moshe Vardi, professor of computer science and director of the Ken Kennedy Institute for Information Technology at Rice University, US, expects the growing presence of intelligent machines in workforces to contribute to a phenomenon called “job polarization”, Wendell Wallach, ethicist and chair of Yale University’s Interdisciplinary Center for Bioethics and the Hastings Center, advocates “concerted action to keep technology a good servant and not let it become a dangerous master”.
Technology luminaries such as Bill Gates and Elon Musk, and even physicist Stephen Hawking, have expressed fears that robots with AI could rule mankind.
But there are those who believe that AI machines can be controlled. Marvin Lee Minsky, who died in January, was an American cognitive scientist and co-founder of MIT’s AI laboratory. A champion of AI, he believed that some computers would eventually become more intelligent than most human beings, but hoped that researchers would make such computers benevolent to mankind.
Raymond “Ray” Kurzweil, an American author, computer scientist, inventor and futurist, has sought to allay such fears by pointing out that we can deploy strategies to keep emerging technologies like AI safe, and underscoring the existence of ethical guidelines like Isaac Asimov’s three laws for robots, which can prevent—at least to some extent—smart machines from overpowering us.
In an interview with Mint this month, Brad Templeton, the networks and computing chair at the Singularity University in Silicon Valley, expressed comfort with the idea that humans would, at some point, build machines that surpass them, likening the situation to children growing up to be more intelligent than their parents. He said his best hope is that these “children of the mind” would continue to “love us the way children love their parents”.
Given that there is no stopping companies and governments from creating and deploying AI machines, we can only hope that we humans devise equally smart policies to govern the behaviour of smart machines.
Cutting Edge is a monthly column that explores the melding of science and technology.