
The rise of Artificial Intelligence and impending takeover

AI-driven machines are predicted to be better than us at driving a truck by 2027, writing a best-seller by 2049 and performing surgery by 2053

Illustration: Jayachandran/Mint

“It was seven minutes to ten o’clock in the morning, and it was the only good thing that had happened.” [09:53:46] “A patch of green grass seemed to be seeking its face, but it was not much to see...” [10:36:11]

If you get the feeling that these sentences could have been better structured, it’s because these seemingly disparate literary threads were stitched into a novel by an algorithm. That’s also why the human author of this novel, Ross Goodwin, calls himself a ‘writer of writers’. He is an artist and creative technologist at Google, and a former Obama administration ghostwriter.

In March 2017, Goodwin fitted a Cadillac with a surveillance camera, a global positioning system (GPS) unit, a microphone and a clock, and connected these devices to a portable artificial intelligence (AI) writing machine that fed on their data in real time. As Goodwin travelled from New York to New Orleans with these connected contraptions, the machine’s printer published long scrolls of receipt paper that filled the car’s rear seats over the course of the journey, producing a manuscript line by line.

The novel, aptly named 1 the Road, was published this year and is priced at €24. However, if you’re distraught that even an AI algorithm can write a novel, take comfort in the fact that it is nowhere close to resembling the work of a good human author. Not yet.

That said, AI is undoubtedly becoming smarter, thanks to rapid advancements in machine learning (ML) and deep learning algorithms, the humongous amounts of data (so-called Big Data) on which these algorithms can be trained, and a phenomenal increase in computing power.

According to a recent Oxford and Yale University survey of over 350 AI researchers, machines are predicted to be better than us at translating languages by 2024, writing high-school essays by 2026, driving a truck by 2027, working in retail by 2031, writing a best-selling book by 2049 and performing surgery by 2053.

If you think these predictions are far-fetched, consider that sports writing was automated as far back as 2009 by Stats Monkey, a software program developed by students and researchers at Northwestern University’s Intelligent Information Laboratory, whose technology was later commercialized as Narrative Science. By 2015, Associated Press’ AI system was writing over 4,000 quarterly earnings stories, which has reportedly freed up reporters to write more in-depth stories on business trends.

Besides, artworks created by AI algorithms are beginning to come to market. On 23 October, for instance, the world’s largest auction house, Christie’s, will put on sale at its London branch a print on canvas, the product of an algorithm developed by the French art collective Obvious.

The work was created using a model called a Generative Adversarial Network, or GAN, which generates data, primarily images, from scratch. The artists first fed the GAN a data set of 15,000 portraits painted between the 14th and 20th centuries, after which the algorithm created new works based on the training set until it was able to fool a test designed to distinguish human-made images from machine-made ones. The resulting work, titled Portrait of Edmond de Belamy, depicts a man in a dark coat and white collar whose indecipherable facial features place him somewhere in the uncanny valley. The unique piece, a gold-framed canvas print currently on view in Christie’s London showroom, is estimated to fetch between $7,000 and $10,000.
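
For the technically curious, here is a minimal sketch of the adversarial loop described above, written in Python with PyTorch. The network sizes, the stand-in data and the training settings are illustrative assumptions, not details of Obvious’s actual model.

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

# Generator: turns random noise into a fake "image" vector.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())

# Discriminator: scores how "real" an image looks (1 = real, 0 = fake).
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

real_images = torch.rand(128, img_dim)  # stand-in for the portrait data set

for step in range(1000):
    # 1. Teach D to tell the real images from G's fakes.
    fake = G(torch.randn(128, latent_dim)).detach()
    d_loss = (loss_fn(D(real_images), torch.ones(128, 1)) +
              loss_fn(D(fake), torch.zeros(128, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Teach G to fool D, i.e. to make D label its fakes as real.
    g_loss = loss_fn(D(G(torch.randn(128, latent_dim))), torch.ones(128, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The two networks improve in lockstep: as the discriminator gets better at spotting fakes, the generator is pushed to produce ever more convincing ones, which is exactly the fool-the-test dynamic the artists exploited.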

India, too, held its first AI art exhibition, at Nature Morte in New Delhi, on 17 August. Titled Gradient Descent, the show exhibited the works of global artists who are melding their skills with AI to birth new artworks.

Moreover, in the gaming arena, just over two decades ago, a supercomputer, IBM’s Deep Blue, defeated the then world chess champion, Garry Kasparov. Two-and-a-half years ago, DeepMind’s computer program, AlphaGo, beat Go champion Lee Sedol. Late last year, Alphabet Inc.-owned AI firm DeepMind unveiled AlphaZero, modelled on the company’s AlphaGo Zero program, which not only taught itself the Chinese game Go without human examples but also defeated AlphaGo Zero, until then the world’s strongest Go player. The AlphaZero algorithm used reinforcement learning, a training method in which the system learns through rewards and penalties, to become “its own teacher”.
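
A toy example of that reward-and-penalty loop, in Python: tabular Q-learning teaching an agent to walk down a five-cell corridor. AlphaZero’s actual method, self-play with deep networks and tree search, is vastly more sophisticated; this sketch only illustrates the underlying idea of learning from rewards.

```python
import random

n_states, n_actions = 5, 2          # actions: 0 = move left, 1 = move right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    s = 0                                     # start at the left end
    while s != n_states - 1:                  # goal: reach the right end
        # Mostly pick the best-known action, occasionally explore.
        if random.random() < epsilon:
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda i: Q[s][i])
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == n_states - 1 else -0.01   # reward or small penalty
        # Nudge the action's value toward reward + discounted future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print(Q)  # after training, "move right" scores higher in every state
```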

Frankenstein algorithms

If research and advisory firm Gartner Inc. is right in its forecast, AI technologies will become pervasive in almost every new software product and service by 2020. The growth in AI is also being driven by advances in ML as well as deep learning. ML, a subset of AI, is broadly about teaching a computer to spot patterns in mountains of data and make connections, without being explicitly programmed for each task, in order to accomplish specific tasks. A recommendation engine is a good example, as the sketch below illustrates.
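
Here is a minimal, assumed recommendation engine in Python: item-based collaborative filtering over a tiny, made-up ratings matrix. Real systems use far larger data and more refined models; every number below is an assumption for demonstration.

```python
import numpy as np

# Rows are users, columns are items; 0 means "not yet rated".
ratings = np.array([[5, 3, 0, 1],
                    [4, 0, 0, 1],
                    [1, 1, 0, 5],
                    [0, 1, 5, 4]], dtype=float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

# Pattern-spotting step: how similar is each item to every other item,
# judged purely from how users have rated them?
n_items = ratings.shape[1]
sim = np.array([[cosine(ratings[:, i], ratings[:, j])
                 for j in range(n_items)] for i in range(n_items)])

def recommend(user, k=1):
    scores = sim @ ratings[user]          # weight items by similarity
    scores[ratings[user] > 0] = -np.inf   # ignore items already rated
    return np.argsort(scores)[::-1][:k]

print(recommend(user=1))  # suggest an unseen item for the second user
```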

Deep learning, an advanced ML technique, uses layered (hence “deep”) neural networks (neural nets) that are loosely modelled on the human brain. Neural nets enable image recognition, speech recognition, self-driving cars and smart home automation devices, among other things.

A neural net comprises thousands or even millions of simple processing nodes that are densely interconnected. An individual node might be connected to several nodes in the layer beneath it, from which it receives data, and several nodes in the layer above it, to which it sends data.

Broadly, neural nets work thus: neurons receive inputs layer by layer. The neurons in the first layer, for instance, each perform a calculation and send the result to the neurons in the next layer, and so on, until the network produces an overall output. There is also a process known as back-propagation, which tweaks the calculations of individual neurons so that the network learns to produce a desired output. Researchers, though, continue to be perturbed by the fact that neural nets are ‘black boxes’: once they have been trained on their data sets, even their designers rarely have any idea how the results are generated. This has also given rise to the name ‘franken-algorithms’ (after Mary Shelley’s Frankenstein).
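
The forward pass and back-propagation described above fit in a few lines of Python. This bare-bones network, with sizes and a task chosen purely for illustration, learns the classic XOR function:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # first layer of "neurons"
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # second (output) layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(5000):
    # Forward pass: each layer computes on the previous layer's output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Back-propagation: push the error backwards and tweak each weight
    # in the direction that shrinks the output error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # converges towards [0, 1, 1, 0]
```

Even here, the trained weights W1 and W2 are just grids of numbers; reading an explanation out of them is the ‘black box’ problem in miniature.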

For instance, in March 2016, Microsoft Corp.’s Tay AI chatbot generated racist tweets, forcing the company to apologize and pull the bot down. In June 2017, two AI chatbots developed by researchers at Facebook Artificial Intelligence Research (FAIR) to negotiate with humans began talking to each other in a language of their own, since the rules of the English language did not suit the bots.

Consequently, Facebook shut down the program, prompting some media reports to conclude that this was how sinister AI would look once it becomes super-intelligent. Facebook, however, clarified that the program was aborted because it no longer served the purpose the FAIR researchers had set out to achieve: having the AI bots talk to humans.

On 18 June, an AI system engaged in the first-ever live, public debate with humans. At an event held at IBM’s Watson West site in San Francisco, a champion debater and IBM’s AI system, Project Debater, began by preparing arguments for and against the statement: “We should subsidize space exploration.” Both sides then delivered a four-minute opening statement, a four-minute rebuttal and a two-minute summary. IBM’s Project Debater aims to help “people make evidence-based decisions when the answers aren’t black-and-white”.

Life and death by AI

In the world of driverless cars, where multiple systems interact and conditions change over time, an AI algorithm will have a lot of explaining to do if a vehicle takes a wrong turn, bumps into or knocks someone down, or, worse still, kills people.

On 18 March, for instance, a driverless car operated by Uber struck and killed a woman on a street in Tempe, Arizona, despite having an emergency backup driver behind the wheel. Following the incident, Uber suspended testing in Tempe as well as in Pittsburgh, San Francisco and Toronto. Waymo, the autonomous car company owned by Google’s parent Alphabet, has been testing a fleet of self-driving vehicles without backup drivers on public roads since November 2017. Other companies experimenting with driverless cars include Tesla Inc., Nissan Motor Co. Ltd and General Motors Co. The situation would be similar if a doctor were to tell a patient that s/he has cancer because an algorithm detected it, but could not explain the ‘why’ of it.

Hence, companies and governments are slowly gravitating towards a concept called ‘Explainable AI’ (XAI), also referred to as transparent AI, which has the backing of the US-based Defense Advanced Research Projects Agency (DARPA).

New machine-learning systems, according to David Gunning, programme manager at the DARPA Information Innovation Office (I2O), “will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future”.

We are already seeing steps in this direction. Indian IT services provider Wipro Ltd, for instance, has its HOLMES AI platform and insists that Explainable AI is the way ahead for its clients. On 19 September, IBM said it was introducing a “...fully automated software service which explains decision-making and detects bias in AI models at runtime—as decisions are being made—capturing potentially unfair outcomes as they occur”. “Importantly, it also automatically recommends data to add to the model to help mitigate any bias it has detected,” the company said in a press release.

IBM’s insistence on Explainable AI stems from the fact that it uses Watson for oncology, and patients and doctors do not like it when they do not know how an AI algorithm has arrived at a specific decision. A good Explainable AI model should be able to address bias in the training data or in the model itself, or at least detect a human-induced bias. It should also give humans the right to appeal against a particular decision (assuming that the algorithm has done the explaining). Last, the training model should be improved once a bias is detected. A simple example of such a bias check appears below.
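
Here is one assumed illustration of the kind of bias check such tools automate: the ‘disparate impact’ ratio of favourable outcomes between two groups. The data and the 0.8 rule-of-thumb threshold are illustrative, not IBM’s actual method.

```python
import numpy as np

# Model decisions (1 = favourable outcome) and a protected attribute.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group     = np.array(['A', 'A', 'A', 'B', 'B', 'B', 'B', 'A', 'A', 'B'])

rate_a = decisions[group == 'A'].mean()   # favourable rate for group A
rate_b = decisions[group == 'B'].mean()   # favourable rate for group B
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

# A common rule of thumb flags ratios below 0.8 as potential bias.
verdict = "possible bias" if ratio < 0.8 else "within threshold"
print(f"favourable-outcome ratio: {ratio:.2f} -> {verdict}")
```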

Of course, even if self-learning algorithms are forced to explain their conclusions, they can justify their biases the way humans do. This is why a 2016 Massachusetts Institute of Technology paper on Rationalizing Neural Predictions suggests that AIs with “sufficiently advanced mental states” should have a “moral status, and some may count as persons”, though perhaps governed by different rules.

In his 2005 book The Singularity is Near, American author and futurist Ray Kurzweil predicted, among many other things, that AI will soon surpass humans. He forecast that by 2099 machines would have attained legal status equal to humans, harking back to movies like Bicentennial Man, starring the late Robin Williams, in which a humanoid robot is eventually granted the status of a human by the courts. AI experts will tell us that no such thing is likely to happen in the near future. Nevertheless, given the rapid advancements in the technology, it is better to be safe than sorry.

*****

The pursuit of superintelligent machines

A file photo of a self-driving car after a high-impact crash in Arizona, US. Photo: Bloomberg

Most of the artificial intelligence (AI) we see around us caters to narrow, specific areas, and hence is categorized as “weak AI”. Examples include most of the AI chatbots, AI personal assistants and smart home assistants we see, including Apple’s Siri, Microsoft’s Cortana, Google’s Allo and Amazon’s Alexa.

Driverless cars and trucks, however impressive they may sound, are still higher manifestations of “weak AI”. In other words, weak AI lacks human consciousness. Moreover, though we talk about the use of artificial neural networks (ANNs) in deep learning, a subset of machine learning, ANNs do not behave like the human brain; they are only loosely modelled on it.

Why, then, do we fear AI? Why fear a Skynet-like, neural net-based conscious group mind and artificial general intelligence system that is still fiction and features only in the Terminator franchise? Part of the reason is that most of us confuse “weak AI” with “strong AI”. Machines with “strong AI” will have a brain as powerful as the human brain. Such machines will be able to teach themselves, learn from others, perceive, emote; in other words, do everything that human beings can do, and more. It’s the “more” aspect that we fear most.

Strong AI, also called true intelligence or artificial general intelligence (AGI), is still quite a way off. Some use the term Artificial Superintelligence (ASI) for a system whose capabilities would go beyond even those of an AGI, surpassing human intelligence altogether.

One can, however, take hope from the words of the late Marvin Lee Minsky, cognitive scientist and co-founder of MIT’s AI laboratory, who once said, “When intelligent machines are constructed, we should not be surprised to find them as confused and as stubborn as men in their convictions about mind-matter, consciousness, free will, and the like.”

*****


ABOUT THE AUTHOR
Leslie D'Monte
Leslie D'Monte specialises in technology and science writing. He is passionate about digital transformation and deeptech topics including artificial intelligence (AI), big data analytics, the Internet of Things (IoT), blockchain, crypto, metaverses, quantum computing, genetics, fintech, electric vehicles, solar power and autonomous vehicles. Leslie is a Massachusetts Institute of Technology (MIT) Knight Science Journalism Fellow (2010-11), author of 'AI Rising: India's Artificial Intelligence Growth Story', co-host of the 'AI Rising' podcast, and runs the 'Tech Talk' newsletter. In his other avatar, he curates tech events and moderates panels.
Published: 23 Oct 2018, 08:39 AM IST