
Could artificial intelligence lose its mind?

The potential for the machine to develop a mind of its own that is in profound disagreement with its human creator is one of the dangers of AI

It can turn mundane images into hallucinatory worlds, and has spawned sites where you can have photos processed with the software, and even a mobile app. Photo: Bloomberg

The DeepDream algorithm Google made public this month is a strange offshoot of image recognition technology based on artificial intelligence. It can turn mundane images into hallucinatory worlds, and has spawned websites that process photos with the software, and even a mobile app. Beyond the pretty pictures, however, DeepDream hints at the kind of personality that artificial intelligence could develop quite by accident. Google engineers Alexander Mordvintsev, Christopher Olah and Mike Tyka first wrote about DeepDream in June. They explained how their software recognizes and tags images, and how it is able to distinguish between different images:

"We train an artificial neural network by showing it millions of training examples and gradually adjusting the network parameters until it gives the classifications we want. The network typically consists of 10-30 stacked layers of artificial neurons. Each image is fed into the input layer, which then talks to the next layer, until eventually the 'output' layer is reached. The network's 'answer' comes from this final output layer."
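The stacked-layer idea the engineers describe can be sketched in a few lines of numpy. This is a toy illustration, not Google's code: the layer sizes, the ReLU activation and the softmax output are all assumptions chosen for brevity. An "image" (here just a vector of pixel values) enters the input layer, each layer passes its result to the next, and the final output layer yields class probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Standard activation: negative signals are zeroed out.
    return np.maximum(0.0, x)

def softmax(x):
    # Turn the output layer's raw scores into probabilities.
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical sizes: a 64-pixel "image", three hidden layers, 5 classes.
layer_sizes = [64, 32, 16, 8, 5]
weights = [rng.normal(0, 0.1, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def classify(image):
    activation = image
    for w in weights[:-1]:                  # each layer "talks to" the next
        activation = relu(activation @ w)
    return softmax(activation @ weights[-1])  # the final output layer answers

probs = classify(rng.random(64))
print(probs)  # five class probabilities, summing to 1
```

Training, which the quote alludes to, would mean repeatedly adjusting the entries of `weights` until `classify` gives the labels we want on millions of examples.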

DeepDream's engineers turned the process inside out, "showing" lots of images of, say, a screw or a banana to the neural network and getting it to generate its own images of the objects. DeepDream was trained with animal images, and even in pictures that contained no animals—clouds, a street scene, a face—it "recognized" dogs, birds and deer.
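"Turning the process inside out" means running optimization on the input image rather than on the network's weights: the pixels are nudged by gradient ascent so that some chosen unit fires more strongly, and whatever pattern that unit responds to gets painted into the picture. The sketch below is a deliberately tiny stand-in, assuming a single linear unit instead of a deep network, so the gradient with respect to the image is just the unit's weight vector.

```python
import numpy as np

rng = np.random.default_rng(1)

w = rng.normal(size=64)   # weights of the unit we want to excite
image = rng.random(64)    # start from a mundane "image" of 64 pixels

def activation(x):
    # How strongly the unit responds to this image.
    return float(w @ x)

before = activation(image)
for _ in range(100):
    # Gradient ascent on the pixels: d(activation)/d(image) = w,
    # so each step pushes the image toward the unit's preferred pattern.
    image += 0.1 * w
after = activation(image)

print(after > before)  # the unit now "sees" its pattern far more strongly
```

In the real DeepDream, the gradient comes from backpropagating through many convolutional layers, which is why dog faces and birds emerge from clouds rather than a simple weight pattern.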

All kinds of animals emerged that weren't originally there. The artificial neural network appeared to have come down with an acute case of pareidolia—a condition that causes sufferers to perceive nonexistent patterns in images and sounds (such as faces in clouds or religious images in random objects). We all do this to some degree; otherwise Rorschach tests would be useless. But recent research by Norimichi Kitagawa of the NTT Communication Science Laboratories in Tokyo shows that some people may be more susceptible than others.

In extreme cases, pareidolia can be a symptom of psychosis. Although the images DeepDream produces are visually stunning, they are not the product of a "normal" consciousness by human standards. The Google researchers wrote that "neural networks could become a tool for artists—a new way to remix visual concepts—or perhaps even shed a little light on the roots of the creative process in general." Sure, but it's easy to imagine how such a network could, in human terms, go off its rocker.

Programmers, of course, can calibrate the network to look for specific patterns and ignore others. But at some point, networks will be more complex than those available today, and I doubt it will be possible to control for all eventualities.

We already have a sense of how unpleasant these accidents can be. In May, Yahoo-owned photo hosting site Flickr and Google Photos introduced artificial intelligence-based autotagging. Flickr's system marked concentration camp photos as "sport" and "jungle gym", and Google's made offensive mistakes, too. Both companies subsequently fiddled with their algorithms.

It's seemingly much easier to make an artificial brain revise "bad thinking" than to get a human being to abandon faulty ideas or prejudices. But machines can process enormous amounts of material at speeds inconceivable to a human brain; that means the machines could also develop unfortunate personality traits, convictions and ways of looking at the world faster than they could be corrected. At this point, those worries belong to the realm of science fiction. DeepDream only does one highly specific task.

Still, it points to one of the dangers of artificial intelligence: the potential for the machine to develop a mind of its own that is in profound disagreement with its human creator. BLOOMBERG

Leonid Bershidsky is a Bloomberg View columnist.

Comments are welcome at otherviews@livemint.com


Published: 28 Jul 2015, 12:31 AM IST