Why Artificial Intelligence isn’t intelligent

Words have power. And—ask any branding or marketing expert—names, in particular, carry weight. (iStock)

Summary

  • Some experts in AI think its name fuels confusion and hype of the sort that led to past ‘AI winters’ of disappointment

A funny thing happens among engineers and researchers who build artificial intelligence once they attain a deep level of expertise in their field. Some of them—especially those who understand what actual, biological intelligences are capable of—conclude that there’s nothing “intelligent” about AI at all.

“In a certain sense I think that artificial intelligence is a bad name for what it is we’re doing here,” says Kevin Scott, chief technology officer of Microsoft. “As soon as you utter the words ‘artificial intelligence’ to an intelligent human being, they start making associations about their own intelligence, about what’s easy and hard for them, and they superimpose those expectations onto these software systems.”

This might seem like a purely academic debate. Whatever we call it, surely what matters most about “AI” is the way it is already transforming what can seem like almost every industry on earth? Not to mention the potential it has to displace millions of workers in trades ranging from white to blue collar, from the back office to trucking?

And yet, across the fields it is disrupting or supposed to disrupt, AI has fallen short of many of the promises made by some of its most vocal advocates—from the disappointment of IBM’s Watson to the forever-moving target date for the arrival of fully self-driving vehicles.

Words have power. And—ask any branding or marketing expert—names, in particular, carry weight. Especially when they describe systems so complicated that, in their particulars at least, they are beyond the comprehension of most people.

Inflated expectations for AI have already led to setbacks for the field. In both the early 1970s and late 1980s, claims similar to the most hyperbolic ones made in the past decade—about how human-level AI will soon arise, for example—were made about systems that would seem primitive by today’s standards. That didn’t stop extremely smart computer scientists from making them, and the disappointing results that followed led to “AI winters” in which funding and support for the field dried up, says Melanie Mitchell, an AI researcher and professor at the Santa Fe Institute with more than a quarter-century of experience in the field.

No one is predicting another AI winter anytime soon. Globally, $37.9 billion has been invested in AI startups in 2021 so far, on pace to roughly double last year’s amount, according to data from PitchBook. And there have also been a number of exits for investors in companies that use and develop AI, with $14.4 billion in deals for companies that either went public or were acquired.

But the muddle that the term AI creates fuels a tech-industry drive to claim that every system involving the least bit of machine learning qualifies as AI, and is therefore potentially revolutionary. Calling these piles of complicated math with narrow and limited utility “intelligent” also contributes to wild claims that our “AI” will soon reach human-level intelligence. These claims can spur big rounds of investment and mislead the public and policy makers who must decide how to prepare national economies for new innovations.

Inside and outside the field, people routinely describe AI using terms we typically apply to minds. That’s probably one reason so many are confused about what the technology can actually do, says Dr. Mitchell.

Claims that AI will soon significantly exceed human abilities in multiple domains—not just in very narrow tasks—have been made by, among others, Facebook Chief Executive Mark Zuckerberg in 2015, Tesla CEO Elon Musk in 2020 and OpenAI CEO Sam Altman in 2021.

OpenAI declined to comment or make Mr. Altman available. Tesla did not respond to a request for comment. Facebook’s vice president of AI, Jerome Pesenti, says that his company believes the field of AI is better served by more scientific and realistic goals, rather than fuzzy concepts like creating human-level or even superhuman artificial intelligence. “But,” he adds, “we are making great strides toward learning more like humans do, and creating more general-purpose models that perform well on tasks beyond those they are specifically trained to do.” Eventually, he believes this could lead to AI that possesses “common sense.”

The tendency of CEOs and researchers alike to say that their system “understands” a given input—whether it’s gigabytes of text, images or audio—or that it can “think” about those inputs, or that it has any intention at all, is an example of what Drew McDermott, a computer scientist at Yale, once called “wishful mnemonics.” That he coined this phrase in 1976 makes it no less applicable to the present day.

“I think AI is somewhat of a misnomer,” says Daron Acemoglu, an economist at the Massachusetts Institute of Technology whose research on AI’s economic impacts requires a precise definition of the term. What we now call AI doesn’t fulfill the early dreams of the field’s founders—either to create a system that can reason as a person does, or to create tools that can augment our abilities. “Instead, it uses massive amounts of data to turn very, very narrow tasks into prediction problems,” he says.

When AI researchers say that their algorithms are good at “narrow” tasks, what they mean is that, with enough data, it’s possible to “train” their algorithms to, say, identify a cat. But unlike a human toddler, these algorithms tend not to be very adaptable. For example, if they haven’t seen cats in unusual circumstances—say, swimming—they might not be able to identify them in that context. And training an algorithm to identify cats generally doesn’t also increase its ability to identify any other kind of animal or object. Identifying dogs means more or less starting from scratch.
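To make that narrowness concrete, here is a minimal sketch in Python of such a single-task classifier. Everything in it is an illustrative assumption: random vectors stand in for image features, and scikit-learn's logistic regression stands in for a real vision model. The point is only that the trained model answers one yes-or-no question, and a dog detector would need its own data and its own training run.

```python
# A minimal sketch of a "narrow" classifier. Synthetic vectors stand in
# for image features; nothing here resembles a production vision system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend these 64-dimensional vectors were extracted from photos.
cat_features = rng.normal(loc=1.0, size=(500, 64))
not_cat_features = rng.normal(loc=-1.0, size=(500, 64))

X = np.vstack([cat_features, not_cat_features])
y = np.array([1] * 500 + [0] * 500)  # 1 = cat, 0 = not cat

cat_model = LogisticRegression().fit(X, y)

# The model answers exactly one question: "cat or not?" It learns
# nothing about dogs; a dog detector starts from scratch with its
# own labeled examples.
print(cat_model.predict(rng.normal(loc=1.0, size=(1, 64))))  # likely [1]
```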

The vast sums of money pouring into companies that use well-established techniques for acquiring and processing large amounts of data shouldn’t be confused with the dawn of an age of “intelligent” machines, says Dr. Mitchell; these systems aren’t capable of doing much more than narrow tasks, over and over again. This doesn’t mean that all of the companies investors are piling into are smoke and mirrors, she adds, just that many of the tasks we assign to machines don’t require that much intelligence, after all.

Mr. Scott describes AI in similarly mundane terms. Whenever computers accomplish things that are hard for humans—like being the best chess or Go player in the world—it’s easy to get the impression that we’ve “solved” intelligence, he says. But all we’ve demonstrated is that, in general, things that are hard for humans are easy for computers, and vice versa.

AI algorithms, he points out, are just math. And one of math’s functions is to simplify the world so our brains can tackle its otherwise dizzying complexity. The software we call AI, he continues, is just another way to arrive at complicated mathematical functions that help us do that.
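A toy example makes that literal. The network below is a hypothetical sketch with made-up weights, not a trained model; it exists only to show that a small neural network is a composed mathematical function built from matrix multiplications and a simple max().

```python
# A tiny neural network written out as the mathematical function it is:
# f(x) = W2 · relu(W1 · x + b1) + b2. The weights are made-up values.
import numpy as np

def relu(z):
    # The only nonlinearity: replace negative entries with zero.
    return np.maximum(0, z)

W1, b1 = np.array([[2.0, -1.0], [0.5, 1.5]]), np.array([0.1, -0.2])
W2, b2 = np.array([[1.0, -0.5]]), np.array([0.05])

def f(x):
    # One hidden layer: nothing but matrix multiplies and a max().
    return W2 @ relu(W1 @ x + b1) + b2

print(f(np.array([1.0, 2.0])))  # a plain number out, no "thinking" involved
```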

Viral Shah is CEO of Julia Computing, a cloud-software company that makes tools for programmers who build AI and related systems. His customers range from universities working on better batteries for electric vehicles to pharmaceutical companies searching for new drugs.

Dr. Shah says he loves to debate how “AI” should be described and what that means for its future abilities, but he doesn’t think it’s worth getting hung up on semantics. “This is the approach we’re taking,” he says. “Let’s not talk about the philosophical questions.”

For consumers, practical applications of AI include everything from recognizing your voice and face to targeting ads and filtering hate speech from social media. For engineers and scientists, the applications are, arguably, even broader—from drug discovery and treating rare diseases to creating new mathematical tools that are broadly useful in much of science and engineering. Anywhere advanced mathematics is applied to the real world, machine learning is having an impact.

“There are realistic applications coming out of the current brand of AI and those are unlikely to disappear,” says Dr. Shah. “They are just part of the scientist’s toolbox: You have test tubes, a computer and your machine learning.”

Once we liberate ourselves from the mental cage of thinking of AI as akin to ourselves, we can recognize that it’s just another pile of math that can transform one kind of input into another—that is, software.

In its earliest days, in the mid-1950s, there was a friendly debate about what to call the field of AI. And while pioneering computer scientist John McCarthy proposed the winning name—artificial intelligence—another founder of the discipline suggested a more prosaic one.

“Herbert Simon said we should call it ‘complex information processing,’ ” says Dr. Mitchell. “What would the world be like if it was called that instead?”
