This AI pioneer thinks today’s AI is dumber than a cat
Summary
Yann LeCun, an NYU professor and senior researcher at Meta Platforms, says warnings about the technology’s existential peril are ‘complete B.S.’

Yann LeCun helped give birth to today’s artificial-intelligence boom. But he thinks many experts are exaggerating its power and peril, and he wants people to know it.
While a chorus of prominent technologists tell us that we are close to having computers that surpass human intelligence—and may even supplant it—LeCun has aggressively carved out a place as the AI boom’s best-credentialed skeptic.
On social media, in speeches and at debates, the college professor and Meta Platforms AI guru has sparred with the boosters and Cassandras who talk up generative AI’s superhuman potential, from Elon Musk to two of LeCun’s fellow pioneers, who share with him the unofficial title of “godfather” of the field. They include Geoffrey Hinton, a friend of nearly 40 years who on Tuesday was awarded a Nobel Prize in physics, and who has warned repeatedly about AI’s existential threats.
LeCun thinks that today’s AI models, while useful, are far from rivaling the intelligence of our pets, let alone us. When I ask whether we should be afraid that AIs will soon grow so powerful that they pose a hazard to us, he quips: “You’re going to have to pardon my French, but that’s complete B.S.”
In person, LeCun has a disarming charm: mischievous, quick-witted, and ready to deliver what he sees as the hard truths of his field. At age 64, he looks simultaneously chic and a bit rumpled in a way that befits a former Parisian who is now a professor at New York University. His glasses are classic black Ray-Ban frames, almost identical to one of Meta’s AI-powered models. (LeCun’s own AI-powered Ray-Bans stopped working after a dunk in the ocean when he was out sailing, one of his passions.)
Sitting in a conference room inside one of Meta’s satellite offices in New York City, he exudes warmth and genial self-possession, and delivers his barbed opinions with the kind of grin that makes you feel as if you are in on the joke.
His body of work and his perch atop one of the most accomplished AI research labs at one of the biggest tech companies give weight to LeCun’s critiques.
Born and raised just north of Paris, he became intrigued by AI in part because of HAL 9000, the rogue AI in Stanley Kubrick’s 1968 sci-fi classic “2001: A Space Odyssey.” After earning a doctorate from the Sorbonne, he worked at the storied Bell Labs, where everything from transistors to lasers was invented. He joined NYU as a professor of computer science in 2003 and became director of AI research at what was then Facebook a decade later.
In 2019, LeCun won the A.M. Turing Award, the highest prize in computer science, along with Hinton and Yoshua Bengio. The award, which led to the trio being dubbed AI godfathers, honored them for work foundational to neural networks, the multilayered systems that underlie many of today’s most powerful AI systems, from OpenAI’s chatbots to self-driving cars.
Today, LeCun continues to produce papers at NYU along with his Ph.D. students, while at Meta, as chief AI scientist, he oversees one of the best-funded AI research organizations in the world. He meets and chats often over WhatsApp with Chief Executive Mark Zuckerberg, who is positioning Meta as the AI boom’s big disruptive force against other tech heavyweights from Apple to OpenAI.
Debating friends
LeCun jousts with rivals and friends alike. He got into a nasty argument with Musk on X this spring over the nature of scientific research, after the billionaire posted in promotion of his own artificial-intelligence firm.
LeCun also has publicly disagreed with Hinton and Bengio over their repeated warnings that AI is a danger to humanity.
Bengio says he agrees with LeCun on many topics, but they diverge over whether companies can be trusted to make sure that future superhuman AIs won’t be used maliciously by humans or develop malicious intent of their own.
“I hope he is right, but I don’t think we should leave it to the competition between companies and the profit motive alone to protect the public and democracy,” says Bengio. “That is why I think we need governments involved.”
LeCun thinks AI is a powerful tool. Throughout our interview, he cites many examples of how AI has become enormously important at Meta, and has driven its scale and revenue to the point that it’s now valued at around $1.5 trillion. AI is integral to everything at Meta from real-time translation to content moderation. In addition to its Fundamental AI Research team, known as FAIR, the company has a product-focused AI group called GenAI that is pursuing ever-better versions of its large language models.
“The impact on Meta has been really enormous,” he says.
At the same time, he is convinced that today’s AIs aren’t, in any meaningful sense, intelligent—and that many others in the field, especially at AI startups, are ready to extrapolate its recent development in ways that he finds ridiculous.
If LeCun’s views are right, it spells trouble for some of today’s hottest startups, not to mention the tech giants pouring tens of billions of dollars into AI. Many of them are banking on the idea that today’s large language model-based AIs, like those from OpenAI, are on the near-term path to creating so-called “artificial general intelligence,” or AGI, that broadly exceeds human-level intelligence.
OpenAI’s Sam Altman last month said we could have AGI within “a few thousand days.” Musk has said it could happen by 2026.
LeCun says such talk is likely premature. When a departing OpenAI researcher in May talked up the need to learn how to control ultra-intelligent AI, LeCun pounced. “It seems to me that before ‘urgently figuring out how to control AI systems much smarter than us’ we need to have the beginning of a hint of a design for a system smarter than a house cat,” he replied on X.
He likes the cat metaphor. Felines, after all, have a mental model of the physical world, persistent memory, some reasoning ability and a capacity for planning, he says. None of these qualities are present in today’s “frontier” AIs, including those made by Meta itself.
Léon Bottou, who has known LeCun since 1986, says LeCun is “stubborn in a good way”—that is, willing to listen to others’ views, but single-minded in his pursuit of what he believes is the right approach to building artificial intelligence.
Alexander Rives, a former Ph.D. student of LeCun’s who has since founded an AI startup, says his provocations are well thought out. “He has a history of really being able to see gaps in how the field is thinking about a problem, and pointing that out,” Rives says.
AI on your face
LeCun thinks real artificial general intelligence is a worthy goal—one that Meta, too, is working on.
“In the future, when people will talk to their AI system, to their smart glasses or whatever else, we need those AI systems to basically have human-level characteristics, and really have common sense, and really behave like a human assistant,” he says.
But creating an AI this capable could easily take decades, he says—and today’s dominant approach won’t get us there.
The generative-AI boom has been powered by large language models and similar systems that train on oceans of data to mimic human expression. As each generation of models has become much more powerful, some experts have concluded that simply pouring more chips and data into developing future AIs will make them ever more capable, ultimately matching or exceeding human intelligence. This is the logic behind much of the massive investment in building ever-greater pools of specialized chips to train AIs.
LeCun thinks that the problem with today’s AI systems is how they are designed, not their scale. No matter how many GPUs tech giants cram into data centers around the world, he says, today’s AIs aren’t going to get us to artificial general intelligence.
His bet is that research on AIs that work in a fundamentally different way will set us on a path to human-level intelligence. These hypothetical future AIs could take many forms, but work being done at FAIR to digest video from the real world is among the projects that currently excite LeCun. The idea is to create models that learn in a way that’s analogous to how a baby animal does, by building a world model from the visual information it takes in.
The large language models, or LLMs, used for ChatGPT and other bots might someday have only a small role in systems with common sense and humanlike abilities, built using an array of other techniques and algorithms.
Today’s models are really just predicting the next word in a text, he says. But they’re so good at this that they fool us. And because of their enormous memory capacity, they can seem to be reasoning, when in fact they’re merely regurgitating information they’ve already been trained on.
“We are used to the idea that people or entities that can express themselves, or manipulate language, are smart—but that’s not true,” says LeCun. “You can manipulate language and not be smart, and that’s basically what LLMs are demonstrating.”
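The core idea LeCun describes—that a language model is, at bottom, predicting the most likely next word—can be illustrated with a toy sketch. This is not Meta’s or OpenAI’s code; real LLMs use deep neural networks trained on vast corpora, while this hypothetical example just counts which word most often follows which in a tiny text:

```python
from collections import Counter, defaultdict

# A minimal "next-word predictor": count, for each word in a toy corpus,
# which words follow it, then predict the most frequent follower.
corpus = "the cat sat on the mat and the cat slept".split()

# Tally each word's successors across the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat": it follows "the" twice, "mat" only once
```

Scaled up by many orders of magnitude, with neural networks standing in for the counting table, this prediction game produces fluent text—which is exactly why, in LeCun’s view, its fluency can be mistaken for reasoning.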
Write to Christopher Mims at christopher.mims@wsj.com