Tech Leaders Say AI Will Change What It Means to Have a Job
LAGUNA BEACH, Calif.— Artificial intelligence will likely lead to seismic changes to the workforce, eliminating many professions and requiring a societal rethink of how people spend their time, prominent tech leaders said Tuesday.
Speaking at The Wall Street Journal’s Tech Live conference on Tuesday, OpenAI CEO Sam Altman said that the changes could hit some people in the economy more seriously than others, even if society as a whole improves. This will likely be a hard sell for the most affected people, he added.
“We are really going to have to do something about this transition,” Altman said. “People need to have agency, the ability to influence. We need to jointly be architects of the future.”
Artificial intelligence is expected to transform the global economy by driving gains in both productivity and growth. But economists and tech entrepreneurs are divided on how quickly this shift could—and should—happen.
Earlier Tuesday, Vinod Khosla, a prominent venture capitalist whose firm was one of OpenAI’s earliest backers, laid out a stark timeline for AI’s transformation of work. Within 10 years AI will be able to “do 80% of 80% of all jobs that we know of today,” said Khosla, a tech investor and entrepreneur for more than 40 years.
He pointed to many types of physicians and accountants as examples of professions that AI could largely supplant because these systems can more easily access a broad array of knowledge. Khosla likened the extent of the workforce changes to the disappearance of agricultural jobs in the U.S. in the 20th century—a transition that took place over generations, not years.
The increased prosperity that AI will bring to societies that adopt it, however, will allow people who don’t want to work to avoid it.
“I believe the need to work in society will disappear in 25 years for those countries that adapt these technologies,” he said. “I do think there’s room for universal basic income assuring a minimum standard and people will be able to work on the things they want to work on.”
Altman said that ensuring a basic income won’t be enough. People will need outlets for creative expression and a chance to “add something back to the trajectory of the species,” he said.
OpenAI ignited the current artificial intelligence fervor in Silicon Valley with its chatbot ChatGPT last November. The surprise success of the product has launched significant debate around the best way for governments and people to prepare for the potentially sweeping changes wrought by AI.
One point of concern: the ability to distinguish between real and AI-generated content. AI-generated images are already used to spread misinformation, infringe on intellectual property or sexualize photos of people. AI tools for detecting those types of images are still under development.
Altman said OpenAI explicitly decided to call its chatbot ChatGPT and not a person’s name so people wouldn’t confuse the tool with a person.
Chris Cox, Meta’s chief product officer, said at the Journal’s Tech Live conference that Meta decided to give chatbots personas in an effort to make them more engaging. Users want to interact with a tool that has personality, not something that feels like a robot, he said.
The company in September announced a bevy of AI chatbots based on celebrities including Naomi Osaka, Snoop Dogg and Tom Brady.
In the chatbot, Meta indicates at the start of a conversation that users are communicating with AI rather than the actual celebrity.
“Having products that experiment with what is possible is great, but having anything that doesn’t make it clear to people what is going on is a problem,” Cox said.
Asked about the challenges users can have determining whether content is real or AI-made, OpenAI Chief Technology Officer Mira Murati said OpenAI is developing technology to help detect the provenance of images. That tool is “almost 99% reliable,” she said, but the company is still testing it and wants to design it in such a way that OpenAI’s users don’t feel monitored.
Altman said he thinks consumers could interact with generative AI on new types of devices in the future, though he added that he doesn’t know what an AI-centric device would look like.
Write to Deepa Seetharaman at deepa.seetharaman@wsj.com and Georgia Wells at georgia.wells@wsj.com