Technology executives who appear before Senate committees tend to be verbally whipped to the point of abject humiliation. Not Sam Altman. On Tuesday, senators initially went after the OpenAI CEO with questions on the terrors of artificial intelligence, from manipulating citizens to invading their privacy. To their surprise, Altman agreed with everything they said, and more. “Yes, we should be concerned about that,” he said gravely, when Senator Josh Hawley asked about how AI models could “supercharge the war for attention” online. After Senator Dick Durbin complained that Congress had been too slow to act on social media, and didn’t want to make the same mistake with AI, Altman said, “I’d love to collaborate with you.” Durbin bristled again at those past experiences with Mark Zuckerberg. “The response from social media was, ‘Get out of the way’,” he grumbled. “I’m not happy with online platforms.” “Me either,” Altman replied.
Altman’s first testimony before Congress was a master class in wooing policymakers. The 38-year-old was articulate, avoided technical jargon and agreed with every pressing concern lawmakers raised. At one point, he declined an offer to be America’s top AI regulator. “I love my current job,” he said. Once you get past how strange it is for senators to praise someone who says their own technology could cause tremendous harm, you come to the question of what to do about all that potential damage, and lawmakers asked Altman for advice on that too. He said the US should set up an agency, perhaps a global one, that would grant licenses for powerful AI systems. Gary Marcus, a computer scientist who testified alongside Altman, said that agency could work something like the Food and Drug Administration. “I thought they were incredibly open to the idea,” he told me a few hours after the hearing.
I am sceptical about quick action. Firstly, the US has a dismal record of passing laws to create new tech regulators of the kind Altman and Marcus suggest. Despite all the bipartisan agreement over social media harms, and the multiple bills over the years that have proposed new regulators to protect citizens’ data, there is still no federal privacy law doing any of that. Secondly, Altman’s actions speak louder than his words. He co-founded OpenAI in 2015 as a non-profit with a mission to advance AI for the benefit of humanity, unconstrained by financial obligations. Six years later, it had morphed into a for-profit company in a close partnership with Microsoft. That pivot suggests Altman may be willing to do whatever it takes to achieve his endgame of super-intelligent computers that surpass human capabilities, which could also mean withdrawing his support for regulations that get in the way of that goal.
For all the self-flagellation senators engaged in over their failures with social media, the US still suffers from chronic inertia in regulating tech. Altman, in other words, could tell lawmakers everything they wanted to hear because history shows they probably won’t pass any serious reform anyway. If Congress is serious about tackling some of the ‘nightmare scenarios’ of a world where AI runs rampant, lawmakers should look at policy ideas that were put on the table years ago. The EU, for example, proposed its 107-page AI Act in 2021, setting out rules for firms on things like disclosure of copyrighted material and auditing algorithms that could infringe people’s human rights. The European Parliament is due to vote on it in mid-June.
US senators asked Altman about nutrition labels and scorecards for AI. Altman called that “a great idea”, although he didn’t have a framework to propose. But Margaret Mitchell and other researchers have been advocating model cards and evaluation processes for AI systems for years. “There is stuff around evaluation that I imagine is not going to be entirely supported by Sam Altman,” Mitchell says. Real regulation would mean scrutinizing the data used to train AI models. That way, regulators could check that a tool like ChatGPT performs equally well across demographics such as gender and race. It would also force OpenAI to disclose its training data, something it has refused to do so far.
Mitchell’s work on evaluating AI systems wasn’t cited on Tuesday. And the EU’s AI Act got only a passing mention from US senators, not as a framework to emulate but as a system to beat: “Europe is ahead of us,” Senator Richard Blumenthal said. “We need to be in the lead.”
Such hubris helps explain why the US lags the EU on technology regulation. When lawmakers ignore the groundwork laid elsewhere, preferring to grandstand about pioneering new policies in alliance with powerful technologists like OpenAI’s CEO, they generate plenty of hype about what they will supposedly accomplish. But they risk achieving nothing.
Parmy Olson is a Bloomberg Opinion columnist covering technology.