The recent OpenAI debacle showed us what humanity must stay wary of | Mint


OpenAI’s old board claimed to be acting on behalf of ‘humanity,’ and was said to be worried about AGI and safety.  (AP Photo/Barbara Ortutay, File)

Summary

  • The problem is usually not the technology but the business model. And it seems capitalist business models have an edge over societal aims in a field full of profound perils. OpenAI began as a guardian of humanity, twisted itself into a pretzel and then saw the profit motive prevail.

Maureen Dowd of The New York Times writes of a Twilight Zone episode (nyti.ms/413XeOe) in which aliens who have recently landed on Earth give world leaders a book to signal their intention of peace and cooperation. Earth’s experts work hard to translate its alien language and are relieved when they decipher its title as ‘To Serve Man.’ They welcome the aliens, but soon realize that the book is actually a cookbook!

The OpenAI board must have been watching this episode when it suddenly and inexplicably decided to dismiss the company’s charismatic founder-CEO Sam Altman in a half-hour of frenzied activity. The reverberations felt around the planet were like an alien landing’s, as the tech world and journalists tried to make sense of this unexpected ouster. The board gave a cryptic reason, that Altman was not “consistently candid in his communications with the board," but clammed up afterwards. Various theories started floating in this vacuum. There were whispers about Altman “lying to the board." Observers speculated that he might have done something grossly inappropriate, but that was promptly denied by the board and OpenAI’s chief scientist. Another outrageous theory was that Sam and team had actually discovered AGI (Artificial General Intelligence), and the board was so shaken by the implications that it dismissed Altman instantly; a new development called Q Star (sounding dangerously like QAnon) was named as that AGI! There were also some dark mumblings of trouble in the Altman household, and even conspiracy theories that Microsoft was behind the action in a plot to gain more control.

In fact, no one was more shocked than Microsoft and its charismatic CEO Satya Nadella. He and his company had gone all-in with Altman and OpenAI to develop the next generation of its software productivity products, called Copilots. The real reason for the abrupt decision was revealed soon, though: internal boardroom intrigue. One of the board’s members, Helen Toner, had co-written a paper that seemed to pan OpenAI for “stoking the flames of AI hype." Altman was miffed, and, besides remonstrating with Toner, allegedly started rallying other board members to remove her. However, four of the board’s members (excluding him and another co-founder, Greg Brockman) joined ranks and decided to remove Altman instead. A surprising member of this quartet was Ilya Sutskever, another co-founder and a friend of Altman’s. Reports suggest that Sutskever, a purist as far as AI research and development go, was worried about the pace and direction of OpenAI’s product launches and had warned against them both privately and publicly.

The rest is well known: Altman was back as CEO in five days, not least due to a masterstroke played by Microsoft to hire him and extend an open offer to the rest of OpenAI’s 700 employees, with 95% of them threatening to accept it. What is more significant is how this will likely play out going forward. The story here is about a much more existential question: how this powerful technology ought to be developed.

OpenAI’s old board claimed to be acting on behalf of ‘humanity,’ and was said to be worried about AGI and safety. Ranged against it were the powerful commercial instincts of value-creation and profits. OpenAI was deliberately structured unconventionally, with its board empowered to act if its members felt that humanity was in danger. But their gambit failed.

In many ways, this is a familiar story, and I have often written about it. The problem is usually not the technology but the business model. Data-monetization as a business model has sunk social networks into the morass they are in, with X’s owner Elon Musk railing against advertisers who seem to be “blackmailing" him and his noble intentions for X. Google succumbed to ‘the innovator’s dilemma,’ as it was afraid that the large language models of AI it had worked on could hurt its lucrative search business model. So will it be for Generative AI and OpenAI. The company twisted itself into a pretzel-like corporate structure, enabling it to raise billions of dollars from Microsoft and attain a valuation placed at $86 billion, while simultaneously claiming to build AGI for the good of humanity and not necessarily for economic gain. These divergent forces could not be sustained forever, and the pretzel model broke down when the strain grew too severe. The outcome was clear: capitalistic business models had won over the quest for pure societal good. The aliens are already among us, it would seem, and humanity should be worried.
