OpenAI’s turbulent weekend
In its blazingly brief history, OpenAI has become famous for two things: astronomical technological ambition and comical corporate governance. That is not a happy combination.
The board’s abrupt sacking of Sam Altman, OpenAI’s chief executive, on Friday stunned Silicon Valley like few other events in recent decades. The poster boy for the generative artificial intelligence revolution, who had done so much to popularise OpenAI’s ChatGPT chatbot as a breakout consumer service, was ejected by four other members of the oversight board in a video call. Several other leading lights at OpenAI, including its president, Greg Brockman, quickly followed him out the door.
Much about the story — and exactly how it will end — remains mysterious. The board may well have had good reasons to dismiss Altman for being less than “consistently candid” with his fellow directors. Altman’s side hustle supporting the launch of an AI chip business certainly raised obvious concerns about possible conflicts of interest. But the board was itself less than consistently candid in explaining its decision to OpenAI’s employees, investors and Microsoft, which has heavily backed the start-up. More than 500 of OpenAI’s 770 employees on Monday signed an open letter calling for Altman to return and the board to resign, putting the company’s future in question. The turmoil has certainly vaporised OpenAI’s chances of raising fresh money at anything like the $86bn valuation recently touted.
But the affair raises broader issues about how AI firms are governed. If, as its evangelists trumpet, AI is so transformative, its corporate champions and guardians must display exemplary integrity, transparency and competence.
To be sure, OpenAI has always been a strange corporate creation. The research company was founded in 2015 as a not-for-profit outfit dedicated to developing AI safely for the benefit of humanity. But so vast are the costs of developing leading-edge models that it is hard for any non-commercial enterprise to remain in that game for long. So, while preserving a not-for-profit oversight board, OpenAI developed a for-profit business arm, enabling the company to attract outside investment and commercialise its services.
That hybrid structure created tensions between the two “tribes” at OpenAI, as Altman called them. The safety tribe, led by chief scientist and board member Ilya Sutskever, argued that OpenAI must stick to its founding purpose and roll out AI only with great care. The commercial tribe seemed dazzled by the possibilities unleashed by ChatGPT’s success and wanted to accelerate. The safety tribe appeared to have won out over the weekend, but perhaps not for long. The employee backlash could yet bring more twists.
What does this all mean for Microsoft? Its $13bn investment in OpenAI has clearly been jeopardised — though much of that commitment was in the form of computing resources, not yet drawn down. Yet Microsoft seemed on Monday to have triumphed by hiring Altman and several top OpenAI researchers. As Ben Thompson, author of the Stratechery newsletter, noted, Microsoft may in effect have “just acquired OpenAI for $0 and zero risk of an antitrust lawsuit”.
Baffled outsiders must hope that the AI safety institutes promised by the UK and US governments to scrutinise the leading companies’ frontier models are up and running soon. The debacle at OpenAI also amplifies the calls of those who argue that artificial general intelligence should only be pursued by scientists at an international, non-commercial research institute akin to Cern. If those who are developing such powerful technologies cannot govern themselves, then they should expect to be governed.