
Sam Altman’s Steve Jobs Moment

The senior executives of OpenAI in March. From left, Mira Murati, CTO and briefly interim CEO; Sam Altman, the ousted chief executive; Greg Brockman, OpenAI’s president who quit on Friday (Nov. 17) in allegiance with Sam; and Ilya Sutskever, chief scientist and board member who led the coup. Credit: Jim Wilson/The New York Times

If you come for the king, you best make sure you know what you are doing.

The ever-developing drama has finally come to a close, at least for now. OpenAI, the nonprofit behind ChatGPT, averted catastrophe and reinstated Sam Altman as CEO, five days after a board-led coup ousted him.

Sam Altman, the co-founder of OpenAI, was fired as CEO on Friday, November 17th, 2023, in a sudden exit that stunned the technology industry. The board of directors concluded that he was “not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities,” and said it no longer had confidence in his ability to continue leading OpenAI. The board then appointed the chief technology officer, Mira Murati, as OpenAI’s interim CEO.

So how did this dramatic turn of events happen? According to various sources, on Friday, Altman was asked to join an impromptu Google Meet by chief scientist and board member Ilya Sutskever, where he was told he was being fired immediately and that the news was going out soon. Shortly after, Greg Brockman, co-founder and President of OpenAI, was told that he was being removed from the board but would retain his position as President because he was vital to the company. Brockman subsequently quit in protest, as did other key members, such as OpenAI’s director of research, Jakub Pachocki. Brockman claimed in a post on X, formerly Twitter, that even though he was the chairman of the board, he was not part of the board meeting where Altman was ousted.

Key investors such as Microsoft, which invested $13 billion and holds a 49% stake, were notified only a minute before the public announcement. Following these actions, the board consisted of Ilya Sutskever; Adam D’Angelo, chief executive of Quora; Tasha McCauley, an adjunct senior management scientist at the RAND Corporation; and Helen Toner, director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology.

I had the chance to speak with Fieldston Computer Science teacher Kurt Vega about his thoughts regarding the situation. Vega said that the situation is “very strange and unprecedented. I really can’t think of a similar situation in modern corporate governance … This is strange because there is something that is hidden that convinced the four members of the former board to fire Sam. No matter what the reasons were, the way it was done seems ill-considered and naive.”

To this day, there has been no official explanation of the board’s true concerns or motivation behind their actions. Employees at the company were told that Altman’s removal had nothing to do with “malfeasance or anything related to our financial, business, safety or security/privacy practice,” according to a report by The New York Times. 

Instead, the dominant narrative is that a philosophical rift over the pace of AI development most likely drove the board’s actions. Two board members, McCauley and Toner, have ties to the Rationalist and Effective Altruist movements, a community deeply concerned that AI could one day destroy humanity. Toner published a paper criticizing the release of ChatGPT and GPT-4 while praising Anthropic, a competitor AI company founded by ex-OpenAI employees that emphasizes the safe deployment of AI. According to reports, Sutskever had grown increasingly concerned about safety following recent breakthroughs and felt that Altman was pushing too fast and was not transparent about the technology’s development. In short, the board members believed it was imperative to decelerate, and that Altman was not the right person to do so.

A day after Altman’s ouster, reports surfaced that the board was in negotiations to bring him back to the company. However, these negotiations fell through, and on the morning of Monday, Nov. 20, it was announced that Altman and Brockman would join Microsoft and run their own advanced research and development team. Microsoft also offered jobs to all top OpenAI employees; thus, a mass exodus seemed imminent.

OpenAI hired a new interim CEO, Emmett Shear, the former CEO of Twitch, who openly embraces “A.I. doomerism,” the fear that artificial intelligence could cause the end of humanity. This did little to calm other investors, as it further signaled that OpenAI planned to shift away from a profit focus.

Later in the day on Monday, Ilya Sutskever, who led the coup, said on X that he “deeply regret[s] [his] participation in the board’s actions,” and more than 700 of OpenAI’s 770 employees signed a letter threatening to quit unless the board resigned and Altman returned. The employees stated they would follow Altman and Brockman to Microsoft if their demands were not met.

This prompted a new set of negotiations. Finally, OpenAI agreed that Altman would return as CEO, with a new initial board chaired by Bret Taylor, a former co-CEO of Salesforce Inc. Other board members include former US Treasury Secretary Larry Summers and existing member Adam D’Angelo. Three of the four board members who instigated the coup are gone from the board: Ilya Sutskever (who is still at the company, just not on the board), Helen Toner, and Tasha McCauley. According to Bloomberg, to reach an agreement, Altman conceded not to join the initial board, though there is an expectation that he will eventually become a director.

The recent series of events may leave OpenAI and Altman in an even stronger position than before. The company is poised to have a revamped board with greater expertise, befitting its status as a leading force in technology. Altman’s authority will be reinforced, and the episode has fostered a heightened sense of unity among OpenAI employees.

However, it remains to be seen how this will affect the company’s mission to ensure that artificial general intelligence benefits all humanity. Ethical questions remain about whether people should slow down research into AI and focus on safety and alignment. According to Vega, “the main players should prioritize safety over acceleration. This technology is so foundational that getting it wrong will have very, very bad consequences.” Yet others argue that slowing innovation too much could also be detrimental, costing lives that AI solutions could save and allowing authoritarian regimes to dominate the field. 

Concerning changes in computer science education, Mr. Vega believes, “The focus should be shifted away from coding towards a greater emphasis on ethics and the practical use of technology.” This sentiment captures an important perspective emerging from these events. While innovation remains crucial, pursuing technological progress responsibly and for the benefit of humanity should be the guiding priority. 

Sam Altman’s abrupt exit from OpenAI echoes the unpredictability and dramatic twists that characterized the tech industry during Steve Jobs’ leadership at Apple. Altman, much like Jobs, played a pivotal role in transforming OpenAI into a powerhouse, but his departure raised questions about leadership dynamics and the need for transparency within innovative companies. Unlike Jobs, who took 12 years to return, Altman is back after only five days. If Altman is indeed following a career trajectory similar to Jobs’, we have arrived at a fascinating turning point in his narrative. When Jobs came back to Apple after a coup, he introduced the iMac, iPod, and iPhone, all of which have revolutionized our society.

But the similarities between OpenAI and Apple stop there. There is a fundamental difference in the governance structure of these two companies: Apple, like many other tech companies, is a for-profit corporation whose objective is to increase shareholder value (i.e., profit-making), whereas OpenAI is a non-profit foundation whose objective is to serve the best interests of humanity. While the job of the board of a for-profit company is to look out for the interests of the company’s investors, the job of the board of a non-profit is to ensure the company’s operations serve the best interests of humanity. Ultimately, finding the right balance between seeking profit and accelerating AI capabilities with thoughtful consideration of safety, transparency, and alignment with human values will come down to the responsible leadership Altman pledges.
