The Man Behind the A.I. Revolution Just Got Fired. It’s a Scandal. Here’s What We Know.

This is a breaking news post that has been updated with the latest information.

Late Friday afternoon, the famed artificial intelligence company OpenAI made a shocking announcement: Effective immediately, its board of directors was firing CEO Sam Altman, both from his leadership position and from the OpenAI board. CTO Mira Murati will assume Altman’s duties as the company oversees a “leadership transition” and seeks a “permanent successor” for the top job. In addition, OpenAI president and co-founder Greg Brockman is “stepping down as chairman of the board”—although it was originally implied that Brockman would remain at OpenAI in a corporate role, he later shared that “based on today’s news, i quit.” In a tweeted statement, Altman wrote: “i loved my time at openai. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people. will have more to say about what’s next later.”

On Friday night, Brockman and Altman released a joint statement from the former’s X/Twitter account, claiming they were “shocked and saddened by what the board did” and were “still trying to figure out exactly what happened.” Per their account, Altman was asked Thursday night by OpenAI chief scientist and board member Ilya Sutskever to attend a Google Meet on Friday. When Altman joined the meeting, he was informed by Sutskever and the entire board—except for Brockman—that he was being fired. Shortly after, Sutskever called Brockman into a meeting to inform him that Altman was sacked and Brockman would be removed from the board, though he “was vital to the company and would retain his role.” Both ex-OpenAI execs also claimed that Murati was the only other OpenAI leader who’d had any awareness of the planned overhaul before Friday. Altman later followed up with another tweet, calling the fiasco “a weird experience in many ways.”

It’s a heckuva pre-weekend news dump from what is perhaps the best-known and most influential A.I. company of the decade, if not the century. After all, OpenAI’s late-2022 releases of its generative-text bot, ChatGPT, and its automated imagemaker, DALL-E 2, did more than anything else to mainstream and advance the much-hyped artificial intelligence revolution. ChatGPT was, for a time, the fastest-growing app in history, and its widespread use goaded Big Tech giants like Facebook and Google to put out their own A.I. tools as quickly as possible, fueling an arms race that’s already disrupted countless industries from digital media to show business. Altman became the singular face of these advancements, explaining his company and its magical creations to magazine reporters, anxious members of Congress, and wide-eyed world leaders. (It was his now-almost-year-old tweet that launched ChatGPT into the stratosphere.) Plus, OpenAI had just held its first developers conference last week, with Altman giving the keynote address.

What happened to Sam Altman, and where does OpenAI go from here? Why does it matter? And was Altman’s job itself displaced by A.I.???? (Probably not.) Here’s what we know now, and why the tech world is freaking out about this.

Who’s this guy, again?

Sam Altman’s origin story is a classic Silicon Valley fable: In 2005, at the age of 19, he dropped out of Stanford University to co-found a location-sharing social network, Loopt, that earned investments and attention from big venture capital firms including Sequoia Capital, New Enterprise Associates, and a then-upstart Y Combinator. Loopt never got close to reaching the sensational scale of Facebook and Twitter, but it did establish Altman on the scene, and he went on to start his own venture capital firm, Hydrazine Capital. Altman later leveraged this experience to become president of Y Combinator, overseeing and expanding the fund’s vast portfolio of high-profile companies—including Reddit, which he helped usher through a rocky transitional period in 2015. That same year, flanked by a powerful crew of tech figures—early Facebook backer Peter Thiel, LinkedIn’s Reid Hoffman, and Stripe’s Greg Brockman, among others—Altman announced the formation of OpenAI and poured reams of money into the nonprofit, becoming co-chair alongside Elon Musk, who’d often been a voice of caution when it came to A.I.

Wait, OpenAI’s been around that long? And it was a nonprofit?

Yes! In a blog post from Dec. 11, 2015, co-written with Ilya Sutskever, Brockman noted that the new firm’s goal was “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” That, as we now know, didn’t last long: In 2018, Musk professed his worry that OpenAI was falling behind Google when it came to keeping up with the rapid pace of A.I. developments, and expressed his desire to take over the whole thing. Altman and the OpenAI board rejected this, so Musk resigned from the firm and yanked his planned investments. Faced with a financial shortfall, Altman himself proceeded to take over OpenAI and transform it into a for-profit entity, allowing it to attract the types of financial investment it couldn’t as a mere nonprofit. Microsoft soon came knocking with $1 billion and a bunch of high-tech resources, allowing OpenAI to catch up with competitors and, eventually, burst into the mainstream with the awe-inspiring powers of ChatGPT.

And Sam had been top dog ever since?

Indeed. By 2020, Altman had fully stepped away from Y Combinator to focus on OpenAI and a few other weird side projects, like the eye-scanning cryptocurrency token WorldCoin. (According to CoinDesk, WorldCoin’s value plunged by 12 percent after OpenAI’s Friday announcement.)

Artificial intelligence remained at the center of Altman’s attention, though—as well as the subject of his most dire fears. Altman, a doomsday prepper, was influenced by tech-world thinkers who raised alarm over the prospect of an all-powerful A.I. that could learn to think for itself and, as a result, trample over human civilization as we know it. Altman and other like-minded folks did not trust companies like Google to be responsible with their creations, and they wanted OpenAI to be a model of powerful A.I. that was aptly guardrailed and regulated to prevent human obsolescence.

… huh.

Yeah, it’s a whole thing, a guiding philosophy that spurred Altman and his colleagues not only to craft advanced tools like ChatGPT, but also to warn of what these things could potentially do down the line. You may remember that it was a motif in his testimony to Congress earlier this year, and a view that’s been adopted by a lot of people in government.

So, with Altman gone, does that mean OpenAI is shifting its perspectives? Or its mode of business? Or … ?

We don’t know for sure just yet. The nut of the OpenAI board statement reads: “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.” There haven’t been any specifics mentioned either by OpenAI execs or Altman, but one can fairly hazard that there’s something bad lurking in the background here. The board further wrote that “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam’s many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward.” (And no, we don’t know why Brockman was dismissed from his position as board chair, either.)

How bad is bad?

It’s hard to say at the moment, but if Brockman and Altman’s account of Friday’s events holds up, it would appear that Ilya Sutskever spearheaded their removal, potentially indicating some ideological disagreements at work here. An anonymous source reportedly told Bloomberg that Altman had been clashing with OpenAI’s board over “differences of opinion on AI safety, the speed of development of the technology, and the commercialization of the company.” It may have also come down to office politics: About a month ago, “Sutskever’s responsibilities at the company were reduced,” thanks to an oppositional alliance between Altman and Brockman. However, “Sutskever later appealed to the board, winning over some members, including Helen Toner, the director of strategy at Georgetown’s Center for Security and Emerging Technology.” Notably, both Toner and her employer are tied to the “effective altruism” movement, which considers superadvanced A.I. an urgent threat to the well-being of humanity. Altman had been influenced by thinkers in this space, but he was never as closely tied to the movement as Toner was, and their conflict may be indicative of deeper rifts between the EA community and other A.I. tinkerers. It wouldn’t be the first such instance: A few OpenAI senior employees left the company in 2019 in order to start their own A.I. company, Anthropic—which was funded and staffed in large part by famous EAs like Sam Bankman-Fried. The EA philosopher Eliezer Yudkowsky has likewise stated he is “8.5% more cheerful about OpenAI going forward” with Mira Murati in charge.

Prior to Brockman and Altman’s joint statement, tech folks online had revived speculation around some uglier allegations. For a lengthy September profile of Sam, New York magazine’s Elizabeth Weil spoke in depth with his younger sister, Annie Altman, who explained how she fell out with her family more broadly, citing Sam’s emotional callousness and Silicon Valley ambitions. The profile also brought attention to a series of years-old tweets from Annie in which she alleged “experienc[ing] sexual, physical, emotional, verbal, financial, and technological abuse from my biological siblings, mostly Sam Altman.” To be clear, these remain allegations, and were not mentioned by anyone (even Annie) as a cause of the firing.

Wow. So … what will OpenAI’s “new leadership” look like?

Until the board appoints a permanent replacement for Altman, Murati, the CTO, will take the helm. According to a Fast Company profile from March, which deemed Murati “one of tech’s most influential innovators,” Murati assumed her executive role at OpenAI in May of last year, and has reportedly been in charge “of OpenAI’s strategy to test its tools in public.” This suggests it was, at least in part, her idea to release ChatGPT in November of last year, and to allow both human reception of—and experimentation with—the chatbot to take place so publicly. She previously worked at Tesla, helping develop the Model X and its (controversial) Autopilot driver-assistance program, and then worked in augmented reality for a few years before joining OpenAI in 2018. Even though Musk himself left the nonprofit that year, it’s reasonable to assume Murati has retained plenty of Muskian ideas about A.I. from her time at Tesla. A notable quote of hers from the Fast Company piece: “You could make technological progress in a vacuum without real-world contacts. But then the question is, are you actually moving in the right direction?”

For her first move as chief executive, Murati told OpenAI staff on Friday that the company’s relationship with Microsoft was “stable,” according to reporting from the Information. That’s gotta be a relief to investors.