The Shocking Drama at OpenAI Isn’t As Stupid As It Looks

The confounding saga of Sam Altman’s sudden, shocking expulsion from OpenAI on Friday, followed by last-ditch attempts from investors and loyalists to reinstate him over the weekend, appears to have ended right where it started: with Altman and former OpenAI co-founder/president/board member Greg Brockman out for good. But there’s a twist: Microsoft, which has been OpenAI’s cash-and-infrastructure backer for years, announced early Monday morning that it was hiring Altman and Brockman “to lead a new advanced AI research team.” In a follow-up tweet, Microsoft CEO Satya Nadella declared that Altman would become chief executive of this team, which would take the shape of an “independent” entity within Microsoft, operating something like company subsidiaries GitHub and LinkedIn. Notably, per Brockman, this new entity will be led by Brockman himself, Altman, and the first three employees who quit OpenAI Friday night in protest of how those two had been treated.

What happens to OpenAI, until now the leader in the white-hot generative A.I. space thanks to ChatGPT? The company confirmed late Sunday night that Altman would be replaced by a new interim CEO, Twitch co-founder Emmett Shear. But it’s difficult to say what will occur. Many of the company’s staffers appear to have quit in solidarity with Altman, while others are tweeting that “OpenAI is nothing without its people,” earning approving heart emojis from Altman. Kara Swisher shared a letter signed by 505 OpenAI employees requesting that OpenAI’s board members resign—or face the wrath of hundreds of Altman loyalists defecting to the new Microsoft venture. Nadella, meanwhile, wrote that “we remain committed to our partnership with OpenAI” and to “working with” OpenAI’s post-Altman leadership. Shear, for his part, tweeted, “Our partnership with Microsoft remains strong.”

Shear certainly has his work cut out for him. The new OpenAI will have to 1) keep a lid on more potential mass defections from Altman loyalists; 2) make peace with the stunned investors, like Sequoia Capital and Tiger Global Management, that worked relentlessly all weekend to reinstall Altman and purge the OpenAI board; and 3) earn back the trust of a Silicon Valley confounded and upset by how the most influential artificial intelligence firm of its era devolved into such bitter chaos. The confusion won’t abate anytime soon: Ilya Sutskever, the OpenAI chief scientist and board member who reportedly spearheaded the effort last week to remove Altman, tweeted Monday morning, “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.” Even more weirdly, he was one of the 505 OpenAI employees to sign the aforementioned letter—you know, the letter telling board members like himself that they gots to go. (Altman, for his part, has approved of Sutskever’s message.)

The still-ongoing corporate drama is undoubtedly a turning point for A.I. One need only peer at the storm that raged over Altman’s ouster throughout the weekend to see where all this may eventually head. The coup has turned up the heat in a simmering war between two ideological camps invested in the future of the technology. On one side, there are voices—including those on OpenAI’s board—urging caution on rapid artificial intelligence development, like the A.I. doomers, the effective altruists, the longtermists, the rationalists, and pretty much anyone else left jittery by the prospect of self-conscious, self-propelling A.I. (Nevertheless, members of some of these groups are still trying to make a buck on the technology, as evidenced by OpenAI’s very existence.) On the other side are the folks who believe that A.I. progress cannot and should not be limited in any form whatsoever—not by government regulation, not by A.I. engineers, and especially not by quisling board members worried their A.I. creations will go haywire. These warriors, who generally and often explicitly follow a newish ideology of “effective accelerationism”—e/acc for short—somewhat confusingly view Altman as their patron saint, at least in the context of OpenAI’s putsch.

What the heck just happened and what will happen next? How does Altman’s firing signal war? Wasn’t Altman himself warning of A.I. doom and begging for government oversight? Yes, there are still many questions to ask. Let’s run through them.

So, wait, Sam Altman was gonna go back to OpenAI but didn’t?

To briefly recap: On Friday, OpenAI’s board of directors announced, out of nowhere, that Sam Altman would be booted as CEO and board member, following its conclusion that Altman “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” CTO Mira Murati would step up as interim CEO because “the board no longer has confidence in [Altman’s] ability to continue leading OpenAI.” The release also mentioned that company co-founder and chairman Greg Brockman would step away from the board but retain his executive position.

However, things went awry when Brockman tweeted that he was quitting OpenAI altogether. A flood of updates soon demolished the Friday-evening news dam. We quickly learned, according to various reports, that 1) Altman’s firing appeared to have been orchestrated by Sutskever, thanks to long-standing differences over the company’s business model and approach to its potentially dangerous products; and 2) Brockman and Altman had little understanding of why, exactly, they had been fired but were nevertheless shifting their immediate focus to future projects—i.e., a new startup. (Shear’s tweet may have poured some cold water on the supposition that Sutskever booted Altman over concerns regarding A.I. dangers: “The board did *not* remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I’m not crazy enough to take this job without board support for commercializing our awesome models.” We don’t have much clarity beyond that.)

Further, because the board had made this decision without giving sufficient notice to Microsoft and OpenAI’s other investors, the company’s backers were pissed, immediately kick-starting an effort to reinstall Altman and Brockman and, per the former’s request, to purge the mutinous board members. Minute-by-minute reporting from outlets like Bloomberg and the Information indicated that the board was open to resigning so that Altman could return, although it appeared to balk at some of the conditions. (Ironic, then, that its hand-picked replacement, Shear, said that he “will drive changes in the organization—up to and including pushing strongly for significant governance changes if necessary.”) Anyway, it all ended Sunday night with Sutskever confirming to staffers, to their clear chagrin, that Altman would not return.

Yowza. What about this whole war thing, though?

Yeah, that. First, it might be helpful to get back to the core conflict here. As was made apparent by Friday night, Altman’s dismissal had reportedly come as a result of clashes between him and his board members, most of whom were not named Greg Brockman and have been disparagingly labeled as “A.I. doomers.” Some short bios:

Ilya Sutskever: A longtime A.I. researcher who’d previously studied under and collaborated with pioneering neural-network scientist (and now A.I. doomer) Geoffrey Hinton, Sutskever briefly worked in Google’s A.I. unit before Elon Musk persuaded him to help Musk, Altman, and other tech titans found and launch OpenAI in 2015. (This “recruiting battle” was apparently what ended Musk’s friendship with Google’s Larry Page.) Sutskever had recently begun to lash out at Altman, taking umbrage at the CEO’s expansionary visions for OpenAI, according to Bloomberg. In July, Sutskever co-founded an internal unit dedicated to keeping “superintelligence”—i.e., when machines surpass humans—in check. But his job duties were reportedly marginalized at the company, and his complaints to that effect supposedly found a sympathetic ear in …

Helen Toner: An academic with deep ties to effective altruism. How deep? Well, she’s currently director of strategy and foundational research grants at Georgetown University’s EA-aligned Center for Security and Emerging Technology—allegedly the largest A.I. policy center in the United States—but her résumé goes beyond that. She has also worked at the EA-founded charity evaluator GiveWell, the EA-led grantmaker Open Philanthropy Project (which has financially backed a lot of A.I.-obsessed legislative aides in Congress), and the Centre for the Governance of A.I. at Oxford University, which is an affiliate of the Center for Effective Altruism. In case that altruism wasn’t effective enough for you …

Tasha McCauley: Married to fellow effective altruist Joseph Gordon-Levitt, McCauley is currently an adjunct senior management scientist at the RAND Corporation, whose current CEO is the longtime effective altruist (and former White House adviser) Jason Matheny. McCauley also studied at a place literally called Singularity University, in case you were wondering what she thinks about A.I. Anyway, that just leaves us with …

Adam D’Angelo: The CEO of Quora, which has incorporated a lot of A.I. features over the past year, including Poe, a chatbot platform that offers access to models like ChatGPT. Although he apparently is not as much of a “doomer” as the rest, Sutskever reportedly convinced him to throw Sam Altman out.

It’s important to understand who the board members are to decode what happened and why people online are acting the way they are. The effective-altruist movement, which kicked off in 2009 as a spinoff of utilitarianism devoted to raising cash hauls to spend on the most effective lifesaving causes (e.g., anti-mosquito bed nets in countries with high malaria case counts), has recently embraced a “longtermist” view that prioritizes, among a few key concerns, fear of runaway A.I. As part of a transition accelerated by the spectacular fall of famed effective altruist Sam Bankman-Fried, several prominent EAs have pivoted toward stopping the A.I. threat at all costs, whether by forming their own institutions or by advising politicians in the U.S. and the U.K.

OpenAI board members Toner and McCauley are just two of your typical EAs, but it’s necessary to mention that the ideology they follow was the rationale for OpenAI from the jump. Elon Musk, who all but identifies as a longtermist, has been a fellow A.I. “doomer” for years. His explicit goal in co-founding the OpenAI nonprofit and hiring folks like Ilya Sutskever was to develop a firm that would responsibly advance humanity toward AGI—which stands for “artificial general intelligence,” referring to machines with the reasoning capacity of the human brain—and figure out governmental regulations, systemic constraints, and other guardrails that could keep those bots from going full Terminator.

However, Altman staked out a less doctrinaire position here. While he voiced concerns over hyperintelligent computers and pitched the case for governmental oversight of his products, he also disagreed with various OpenAI employees over the pace at which they should advance their products, leading to multiple rifts along the way: in 2018, the ouster of the supercautious Musk from OpenAI’s board altogether; in 2019, the creation of a for-profit branch, with Altman as CEO, that could invite massive investment from the likes of Microsoft, causing more idealistic (read: EA-aligned) staffers to resign and form a separate A.I. firm, Anthropic, in their own image; in 2023, a continued willingness to expand OpenAI’s market-oriented ambitions (big share sales, developer conferences, exploring in-house chipmaking) that apparently did not sit well with Sutskever, who joined his mentor Geoffrey Hinton in endorsing a slowdown of A.I. development earlier in the year. That, on top of the conflicts that led to Sutskever’s corporate role being sidelined at OpenAI, is apparently why Altman had to go.

Who’s on the other side, again?

For a little over a year now, a subset of Tech Twitter has crafted a meme-y mockery of EA meant to symbolize its total opposite—instead of effective altruism, it’s “effective accelerationism,” which is in essence the belief that the A.I. revolution (and tech innovation in general) cannot be stopped, that to even attempt to obstruct its advancement is so foolhardy as to be dangerous, and that everyone should get on board with ushering in a beautiful Singularity future as soon as possible.

This at first gained purchase among a host of anonymous accounts with names like “swarthy,” “Based Beff Jezos,” and @bayeslord (a reference to the field of Bayesian statistics). Beginning in summer 2022, such posters shared manifestos of an “e/acc” ideology squarely opposed to A.I. fearmongers like the effective altruists and their fellow “decels” (aka “decelerationists”). Citing eugenicist scholars from past and present—Ronald Fisher, Nick Land—these Substack screeds made the case for a transhumanist society of intergalactic splendor that can be achieved only if we stop listening to these damn EAs and take A.I.’s powers as far as possible, no matter where that ends up. Sam Altman has played digital footsie with these guys, tweeting at Beff Jezos, “You cannot outaccelerate me” and speaking to the need to “colonize space” with AGI, earning the e/accs’ devotion. And yes, they adore Elon Musk in spite of his A.I. concerns—and in spite of the fact that he was responsible for bringing Ilya Sutskever to OpenAI—because he is just as obsessed as they are with the prospect of extending humanlike consciousness out into interplanetary realms. Never mind that Musk was approvingly engaging with e/acc’s sworn enemy, EA philosopher and A.I. doomer Eliezer Yudkowsky, as recently as Monday morning.

My head hurts.

Yeah, it’s all pretty bizarre.

But … why should we care about what a bunch of nameless weirdos think?

Because their ideas have been increasingly adopted by powerful executives who are not nameless weirdos. Again, you can point to the collapse of Sam Bankman-Fried’s crypto empire as a turning point for EAs, one that cast their charitable movement into crisis and hastened a brewing pivot toward longtermist-pilled A.I. safetyism. (Sorry, I’m learning to speak like these guys do.) SBF himself was concerned with A.I. and splurged his (stolen) cash on A.I. firms like Anthropic, but the worlds of crypto and politics remained his focus. The post-SBF world, which arrived in tandem with the explosion of ChatGPT, would not be tethered to such puny earthling ventures.

As such, over the past year, a host of Silicon Valley power players (and well-known villains) has proudly and openly aligned itself with the e/acc movement: Balaji Srinivasan, Martin Shkreli, Y Combinator President Garry Tan. Most notably, Marc Andreessen published his own e/acc-aligned screed in October: “The Techno-Optimist Manifesto,” a several-thousand-word essay that declared war on the forces that would impede technological progress in any way, arguing that the only way to solve our current problems is to get out of the way and keep building the tech that will obviously and inevitably solve them. In other words: It’s time to build build build build build.

As tech scholar Dave Karpf noted in a response, Andreessen’s piece was basically a retread of the 1990s’ “Californian Ideology” of nonstop progress and techno-utopianism, inspired by the expansion of the world wide web (which was spearheaded in large part by Andreessen himself) and microwaved in 2023 for the age of A.I. That hasn’t stopped e/acc from becoming a very real movement, however: As the Information reported, the effective accelerationists held their first in-person gathering on Sept. 17. Then, in an unofficial afterparty for OpenAI’s Nov. 8 DevDay, hundreds of e/accs gathered in a San Francisco nightclub for a night of music DJ’d in part by Grimes, the art-pop musician and former Musk paramour who, in spite of her own A.I. enthusiasm, didn’t hesitate to tell the crowd she wasn’t quite as e/acc-aligned as they were, preferring that there be some regulation of A.I. advancement.

My head … still hurts.

Don’t worry, I’m getting to the point. So, considering how many e/accs loved Sam Altman and his approach to A.I.—and despised the effective altruists who’ve continually urged over the past year that we either pause or altogether halt any A.I. progress—the news that the explicitly EA-aligned members of OpenAI’s board showed Altman the door confirmed to the e/accs that their EA opponents were attempting to shut down their movement. Some notable reactions:

Coinbase CEO Brian Armstrong: “Every talented employee at OpenAI should quit and join Sam/Greg’s new thing. … This time, skip the woke non-profit board, eject the decels/EAs, maintain founder control, avoid nonsensical regulation, and just build. Accelerate progress.”

Balaji Srinivasan: “every AI company now needs to choose their belief. EA or e/acc?”

@Hosseeb: “This weekend we all witnessed how a culture war is born. E/accs now have their original sin they can point back to. This will become the new thing that people feel compelled to take a side on—e/acc vs decel—and nuance or middle ground will be punished.”

@BasedDaedalus: “effective altruism is a cancerous ideology and we must fight it back with all we’ve got. drop all support to all the parasites that are doing this right fucking now.”

Self-professed “e/acc ally” CTJ Lewis, whom Emmett Shear apparently blocked on Twitter: “we really are at war now. this is the doomer terrorist opening salvo. ‘e/acc’ must organize into an actual entity, with an actual name, and an actual mission and an actual budget.”

Basically, the e/accs see Sam Altman’s ouster from OpenAI as a shot heard ’round the world—an insurrection whose fallout shall engender a revolution of unimpeded technological acceleration.

In case you were wondering how the e/accs believe that this war is going: CNBC’s Saturday-night report that Meta had disbanded its “Responsible A.I.” team was hailed as an “e/acc cultural victory,” while the official Effective Altruism Forum appears to be in shambles over how “EA is not coming out well” and may “find itself in a hostile atmosphere in what used to be one of the most EA-friendly places in the world,” as tech researcher Nirit Weiss-Blatt uncovered. Satya Nadella, one of the very Big Tech guys whom e/accs were supposed to be aligned against, is now a hero. Oh yeah, the scene is also psyched about the recent victory of Bitcoin-loving, anti-leftist, anarcho-capitalist libertarian Javier Milei in Argentina’s presidential runoff. In a Twitter Space that Martin Shkreli set up after Altman’s fate was sealed, a bunch of e/accs gathered to agree that their ideology had come to “dominate” Silicon Valley in just a year. Their hope now is that the Altman-Microsoft team-up takes an e/acc stance, instead of the “oligopolistic accelerationist,” or o/acc, ideology that OpenAI represented before. (Namely, centralizing A.I. power among a few big companies … which the new Microsoft arrangement isn’t gonna help, but anyway.)

The one thing that’s ruining their party? Tweets and likes from OpenAI’s new interim CEO, Emmett Shear, that seem to endorse the EAs’ cautious approach to A.I. and also appear to display a bias against e/acc.

As someone relatively uninformed before now, which side should I be rooting for here?

I personally would say … neither? Frankly, you don’t need to be e/acc-pilled to consider the EA mission to smother the A.I. Singularity a little ridiculous, not least because the resource requirements for and limited capabilities of even the most advanced A.I. make such apocalyptic predictions extremely unlikely. Then again, the e/accs hold fast to a similarly misguided belief that the only solution to our technological crises is … more technology, which should be left under the control of a bunch of eugenicist netizens who believe in dismantling government, purging any concerns around social justice from our future societies, and dismissing any and all critiques of e/acc guys from the public discourse. Sounds a little authoritarian to me, but if I say that, I guess I’m just another woke-pilled doomcel who doesn’t want to see our world blossom into … uh … well, whatever these guys want.

Why the hell did any of this happen in the first place?

I don’t know, buddy. But it’s everyone’s problem now!