
The OpenAI Board Crisis: Five Days That Shook AI

Sam Altman's sudden ouster at OpenAI exposes the tensions at the heart of AI development

Two days ago, the OpenAI board of directors fired Sam Altman as CEO. What followed was one of the most dramatic corporate crises in recent technology history, a chain of events that exposed deep tensions about how the most powerful AI technology in the world should be governed, developed, and deployed.

As I write this, the situation is still evolving. But the events so far reveal something profound about the state of AI development and the structural contradictions at the heart of the organization that has been leading it.

What Happened

On Friday, the OpenAI board announced that Sam Altman was being removed as CEO, stating that he "was not consistently candid in his communications with the board." No further details were provided. Greg Brockman, the president and co-founder, was removed from the board and subsequently resigned from the company. Mira Murati, the CTO, was named interim CEO.

The announcement sent shockwaves through the technology industry. OpenAI is arguably the most important AI company in the world. Its models power thousands of applications. Microsoft has invested billions of dollars. The company was reportedly in discussions for a share sale that would value it at over eighty billion dollars.

Within hours, the situation escalated. Reports emerged that investors, led by Microsoft, were working to reinstate Altman. Employees began threatening to resign en masse. The board's decision appeared to have been made with minimal consultation and no apparent succession plan.

By Sunday night, the dynamics had shifted further. Emmett Shear, the former CEO of Twitch, was named interim CEO. Microsoft's Satya Nadella announced that Altman and Brockman would be joining Microsoft to lead a new advanced AI research team. And hundreds of OpenAI employees signed a letter threatening to follow Altman to Microsoft unless the board resigned and reinstated him.

The Structural Contradiction

To understand why this happened, you need to understand OpenAI's unusual corporate structure. OpenAI was founded in 2015 as a nonprofit with a mission to ensure that artificial general intelligence benefits all of humanity. In 2019, it created a "capped profit" subsidiary to attract the investment needed to fund large-scale model training. Microsoft's billions went to this subsidiary.

The result is a structure where a nonprofit board has ultimate governance authority over a company that has taken billions in commercial investment and generates significant revenue. The board's mandate is to advance the nonprofit mission, which may or may not align with maximizing commercial value.

This structure was designed to prevent the pursuit of profit from overriding safety considerations. In theory, it is an elegant solution to the alignment problem at the corporate level. In practice, it creates a situation where a small group of board members can make decisions that affect billions of dollars of investment, thousands of employees, and countless downstream applications, with minimal accountability to anyone other than the mission as they interpret it.

What It Reveals About AI Governance

The OpenAI crisis is a microcosm of a larger question that the entire AI industry needs to address: who gets to make decisions about the development and deployment of the most powerful AI systems, and according to what criteria?

The board apparently believed that Altman was moving too fast, prioritizing commercial deployment over safety considerations. Altman's supporters believe the board acted rashly, without adequate justification and without understanding the consequences of its decision.

Both positions have merit. The pace of AI deployment has been extraordinary, and there are legitimate questions about whether adequate safety testing and evaluation are happening before new capabilities are released to millions of users. At the same time, removing the CEO of the world's leading AI company with no public justification and no transition plan is not responsible governance; it is chaos.

The Microsoft Dimension

Microsoft's response has been fascinating to watch. Within forty-eight hours of the firing, Satya Nadella positioned Microsoft as the safe harbor for Altman and any employees who wanted to follow him. The message was clear: if OpenAI's board destroys the company, Microsoft will absorb the talent and continue the work.

This highlights the power dynamics at play. Microsoft has invested more than ten billion dollars in OpenAI and depends on its technology for products across the company. That a six-person nonprofit board could make a decision threatening that investment, reportedly with only minutes of notice to Microsoft, is remarkable.

It also illustrates the fundamental tension in OpenAI's relationship with Microsoft. Microsoft needs OpenAI for its commercial AI strategy. OpenAI needs Microsoft for compute and distribution. But their incentives are not perfectly aligned. Microsoft wants reliable, commercially deployable AI technology. OpenAI's mission, at least as originally conceived, is about something broader and potentially at odds with pure commercial optimization.

Impact on the AI Ecosystem

The practical impact on the AI ecosystem is significant, regardless of how the crisis resolves.

Confidence in OpenAI as a platform: Thousands of companies have built products on OpenAI's APIs. Those companies are now questioning the stability of their most critical dependency. Platform reliability is not just about uptime; it is about organizational stability and predictable governance. This crisis has damaged that trust.

Acceleration of alternatives: Every company that was considering diversifying away from OpenAI just got a powerful argument for doing so. Anthropic, Google, and the open-source model ecosystem will all benefit as organizations seek to reduce their dependence on a single, apparently unstable provider.

The safety debate: The crisis has brought the AI safety debate into mainstream consciousness in a way that technical papers and policy discussions never did. Whether the board's actions were motivated by genuine safety concerns or internal politics (or both), the public is now aware that there is a real tension between the commercial imperative to deploy AI quickly and the cautionary imperative to deploy it safely.

My Perspective

From where I sit, building AI capabilities at a major entertainment company, this crisis reinforces several convictions.

First, multi-provider strategy is not optional. Depending entirely on one AI provider, especially one with the governance uncertainties that OpenAI has now demonstrated, is an unacceptable risk for any serious enterprise deployment. We need to be able to work with multiple model providers and be prepared to switch between them.
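To make that first point concrete, here is a minimal sketch of what a provider-agnostic seam might look like in Python. Everything in it is illustrative: the adapter names, the `complete` signature, and the `summarize` helper are hypothetical placeholders, not any vendor's real SDK.

```python
from dataclasses import dataclass
from typing import Protocol


class ChatProvider(Protocol):
    """The one interface application code is allowed to depend on."""

    def complete(self, prompt: str) -> str: ...


@dataclass
class OpenAIAdapter:
    """Illustrative adapter; a real one would wrap the vendor's SDK."""
    model: str = "gpt-4"

    def complete(self, prompt: str) -> str:
        # Stand-in response; a real adapter would call the hosted API here.
        return f"openai/{self.model}: {prompt}"


@dataclass
class LocalLlamaAdapter:
    """Illustrative adapter for a self-hosted open-source model."""
    model: str = "llama-2-70b"

    def complete(self, prompt: str) -> str:
        # Stand-in response; a real adapter would call local inference here.
        return f"local/{self.model}: {prompt}"


def summarize(provider: ChatProvider, text: str) -> str:
    # Application code sees only the interface, so swapping vendors
    # becomes a configuration change rather than a rewrite.
    return provider.complete(f"Summarize: {text}")
```

The point is the seam, not the adapters: `summarize(OpenAIAdapter(), report)` and `summarize(LocalLlamaAdapter(), report)` are interchangeable from the application's perspective.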

Second, the case for open-source models just got stronger. Llama 2 and its successors offer a path to AI capability that does not depend on the internal politics of any single company. The infrastructure investment required to run your own models is real, but the risk reduction may justify it.

Third, the AI safety question is not going away, and it should not. The fact that a disagreement about safety and deployment pace can trigger a crisis of this magnitude tells you how high the stakes are. As AI capabilities continue to advance, the governance and safety questions will only become more urgent.

Watching and Waiting

As I write this, the situation remains fluid. Altman may return to OpenAI. The board may be reconstituted. The company may fracture. Or some resolution I cannot anticipate may emerge.

Whatever happens, the OpenAI board crisis of November 2023 will be studied for years as a case study in AI governance, corporate structure, and the tensions between safety, speed, and commerce in the development of transformative technology.

The code I am writing today against OpenAI's APIs still works. The models still respond. The applications still function. But the ground underneath has shifted, and everyone building on this platform is now thinking about what their contingency plan looks like.
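A contingency plan can start very small: a failover wrapper that tries the primary provider and falls back when it errors. A sketch of that shape, using stand-in functions rather than real API clients:

```python
from typing import Callable

# A completion function: prompt in, text out.
CompleteFn = Callable[[str], str]


def with_fallback(primary: CompleteFn, backup: CompleteFn) -> CompleteFn:
    """Return a completion function that fails over to `backup` on error."""

    def complete(prompt: str) -> str:
        try:
            return primary(prompt)
        except Exception:
            # In production you would log the failure and alert here.
            return backup(prompt)

    return complete


# Stand-ins for real provider calls:
def flaky_primary(prompt: str) -> str:
    raise RuntimeError("provider outage")


def steady_backup(prompt: str) -> str:
    return f"backup: {prompt}"


complete = with_fallback(flaky_primary, steady_backup)
```

Callers invoke `complete(prompt)` as usual and never see the outage; whether silent failover is acceptable, versus surfacing degraded quality to the user, is itself a product decision.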

That is a healthy thing, even if the circumstances that prompted it are not.
