Is Musk Really a Victim?
- Core Thesis: As of May 2026, the legal dispute between OpenAI and Elon Musk has revealed the complex journey of this AI giant shifting from its non-profit idealism to commercial reality. Trial evidence indicates that internal conflicts existed as early as 2017, with the non-profit mission gradually eroded by the pursuit of commercial control, computational power needs, and vast capital. The promise to "benefit all of humanity" has lacked institutional safeguards in practice.
- Key Elements:
- In 2017, OpenAI internally recognized that the non-profit structure was unsustainable for AGI research and development, initiating discussions on a for-profit model, marking the early emergence of cracks in its organizational structure.
- The trial disclosed Greg Brockman's private diary, which touched on wealth goals and anxieties regarding the ethical boundaries of "non-profit" status. The estimated value of his OpenAI equity stake is close to $30 billion.
- Elon Musk is portrayed as someone concerned with AI risk yet also driven by a desire for control. He proposed that Tesla absorb OpenAI. At the heart of the legal battle lies a conflict over mission and control.
- Sam Altman's credibility has been called into question, with several former colleagues (including Sutskever and Murati) referring to him as a "liar" in court. This has undermined his authority as a guardian of the mission.
- Microsoft's deep entanglement ($13 billion investment and control over computing power) has made it difficult for the non-profit board to exercise independent oversight after the 2023 boardroom crisis, leaving the mission drowned by commercial realities.
Original Author: Sleepy
In May 2026, at the Oakland federal court, OpenAI's filters were peeled away layer by layer.
What was presented to the jury was a messy Rashomon:
Greg Brockman's private diary, a mix of anxiety and calculation; Elon Musk's uncompromising grip on power; Sam Altman's integrity issues teetering on the edge; Microsoft's vast shadow over computing power and capital; and the dramatic yet hastily concluded boardroom coup at the end of 2023.
Amidst all this chaos, there was one question that sounded grand but became remarkably specific in court: When OpenAI said it would "benefit all of humanity," did that promise still hold?
As of May 15, 2026, no final verdict has been reached in this trial, and the jury's opinion remains pending. But one thing has undeniably happened: OpenAI has been dragged back from myth to reality.
In recent years, OpenAI has often been written as a story about the future. ChatGPT exploded in popularity, Altman traveled the world, and large models wormed their way into offices, schools, phones, and corporate processes. This was a company born with a quasi-religious sense of purpose, speaking of humanity's destiny, the awakening of intelligence, the boundaries of safety, and the dawn of tomorrow—like a lighthouse built ahead of time for humanity.
But the court didn't care about that. The court asked for facts.
All of Humanity Takes the Witness Stand
In 2015, OpenAI was born pristine and clean.
It claimed to be a non-profit AI research company, aiming to maximize the benefits of digital intelligence for all of humanity, free from the constraints of financial returns.
Altman and Musk were co-chairs, Brockman was CTO, and Ilya Sutskever was research director. At that time, OpenAI seemed to embody the last vestiges of Silicon Valley's golden age of idealism—where the brightest minds served humanity's future, not any single company.

A decade later, that promise was brought into the courtroom.
Musk's side argues that Altman, Brockman, and OpenAI used the non-profit mission to secure his funding and trust, only to later pivot to a for-profit structure, benefiting themselves and Microsoft.
OpenAI's side argues that Musk's money was a donation, without specific conditions; he was aware that a for-profit structure was being discussed, but he didn't get control; he's suing now out of regret for leaving and because his own company, xAI, has become a competitor.
The language on both sides is quite harsh.
Musk positions himself as the guardian of the mission. OpenAI positions him as a founder who lost control. One says, "You stole a charity," the other says, "You just couldn't control it." In the end, the most awkward part isn't which side tells a better story, but that the repeatedly invoked "all of humanity" never actually had a seat at the table.
The term "all of humanity" appeared in founding announcements, charters, speeches, and media reports, occupying the moral high ground.
But in court, it was broken down into evidence: Did Brockman's diary reflect true intent? What did the 2017 emails reveal? What exactly did the 2019 OpenAI LP transfer away? Did Microsoft's cloud and money change the company's direction? Can Altman's integrity issues sustain the company's plea of "trust us"?
The more an AI company claims to represent humanity, the more it should be asked specific questions: Which humanity are you talking about? Who signs for these people? Who can remove you? Who can audit the books? Who can say no?
The court couldn't answer these questions for the public, but it forced them out into the open.
OpenAI's story no longer looks like the growth history of a futuristic company, but rather a settling of old scores. Once the books were opened, people realized the cracks appeared long before ChatGPT's explosive success.
The Cracks of 2017
OpenAI didn't change overnight.
If you only start looking from the ChatGPT era, you might mistakenly think OpenAI was pushed toward money only after its success, like so many companies that preach ideals first and do the business math later.
But the trial rewound the clock back to 2017. Back then, OpenAI didn't have today's clout, and AGI wasn't yet a household term. But the founding team had already hit a problem: if they truly wanted to build Artificial General Intelligence, donations and passion would be far from enough.
This is the toughest moment for Silicon Valley idealism. The bigger the ideal, the bigger the bill. The bigger the bill, the harder it is for the organization to stay clean. Those grand visions of humanity spoken on stage eventually boil down to chips, servers, engineer salaries, cloud resources, and long-term capital. Without these, AGI is just a wish; with them, the non-profit structure becomes unsustainable.
In 2017, OpenAI internally discussed various paths: a for-profit affiliate, a B-corp, partnerships with existing companies, or attaching itself to Tesla. Musk proposed having Tesla absorb and fund OpenAI. OpenAI's side counters that Musk wasn't simply opposed to going for-profit; control was his non-negotiable demand.
That year also featured a memorable scene: Dota.
After OpenAI's AI defeated top human players in Dota 1v1, the team felt, more strongly than ever before, that this thing could become huge. The trial mentioned a discussion at Musk's San Francisco house, later called the "haunted mansion meeting," where they celebrated the technical breakthrough and debated whether OpenAI should go for-profit.

Many companies start reinterpreting themselves after product success. OpenAI did it earlier. Before it became the giant it is today, the founders knew that the non-profit structure couldn't sustain the AGI narrative. From the very beginning, the ideal required a heavier machine to sustain it.
Thus, an organization seemingly about scientific safety quickly entered control negotiations.
Who would hold the steering wheel? Musk or Altman? The non-profit board or future investors? Or that never-truly-present "all of humanity"?
Looking at Musk now, he was certainly an important early funder and helped build OpenAI's non-profit narrative. But he was also one of the first in this story to see the immense power AI could bring. And after seeing it, he wanted to hold onto it tightly.
Musk's Steering Wheel
In the trial, Musk repeatedly emphasized one thing: OpenAI was stolen.
This phrasing is powerful. It compresses a complex organizational shift into a simple, understandable claim. A charity meant to serve humanity turned into a massive commercial machine. It sounds like embezzlement and moral betrayal.
But the courtroom story isn't that simple.
OpenAI's lawyers focused their cross-examination of Musk on dismantling his purely innocent victim image. They presented emails and documents, asking if he knew OpenAI might need a for-profit structure, and whether he tried to absorb OpenAI into Tesla or gain dominance in other ways.
Musk didn't like being dissected this way. He said the questions were trying to "trick me." The judge repeatedly asked him to answer directly. When he tried to steer the conversation towards the existential risk of AI, the judge reminded him that the case wouldn't focus much on extinction.
These scenes are quite revealing of Musk's character.
He is accustomed to grand narratives. Humanity's destiny, AI risk, Mars, free speech, civilization's survival—these are his preferred topics. But the court demanded answers to smaller, sharper questions: When did you know? Did you agree? Did you want control? Was your money for OpenAI a donation or an investment?
The contradiction within Musk mirrors the contradiction in OpenAI's story. He might genuinely fear uncontrolled AI, and he might genuinely believe OpenAI betrayed its mission. But that doesn't preclude him from also wanting the company to operate according to his will.
The more someone believes they are saving humanity, the easier it is for them to stubbornly believe they should hold the steering wheel.
This isn't just Musk's problem. It's the undercurrent of many grand Silicon Valley narratives. They like to equate personal will with a human mission, control with responsibility, and organizational power with future necessity. Musk just expresses it more outwardly, more intensely, and more visibly.
So, in this case, Musk is not just the accuser; he is also the evidence itself.
Brockman's Diary
Greg Brockman wasn't originally the most eye-catching figure in this drama.
Musk is too dramatic, Altman too central, Sutskever too tragic, Microsoft too big. Brockman is in between—a key early founder and a crucial player in the company's daily operations. But this trial thrust him into the spotlight because his private diary became evidence.
In the second week of the trial, Brockman was repeatedly questioned about his diary, emails, and texts. Musk's side used these materials to prove that he and Altman had self-serving motives from the start. OpenAI's side argued that Musk took things out of context.
The diary contained wealth goals, anxieties about the company's revenue path, and phrases like "making the billions." More strikingly, it included a self-reminder that he could not "steal" the non-profit from Musk, warning of the risk of moral bankruptcy. Musk's lawyers latched onto these entries. Brockman denied deceiving Musk, stating these private writings were not meeting minutes but stream-of-consciousness personal notes.
A diary isn't a verdict. It doesn't directly prove fraud. It could contain raw thoughts written during moments of fatigue, anxiety, and self-analysis. Every writer knows private notes don't equate to final positions or complete facts.
However, the real significance of Brockman's diary isn't what crime it proves, but that it shows they knew where the boundaries were. The early core of OpenAI wasn't blindly stumbling into commercialization. They knew the "non-profit" label carried moral weight, knew Musk's early funding was based on trust, and knew that pivoting to another structure months later while claiming steadfast non-profit commitment would seem dishonest.
Knowing didn't mean stopping.
During the trial, Brockman disclosed that his equity stake in OpenAI was valued at nearly $30 billion.

While it's not cash or fully realized wealth—it's equity value dependent on the company's prospects and deal structure—the symbolic weight is enough. A person who once worried about moral boundaries in a private diary later sat in court, questioned about holding nearly $30 billion in equity from the same company. Public mission and private wealth were laid on the same table.
Brockman is like many key figures in successful organizations: smart, dedicated, capable, possessing a sense of shame, but also able to convince themselves, step by step, to keep moving forward.
This is the most complex part of OpenAI. It wasn't a group of bad guys plotting to destroy an ideal. It was more like a group of smart people finding justifiable reasons at every juncture to proceed, eventually carrying the initial promise into a machine they themselves couldn't fully control.
And at the center of this machine was Altman.
Altman's Debt of Trust
Sam Altman's trial wasn't just about whether any specific statement was true or false. Musk's side was fundamentally attacking his legitimacy to lead.
In their closing arguments, Musk's lawyer Steven Molo placed Altman's integrity issues at the core. He told the jury that five individuals who worked closely with Altman for years—Musk, Sutskever, Murati, Toner, McCauley—had all called him a "liar."
These five names are more important than the accusation itself.
Musk is an adversary, arguably biased. But Sutskever was OpenAI's co-founder and former Chief Scientist; Murati was CTO and briefly interim CEO in 2023; Toner and McCauley were former board members. They were people within OpenAI's internal power structure.
We cannot simply label Altman as a good or bad person.
The internal sentiment towards Altman at OpenAI is clearly complex. He propelled the institution to the center of the world, yet made some core figures uneasy. He possesses immense organizational, fundraising, media, and political acumen, which brought the company to its current position.
When the board ousted Altman in 2023, OpenAI's official reason was that his communication with the board was not "consistently candid." Days later, Altman returned. In 2024, OpenAI released a summary of the WilmerHale investigation, acknowledging a breakdown of trust between the former board and Altman, but also finding that the board acted too hastily, without notifying key stakeholders beforehand or conducting a full investigation, nor giving Altman a chance to respond.
These connected stories constitute Altman's true debt of trust.
He is not a traditional hero. He has the face of a Silicon Valley new-money figure: capable of articulating a mission, raising funds, organizing talent, managing media, negotiating with big companies, and turning a lab into a world-class corporation.
The more capable he is, the bigger the problem: If a company relies on his personal credit to assure the world "we want to benefit all of humanity," then his trustworthiness ceases to be a private character issue and becomes a matter of public governance.
Altman fought back in court, too. He claimed Musk repeatedly tried to have Tesla absorb OpenAI, which went against OpenAI's mission. He also argued that OpenAI had, in fact, created immense charitable value.
This highlights OpenAI's dilemma. It can claim it is still controlled by the non-profit, or that commercialization provides greater value for the non-profit. But for the average person, it's hard not to ask: If a public mission is to be safeguarded by a massively valuable company and a powerful CEO, is it a mission, or is it a loan of trust?
In 2023, the board tried to call in that loan. It failed.
Mission Loses to Reality
OpenAI's board isn't entirely powerless.
On paper, the non-profit board holds mission oversight. When OpenAI LP was established in 2019, OpenAI explained it as a "capped-profit" structure, where employee and investor returns were capped, with excess profits returning to the non-profit, which still held overall control. This sounded like a compromise—allowing fundraising without fully relinquishing the mission.
The problem was that reality developed much faster than the charter.
After 2019, OpenAI's binding with Microsoft deepened. Microsoft provided capital, cloud services, and supercomputing, gaining commercialization rights. Court documents show that a large portion of OpenAI's IP and employees were transferred into the for-profit entity. By the ChatGPT era, OpenAI was no longer just a research institution but a commercial system connecting users, clients, developers, cloud resources, investors, and global competition.
Such a system doesn't stop at the push of a button.
Microsoft CEO Satya Nadella was asked in court about Microsoft's $13 billion investment in OpenAI and the potential ~$92 billion return if successful. His response was essentially that if the pie grows, the non-profit also benefits.

This logic is classic: commercialization isn't a departure from the mission, but a way to expand the mission's funding sources.
Yet, within the same testimony, text messages between Nadella and Altman regarding the launch of ChatGPT's paid version were also mentioned. Nadella asked when the paid version would launch. Altman cited insufficient computing power and subpar user experience. But Nadella was impatient, saying sooner was better.
Once OpenAI was tethered to Microsoft, product timelines, customer commitments, computing constraints, and commercial returns became intertwined. The board could discuss the mission, but Microsoft had to ensure customer experience. The board could worry about safety, but users and businesses were already using the product. The board could fire the CEO, but employees, investors, partners, and public opinion would immediately push back.
Nadella's view on the 2023 board crisis was also significant. He said he wasn't given a clear reason for Altman's firing and criticized the board's handling as "amateurish." More critically, he was prepared to have Altman and other employees come to Microsoft if they couldn't return to OpenAI.
That's the reality. The non-profit board seemed to hold the steering wheel, but the engine, accelerator, fuel, and passengers were no longer solely under its control. When an AI company is tied to massive valuations, cloud providers, enterprise clients, employee stock options, and global users, the board representing the mission can't truly slam on the brakes.
The bigger the AGI narrative, the larger the computing bill. The larger the computing bill, the greater the need for cloud giants. The greater the need for cloud giants, the less the mission can be protected by a charter alone.
In the AI era, computing power isn't a backend resource. Computing power is power itself. Whoever provides the computing power participates in defining how fast a company can go, where it can go, and whom it serves. Whoever can bear the cost of a failed training run can claim returns on success. Whoever ensures continuous enterprise contracts has more say in a crisis than the board.
This trial allows us to see the whole picture clearly. It tells us that no single person destroyed the ideal. An ideal, without a sufficiently robust institutional body, will inevitably grow a skeleton of reality over time.
That skeleton isn't necessarily evil, but it no longer answers to the ideal alone.