Is Musk Really the Victim?
- Core Argument: As of May 2026, the legal dispute between OpenAI and Elon Musk has laid bare this AI giant's complex journey from non-profit idealism to commercial pragmatism. Evidence from the trial reveals that internal conflicts existed as early as 2017, with the non-profit mission gradually eroded by the pursuit of commercial control, computational power, and massive capital. The promise to "benefit all of humanity" lacked institutional safeguards in practice.
- Key Elements:
- In 2017, OpenAI internally recognized the difficulty of sustaining AGI research under a non-profit structure and began discussing a for-profit model, marking the early emergence of cracks in its organizational structure.
- The trial disclosed Greg Brockman's private diary entries concerning his goals for personal wealth and anxiety over the ethical boundaries of being a "non-profit." His estimated equity value in OpenAI is close to $30 billion.
- Elon Musk is portrayed as someone who cares about AI risks but also craves control. He once proposed that Tesla absorb OpenAI, and the core of the legal battle revolves around a conflict of mission and struggle for control.
- Sam Altman's credibility is under scrutiny. Several former colleagues, including Ilya Sutskever and Mira Murati, have called him a "liar" in court, undermining his authority as the guardian of the mission.
- Microsoft's deep integration, marked by a $13 billion investment and control over computing resources, has made it difficult for the non-profit board to exercise independent oversight following the board crisis in 2023, leaving the mission submerged by commercial realities.
Original author: Sleepy
In May 2026, inside the Oakland Federal Court, the filters surrounding OpenAI were peeled away layer by layer.
What was presented to the jury was a chaotic Rashomon effect:
Greg Brockman's private diary, weaving together anxiety and calculation; Elon Musk's uncompromising grip on power; Sam Altman's honesty, skirting the edge of propriety; Microsoft's vast shadow cast over computing power and capital; and the breathtaking, abruptly concluded boardroom rebellion at the end of 2023.
Amidst this mess, there was one issue that sounded grand but became exceptionally specific in court: Did OpenAI's original promise to "benefit all of humanity" still hold true?
As of May 15, 2026, the trial has not yet reached a final verdict, and the jury's advisory opinion remains pending. But one thing has already happened: OpenAI has been dragged out of mythology and back down to earth.
In the past few years, OpenAI has often been written as a story about the future. ChatGPT went viral, Altman traveled the world, and large language models embedded themselves into offices, schools, phones, and corporate workflows. This is a company born with a quasi-religious sense of mission, speaking of humanity's destiny, the awakening of intelligence, the boundaries of safety, and the dawn of tomorrow—like a lighthouse built ahead of time for humanity.
But the court doesn't care about that. The court asks about facts.
"All of Humanity" Takes the Witness Stand
In 2015, when OpenAI was born, it was still pure.
It stated it was a non-profit AI research company, aiming to maximize the benefit of digital intelligence for all of humanity, free from the constraints of financial return pressure.
Altman and Musk were co-chairs, Brockman was the CTO, and Ilya Sutskever was the research director. At that time, OpenAI seemed to preserve the last bit of idealism from Silicon Valley's golden age—the smartest people weren't serving a particular company, but safeguarding humanity's future.

A decade later, that promise was brought into the courtroom.
Musk's side argues that Altman, Brockman, and OpenAI used the non-profit mission to secure his funding and trust, only to later shift to a for-profit structure, benefiting themselves and Microsoft.
OpenAI's side argues that Musk's money was a donation without specific conditions; that he knew early on a for-profit structure was under discussion, and his real grievance was not getting control; and that his current lawsuit stems from regret over leaving and from the fact that his company xAI has become a competitor to OpenAI.
The rhetoric from both sides is ugly.
Musk positions himself as the guardian of the mission. OpenAI positions him as a founder who lost control. One says, "You stole a charity." The other says, "You just couldn't control it." Ultimately, the most awkward thing isn't which side tells a better story, but that the repeatedly invoked "all of humanity" never actually had a seat at the table.
The term "all of humanity" appears in founding announcements, charters, speeches, and media reports, occupying the moral high ground.
But in court, it was broken down into pieces of evidence: Did Brockman's diary represent true intent? What did the 2017 emails mean? What exactly did the 2019 OpenAI LP transfer? Did Microsoft's cloud services and money change the company's direction? Can Altman's integrity issues sustain the company's claim of "trust us"?
The more an AI company claims to represent humanity, the more specific the questions it should face: Who exactly is this "humanity"? Who signs for these people? Who can remove you? Who can audit the books? Who can say no?
The court didn't answer these questions for the public, but it forced them out into the open.
Thus, OpenAI's story no longer looks like the growth history of a future company, but more like a settling of old scores. Once the books are opened, people realize the cracks didn't appear only after ChatGPT went viral.
The Cracks of 2017
OpenAI didn't change overnight.
If you only look at the story from ChatGPT's launch, you'd mistakenly believe that OpenAI was driven by money after its success, like many companies that first champion ideals and then focus on business.
But the trial rewound the clock back to 2017. At that time, OpenAI didn't have today's fame, and AGI wasn't yet a buzzword on everyone's lips. However, the founding team had already encountered a problem: if they truly wanted to build Artificial General Intelligence, donations and enthusiasm alone wouldn't be nearly enough.
This is the most difficult moment for Silicon Valley idealism. The bigger the ideal, the bigger the bill. The bigger the bill, the harder it is for the organization to stay clean. The grand visions of humanity spoken on stage ultimately come down to chips, servers, engineer salaries, cloud resources, and long-term capital. Without these, AGI is just a wish; with them, non-profit status becomes unsustainable.
In 2017, OpenAI was already internally discussing various paths: a for-profit affiliate, a B-corp, partnerships with existing companies, or attaching itself to Tesla. Musk proposed letting OpenAI rely on Tesla for funding. OpenAI's side countered that Musk wasn't simply against commercialization; control was his non-negotiable demand.
That year also featured a scene worth remembering: Dota.
After OpenAI's AI defeated top human players in Dota 1v1, the team for the first time strongly realized this technology might truly have immense potential. The trial mentioned a discussion held at Musk's San Francisco house, later referred to as the "haunted mansion meeting," where they celebrated the technical breakthrough and also discussed whether OpenAI should move towards for-profit status.

Many companies start reinterpreting themselves after a product succeeds. OpenAI was earlier. Before it became the giant it is today, the founders already knew that a non-profit structure couldn't sustain the AGI narrative. From the very beginning, OpenAI's ideals required a heavier machine to support them.
Thus, an organization seemingly about scientific safety quickly entered into negotiations over control.
Who would hold the steering wheel? Musk or Altman? The non-profit board or future investors? Or the never-truly-present "all of humanity"?
Looking at Musk now, he was undoubtedly an early key funder and did participate in establishing OpenAI's non-profit narrative. But he was also one of the first in this story to see the immense power AI could bring. And after seeing it, he wanted to hold onto it tightly.
Musk's Steering Wheel
In the trial, Musk repeatedly emphasized one thing: OpenAI was stolen.
This phrasing is powerful. It compresses a complex organizational shift into a simple, understandable statement. A charity, meant to serve humanity, turned into a huge commercial machine. It sounds like embezzlement and a moral betrayal.
But the courtroom story is not that simple.
OpenAI's lawyers focused their cross-examination of Musk on dismantling his pure victim image. They presented emails and documents, questioning whether he knew early on that OpenAI might need a for-profit structure, and whether he tried to absorb OpenAI into Tesla or gain dominant control in other ways.
Musk disliked being deconstructed this way. He said in court that the opposing counsel's questions were trying to "trick me." The judge repeatedly asked him to answer directly. When he tried to steer the conversation towards AI extinction risks, the judge reminded him that the case wouldn't delve too much into extinction.
These scenes are very revealing of Musk.
He is accustomed to grand narratives. Humanity's destiny, AI risk, Mars, free speech, civilization's survival—these are his favorite topics. But the court demanded answers to smaller, sharper questions: When did you know? Did you agree? Did you want control? Was your money for OpenAI a donation or an investment?
The contradictions within Musk are precisely the contradictions of the OpenAI story. He may genuinely fear AI running out of control, and he may genuinely believe OpenAI has betrayed its mission. But this doesn't prevent him from also wanting the company to operate according to his will.
The more a person believes they are saving humanity, the more stubbornly they tend to think they should hold the steering wheel.
This isn't just Musk's problem. It's the undercurrent of many grand Silicon Valley narratives. They like to equate personal will with a human mission, control with responsibility, and organizational power with the needs of the future. Musk just displays it more outwardly, more intensely, and more visibly.
So, in this case, Musk is not just the accuser; he is also the evidence itself.
Brockman's Diary
Greg Brockman wasn't initially the most eye-catching figure in this drama.
Musk is too dramatic, Altman is too central, Sutskever is too tragic, Microsoft is too big. Brockman is in the middle—an early core founder of OpenAI and a key player in the company's practical operations. But this trial pushed him into the spotlight because his private diary became evidence.
During the second week of the trial, Brockman was repeatedly questioned about his diary, emails, and text messages. Musk's side used these materials to prove he and Altman had self-serving motives from early on. OpenAI's side said Musk was taking things out of context.
The diary contained wealth goals, anxieties about the company's revenue path, lines like "making the billions." More strikingly, it included self-reminders about the moral-bankruptcy risk of "stealing" the non-profit from Musk. Musk's lawyers seized on these entries repeatedly. Brockman denied deceiving Musk, adding that these private writings were not meeting minutes but stream-of-consciousness personal musings.
A diary is not a verdict. It cannot directly prove they were committing fraud. It might contain rough thoughts written down when a person is tired, anxious, or self-debating. Every writer knows that private notes do not equal a final stance, let alone the complete truth.
However, the real importance of Brockman's diary isn't what crime it proves, but that it shows they knew where the boundaries were. The early core figures at OpenAI were not completely oblivious to their path towards commercialization. They knew the "non-profit" shell carried moral weight. They knew Musk's early funding came with a trust relationship. They knew that switching structures just months later while still claiming firm commitment to non-profit would seem dishonest.
Knowing doesn't mean stopping.
During the trial, Brockman disclosed that the value of his OpenAI equity holdings was close to $30 billion.

This figure isn't cash, or wealth already in hand; it is a paper valuation of his equity, still dependent on the company's prospects and on future transaction structures. But its symbolic weight is sufficient. A man who once worried about moral boundaries in his private diary later sat in court being questioned about OpenAI equity holdings nearing $30 billion. Public mission and private wealth were placed on the same table at that moment.
Brockman is like many key figures in excellent organizations: smart, dedicated, capable, possessing a sense of shame, and also able to convince himself step by step to keep moving forward.
This is where OpenAI's complexity lies. It wasn't a group of bad guys conspiring to destroy an ideal. It's more like a group of smart people finding reasons to proceed at every node, eventually carrying the initial promise into a machine they themselves couldn't fully control.
And at the center of this machine is Altman.