Surely everyone is wondering: in the end, is Elon Musk actually a victim or not?
- Core takeaway: As of May 2026, the legal battle between OpenAI and Elon Musk has exposed the AI giant's complicated transformation from non-profit idealism to commercial reality. Evidence presented at trial shows that internal conflicts existed as early as 2017, and that the non-profit mission was gradually eroded by the drive for commercial control, the hunger for computing power, and the need for enormous capital, while the promise to "benefit all of humanity" lacked any institutional guarantee in practice.
- Key points:
- In 2017, OpenAI's leadership already recognized that a non-profit structure could hardly sustain AGI research and development, and began discussing commercialization options, marking the early appearance of cracks in the organization's structure.
- The trial revealed Greg Brockman's private diary, which mentioned wealth goals and anxiety about the moral boundaries of "non-profit"; his equity stake in OpenAI was estimated at nearly $30 billion.
- Elon Musk was portrayed as someone who both cared about AI risk and craved control; he once proposed that Tesla take over OpenAI, and the heart of the legal fight is a disagreement over mission and a struggle for control.
- Sam Altman's honesty was placed at the center of the case; several former colleagues (including Sutskever and Murati) called him a "liar" in court, shaking his standing as guardian of the mission.
- Microsoft's deep entanglement (a $13 billion investment) and its control of computing power left the non-profit board unable to exercise independent oversight after the 2023 board crisis, and the mission was drowned by commercial reality.
Original author: Sleepy
In May 2026, inside the Federal Court in Oakland, OpenAI's carefully constructed image was systematically dismantled.
What unfolded before the jury was a muddled Rashomon: the private diary of Greg Brockman, a mix of anxiety and calculation; Elon Musk's uncompromising grip on power; Sam Altman's integrity issues, constantly skirting the line; Microsoft's massive shadow cast in computing power and capital; and the dramatic yet abruptly concluded boardroom coup at the end of 2023.
Amidst this mess, one question, grand in scope yet painfully specific in court, emerged: When OpenAI claimed it would "benefit all of humanity," did that promise still hold true?
As of May 15, 2026, no final verdict has been reached, with the jury's deliberation still ongoing. But one thing is certain: OpenAI has been dragged back down to earth from the realm of mythology.
In recent years, OpenAI has often been portrayed as a story about the future: the explosion of ChatGPT, Altman's global tours, the integration of large models into offices, schools, phones, and corporate workflows. This was a company born with a quasi-religious sense of higher purpose, speaking of humanity's destiny, the dawn of intelligence, the boundaries of safety, and the promise of tomorrow, like a lighthouse built in advance for humanity's future.
But the court doesn't care about stories. The court deals in facts.
"All of Humanity" Takes the Witness Stand
In 2015, when OpenAI was born, it was unblemished.
It declared itself a non-profit AI research company, aiming to maximize the benefits of digital intelligence for all of humanity, unconstrained by the pressure of financial returns.
Altman and Musk were the co-chairs, Brockman the CTO, and Ilya Sutskever the head of research. OpenAI at that time seemed to embody the last vestiges of idealism from Silicon Valley's golden age—the brightest minds serving not a single company, but safeguarding humanity's future.

A decade later, this promise was brought into the courtroom.
Musk's side argued that Altman, Brockman, and OpenAI used the non-profit mission to obtain his funding and trust, only to later pivot to a for-profit structure, benefiting themselves and Microsoft.
OpenAI's side countered that Musk's money was a donation without specific conditions; he was aware of discussions about a for-profit structure and was simply upset about not gaining control; his current lawsuit stems from regret over leaving and because his company, xAI, is now a competitor to OpenAI.
The arguments from both sides were ugly.
Musk positioned himself as the guardian of the mission. OpenAI painted him as a founder who lost control. One said, "You stole a charity." The other said, "You just couldn't control it." Ultimately, the most awkward part wasn't who told a better story, but the fact that "all of humanity," so frequently invoked, never actually had a seat at the table.
The phrase "all of humanity" appeared in founding announcements, charters, speeches, and media reports, occupying the moral high ground.
But in court, it was dissected into pieces of evidence: Did Brockman's diary reflect true intentions? What did the 2017 emails reveal? What exactly did the 2019 OpenAI LP transfer? Did Microsoft's cloud and money steer the company in a different direction? Could Altman's integrity issues sustain the company's plea to "trust us"?
The more an AI company claims to represent humanity, the more specific the questions it should face: Which humans do you mean? Who signs for them? Who can remove you? Who can audit the books? Who can say no?
The court couldn't answer these questions for the public, but it forced them into the open.
Consequently, the narrative of OpenAI ceased to be a growth story of a futuristic company and became more about settling old scores. With the books laid bare, people realized the cracks didn't just appear after ChatGPT's success.
The Cracks of 2017
OpenAI didn't suddenly change.
If you only look at the story starting from ChatGPT, you might mistakenly think OpenAI was simply driven by money after success, like many companies that first talk about ideals and then focus on business.
But the trial pushed the timeline back to 2017. Back then, OpenAI didn't have today's prominence, and AGI wasn't a buzzword on everyone's lips. Yet the founding team had already encountered a problem: if they truly wanted to build general artificial intelligence, donations and passion alone would never be enough.
This is the most difficult moment for Silicon Valley idealism. The bigger the ideal, the bigger the bill. The bigger the bill, the harder it is for the organization to remain pure. All those speeches on stage about benefiting humanity eventually boil down to chips, servers, engineer salaries, cloud resources, and long-term capital. Without these, AGI remains a wish. With them, the non-profit structure becomes unsustainable.
In 2017, OpenAI was already internally discussing various paths: a for-profit affiliate, a B-corp, partnerships with existing companies, or dependence on Tesla. Musk proposed OpenAI relying on Tesla for funding. OpenAI's side retorted that Musk wasn't simply opposed to for-profit status; control over the company was his underlying demand.
A particularly memorable scene from that year involves Dota.
After OpenAI's AI defeated top human players in a Dota 1v1 match, the team felt a stronger conviction that this technology could become truly massive. The trial mentioned a discussion at Musk's house in San Francisco, later referred to as the "haunted mansion meeting," where they celebrated the technical breakthrough and debated whether OpenAI should become for-profit.

Many companies start reinterpreting their story after product success. OpenAI did it earlier. Before it became the behemoth it is today, the founders already knew that a non-profit structure couldn't sustain the AGI narrative. From the very beginning, OpenAI's idealism required a much heavier machine to fuel it.
Thus, an organization ostensibly about scientific safety quickly entered negotiations about control.
Who would hold the steering wheel? Musk or Altman? The non-profit board or future investors? Or "all of humanity," which never truly made an appearance?
Looking at Musk in this light, he was, of course, an early crucial backer who helped establish OpenAI's non-profit narrative. But he was also one of the first in this story to recognize the immense power AI could bring. And upon recognizing it, he too wanted to hold it tightly.
Musk's Steering Wheel
Throughout the trial, Musk repeatedly emphasized one thing: OpenAI was stolen.
This phrasing is powerful. It compresses a complex organizational shift into a simple story everyone understands. A charity, meant to serve humanity, turned into a massive commercial machine. It sounds like property theft and a moral betrayal.
But the court's story is not that simple.
OpenAI's lawyer focused on dismantling Musk's image as a mere victim during cross-examination. Using emails and documents, the lawyer pressed him on whether he knew OpenAI might need a for-profit structure and whether he had tried to absorb OpenAI into Tesla or otherwise gain control.
Musk disliked being dissected this way. He complained that the questions were an attempt to "trick me." The judge repeatedly asked him to answer directly. When he tried to steer the conversation towards the existential risks of AI, the judge reminded him that the case wouldn't focus much on human extinction.
These moments are very revealing about Musk.
He is used to grand narratives. The fate of humanity, the risks of AI, Mars, free speech, the survival of civilization – these are his favorite topics. But the court demanded answers to smaller, sharper questions: When did you know? Did you agree? Did you want control? Was your money for OpenAI a donation or an investment?
The contradiction within Musk is the contradiction of the OpenAI story itself. He might genuinely fear uncontrolled AI, and he might genuinely believe OpenAI betrayed its mission. But that doesn't negate the fact that he also wanted the company to operate according to his will.
The more a person believes they are saving humanity, the easier it is for them to stubbornly insist they should be the one holding the steering wheel.
This isn't just Musk's problem. It's the underlying color of many grand Silicon Valley narratives. They tend to frame personal will as a human mission, the desire for control as a sense of responsibility, and organizational power as a necessity for the future. Musk just expresses it more outwardly, more intensely, and makes it easier to see.
So, in this case, Musk isn't just the accuser. He is also the evidence.
Brockman's Diary
Greg Brockman wasn't originally the most prominent figure in this drama.
Musk is too dramatic, Altman is too central, Sutskever is too tragic, and Microsoft is too big. Brockman is caught in the middle, an early core founder who also plays a key role in the company's daily operations. But this trial pushed him into the spotlight as his private diary was submitted as evidence.
In the second week of the trial, Brockman was repeatedly questioned about his diary, emails, and text messages. Musk's side used these materials to argue that he and Altman had self-serving motives from the start. OpenAI's side said Musk was taking things out of context.
The diary contained wealth goals. Anxieties about the company's revenue path. Phrases like "making the billions." Even more glaring were self-reminders about not "stealing" the non-profit from Musk, lest they risk moral bankruptcy. Musk's lawyers seized upon these entries. Brockman denied deceiving Musk, arguing that these private writings weren't meeting minutes but stream-of-consciousness personal notes.
A diary is not a verdict. It doesn't directly prove fraud. It might also contain the rough thoughts of a tired, anxious person reasoning with themselves. Any writer knows that private notes don't equal a final position or the complete truth.
However, the crucial point of Brockman's diary isn't what crime it proves. It's that it shows they knew where the boundaries were. The early core of OpenAI wasn't completely unaware or innocent as they moved towards commercialization. They knew the "non-profit" shell carried moral weight. They knew Musk's early funding was based on trust. They knew that pivoting to a different structure in a few months, while still claiming firm dedication to non-profit status, seemed dishonest.
They knew. But they didn't stop.
During the trial, Brockman disclosed that his equity stake in OpenAI was valued at nearly $30 billion.

Of course, this isn't cash or wealth already in hand. It's the equity value based on valuation, dependent on the company's prospects and transaction structure. But the symbolic meaning is sufficient. A man who once worried about moral boundaries in his private diary later sat in court, asked about his nearly $30 billion stake in OpenAI. The public mission and private wealth were laid out on the same table at that moment.
Brockman is like a key figure in many great organizations: smart, dedicated, capable, with a sense of shame, but also capable of gradually convincing himself to move forward.
This is the most complex part of OpenAI. It wasn't a group of villains conspiring to destroy an ideal. It was a group of smart people who, at every juncture, could find a reason to keep going, eventually feeding their initial promise into a machine they themselves couldn't fully control.
And at the center of this machine was Altman.
Altman's Debt of Trust
During this trial, Sam Altman was grilled not just on the truth of specific statements. Musk's side was truly attacking his legitimacy to lead.
In his closing argument, Musk's lawyer Steven Molo placed Altman's integrity issues at the core. He told the jury that five people who had worked closely with Altman for years – Musk, Sutskever, Murati, Toner, and McCauley – had all called him a "liar."
These five names are more important than the accusation itself.
Musk is an opponent and can be seen as having a conflict of interest. But Sutskever is a co-founder and former chief scientist of OpenAI. Murati was the CTO and briefly the interim CEO in 2023. Toner and McCauley are former board members. These are people within OpenAI's internal power structure.
We cannot simply label Altman as a good or bad person.
The feelings towards Altman within OpenAI are clearly complex. He could push the organization to the center of the world stage, yet also made some core figures uneasy. He possesses immense organizational, fundraising, media, and political skills, which is why the company is where it is today.
When the board ousted Altman in 2023, the official reason was that his communication with the board was "not consistently candid." Days later, Altman returned. In 2024, OpenAI released a summary of the WilmerHale investigation, acknowledging a breakdown of trust between the former board and Altman, but also stating that the board acted too hastily, without notifying key stakeholders, conducting a full investigation, or giving Altman a chance to respond.
These stories together form Altman's real debt of trust.
He is not a traditional hero. He has the face of Silicon Valley new money: capable of articulating a mission, raising funds, organizing talent, handling the media, negotiating with big companies, and turning a lab into a world-class firm.
The greater his ability, the bigger the problem: If a company uses his personal credit to guarantee to the world that it will "benefit all of humanity," then his credibility is no longer a private character issue, but a matter of public governance.
Altman fired back in court too. He stated that Musk repeatedly tried to have Tesla absorb OpenAI, which was against OpenAI's mission. He also argued that OpenAI had, in fact, created immense charitable value.
This is the dilemma of OpenAI. It can say it is still controlled by the non-profit corporation, or that commercialization allows the non-profit to have even greater value. But hearing this, an ordinary person can't help but ask: If the public mission relies on a hugely valuable company and a powerful CEO to safeguard it, is it truly a mission, or is it a loan of trust?
In 2023, the board tried to call in this loan. It failed.
Mission Loses to Reality
OpenAI's board isn't completely powerless.
On paper, the non-profit board holds the power to oversee the mission. When OpenAI LP was established in 2019, OpenAI explained it as a "capped-profit" structure, where employee and investor returns were capped, with excess profits flowing back to the non-profit, which remained in overall control. This design sounded like a compromise – allowing fundraising without fully surrendering the mission.
The problem was that reality developed much faster than the charter.
After 2019, the binding between OpenAI and Microsoft deepened. Microsoft provided capital, cloud computing, and supercomputing resources, gaining commercialization rights. Court documents show that a large portion of OpenAI's IP and employees were transferred to the for-profit entity. By the ChatGPT era, OpenAI wasn't just a research institution; it was a commercial system connecting users, customers, developers, cloud resources, investors, and global competition.
Such a system can't be stopped at the push of a button.
Microsoft CEO Satya Nadella was asked in court about Microsoft's $13 billion investment in OpenAI and the potential ~$92 billion return if successful. His answer, in essence, was that if the pie gets bigger, the non-profit also benefits.

This logic is classic: commercialization isn't a departure from the mission; it's a way to expand the mission's funding sources.
But in the same testimony, text messages between Nadella and Altman about the launch of ChatGPT's paid version were also mentioned. Nadella asked when the paid version would launch. Altman said computing power was insufficient, the experience wasn't good enough. But Nadella was eager, saying faster was better.
Once OpenAI and Microsoft were bound together, product pace, customer commitments, computing constraints, and commercial returns became intertwined. The board could discuss the mission, but Microsoft had to ensure customer experience. The board could worry about safety, but users and enterprises were already using it. The board could fire the CEO, but employees, investors, partners, and public opinion would immediately push back.
Nadella's view on the 2023 board crisis was also significant. He stated he wasn't given a clear reason for Altman's firing and criticized the board's handling of it as amateurish. More importantly, he was already prepared to let Altman and other employees come to Microsoft if they couldn't return to OpenAI.
This is the reality. The non-profit board might appear to hold the steering wheel, but the engine, accelerator, fuel, and passengers are no longer solely under its control. When an AI company is tied to massive valuations, a cloud giant, enterprise clients, employee stock options, and global users, a board representing a mission finds it very difficult to truly hit the brakes.
The bigger the AGI narrative, the bigger the computing bill. The bigger the computing bill, the greater the need for a cloud giant. The greater the reliance on a cloud giant, the less likely the mission can stay in the driver's seat.