Why Do So Many People in the US Dislike Sam Altman?
- Core Perspective: The Musk v. OpenAI case has gone to trial in federal court in Oakland, California, with the core dispute being whether OpenAI violated the non-profit promises it made when adopting its "capped-profit" structure in 2019. The case concerns not only $134 billion in damages but also tests whether a Silicon Valley startup can legitimately use a "non-profit" narrative as a tool and then convert itself into a for-profit entity within a decade.
- Key Elements:
  - The case has narrowed to two counts: unjust enrichment and breach of charitable trust. The remaining 24 counts have been dismissed or withdrawn, centering the case on the argument that "OpenAI once promised to be perpetually non-profit, but now it is not."
  - On the day of jury selection, OpenAI announced a new agreement with Microsoft, canceling Microsoft's exclusive license to OpenAI's intellectual property and removing the last lock in the 2019 "self-restraint checklist."
  - In 2019, OpenAI set three locks: a profit cap, an AGI trigger clause (terminating Microsoft's commercial license upon AGI), and Microsoft's exclusive license. All have now been dismantled, completing the transition to an unrestricted for-profit model.
  - The jury participates only in the first phase, liability determination (concluding by mid-May), and in an advisory capacity; the final verdict rests with the judge. Musk's goal is less about damages than winning a "narrative war" to prove OpenAI systematically dismantled its promises.
  - OpenAI's strategy is to prove that Musk's lawsuit stems from competitive jealousy rather than any breach of trust. It plans to have Musk testify under oath during the trial, portraying him as the "xAI founder who lost to OpenAI."
  - Criticism of Altman comes from three groups: the old board (opaque communication and concealed financial interests), the safety faction (safety culture subordinated to product), and Silicon Valley's "contractualist" faction (early donors and those who believed in the non-profit mission), who see his "for the mission" justifications for dismantling each lock as a betrayal of commitments.
  - The case's outcome will shape the industry: if Musk wins, early promises gain legal weight; if OpenAI wins, it implicitly endorses "non-profit" status as a cheap narrative tool that can later be converted to for-profit.
The jury took their seats in Courtroom 9 of the federal courthouse in Oakland, California, yesterday. Nine individuals empaneled as an "advisory jury" will observe a trial expected to last four weeks, ultimately delivering a recommendation to Judge Rogers. Today, Tuesday, opening statements are about to begin.
On the same day the jury selection took place, OpenAI announced a newly revised agreement with Microsoft. This agreement effectively eliminated one thing: Microsoft's exclusive license to OpenAI's intellectual property is gone. This was precisely the final lock OpenAI placed on itself when it transitioned to a "capped-profit" structure in 2019.
What Exactly is Musk Suing Over?
A case outline compiled from Reuters reports and CNBC's trial diary in the two weeks before trial shows how the suit narrowed. When Elon Musk initially filed the lawsuit in 2024, he listed 26 counts, ranging from securities fraud and RICO (Racketeer Influenced and Corrupt Organizations Act) violations to antitrust claims. As the trial commences today, only two counts remain: unjust enrichment and breach of charitable trust.
The remaining 24 counts were either dismissed by the judge during the motion phase or voluntarily withdrawn by Musk. Days before the trial, he proactively dropped the fraud-related allegations, focusing the case on its simplest core assertion: OpenAI promised it would always be a non-profit, and now it isn't.
For this single assertion, Musk is seeking damages of up to $134 billion. According to his complaint, any compensation would be returned to OpenAI's non-profit arm, but he demands the removal of Sam Altman and Greg Brockman and the reversal of the entire for-profit conversion. This is the true core of the lawsuit. The prize isn't stock allocation. It's determining who the OpenAI entity ultimately belongs to.
Judge Gonzalez Rogers has divided the trial into two phases. The first phase focuses on determining liability and is expected to conclude by mid-May. If liability is established, the second phase will address damages. The jury participates only in the first phase and serves in an advisory capacity. The final verdict rests with the judge. This means for Musk, winning the "narrative war" is more critical than winning "damages." He needs to convince the jury that "this company made promises to its donors back then and then systematically dismantled those promises." If these nine people agree, the judge will piece together the remaining picture for him.
OpenAI's strategy is nearly a mirror image. They aim to convince the jury that Musk's true motive for the lawsuit is competitive jealousy, unrelated to any breach of trust. OpenAI's official account fired the first shot on the day of jury selection: "We can't wait to present our evidence in court. The truth and the law are on our side. This lawsuit has always been a baseless, jealous competitive attack... We finally have the opportunity to depose Elon Musk under oath in front of a California jury."

Note the phrase "depose Elon Musk under oath." This is a strategy. What OpenAI truly wants is to portray Musk, in the court of public opinion on X, as the "xAI founder who lost to OpenAI." Convincing the judge is secondary. The goal is to ensure the average California resident on the jury enters the courtroom with this filter.
How Were OpenAI's "Locks" Dismantled?
To understand why Musk is so furious, one must first understand the three locks OpenAI placed on itself in 2019, each with a clear design intention.

You'll notice a pattern. In 2019, OpenAI was proving to donors that "even if we make money, the amount is limited and must stop at a certain point." On April 27, 2026, OpenAI is proving to investors that "we have no brakes."
The explanation for the profit cap is the most direct. In a 2025 employee letter, Sam Altman wrote, "The 'capped-profit' structure makes sense in a world with only one AGI company but is no longer applicable when multiple companies are competing." In plain English: There are competitors now, so I need to be able to earn more.
The dismantling of the AGI trigger clause is the most subtle. Originally, "achieving AGI terminates Microsoft's commercial license" meant AGI is a public good for humanity, and OpenAI wouldn't privatize it. After the revision, an "independent expert panel" oversees the definition of AGI. Microsoft's license extends to 2032, explicitly "covering post-AGI models," and Microsoft is permitted to independently pursue AGI. This is a version where even the key to "defining who achieved AGI" has had its lock cylinder changed.
The final lock was the exclusive license. Its dismantling occurred just as Musk's jury was being seated. Decoupling Microsoft's rights from OpenAI's technical progress means that even if OpenAI announced tomorrow that it had achieved AGI, no change in commercial terms would be triggered.
Musk's side will argue this was a deliberate dismantling of protective mechanisms. OpenAI's side will argue it was a necessary adjustment for a competitive environment. But there is one thing neither side will dispute: that "self-restraint checklist" from 2019 no longer exists today.
"Scam Altman": Why Do So Many People Dislike Altman?
On the day of jury selection, X was far more active than the courtroom. Two hours after the OpenAI official account opened fire, Musk fired back with seven consecutive tweets. Fast pace, heavy words, dense rhythm. A classic Musk rapid-fire mode. He gave Altman a nickname: Scam Altman.
He also retweeted a video clip of former OpenAI board member Helen Toner, where she stated verbatim in a podcast, "Sam is a liar."

"Sam is a liar." This isn't a phrase Musk originally coined. Former OpenAI CTO Mira Murati said it when she left. Ilya Sutskever said it during the "failed coup" to unseat Altman. Jan Leike said it publicly when he resigned along with the entire superalignment team.
There are actually three groups of people who dislike Sam Altman, each for different reasons.
The first group is the old OpenAI board. Their defining moment was the five-day firing saga in November 2023. The board's stated reason was that Altman was "not consistently candid in his communications with the board."
What exactly did they catch him on? In May 2024, Helen Toner publicly stated that the board learned from Twitter about the launch of a product that would reshape the global AI industry. She also claimed Altman concealed his ownership of the OpenAI Startup Fund, repeatedly stating in public that he had "no financial interest in the company" until he was forced to admit otherwise in April 2024.
He also allegedly provided inaccurate information to the board regarding safety processes on multiple occasions. Two senior executives reported Altman's "psychological abuse" to the board, providing evidence of "lying and manipulation." After Toner published a research paper OpenAI disliked, Altman allegedly tried to push her off the board.

The second group is the old OpenAI safety faction.
In May 2024, OpenAI's "superalignment team" practically disbanded overnight. The leader of the exodus was Jan Leike, one of OpenAI's most senior AI safety researchers. His resignation letter on X was one of the sharpest departure notices in the English AI community that year, stating that "safety culture and processes have taken a backseat to shiny products."
He was followed by Ilya Sutskever, OpenAI co-founder, chief scientist, and a key instigator of the failed coup. Then, CTO Mira Murati (who temporarily took over the company after Altman was fired), Chief Research Officer Bob McGrew, and VP of Research Barret Zoph all resigned within the same week. The "non-disparagement agreement" scandal emerged soon after, where departing employees were allegedly required to sign such agreements or forfeit equity.

The third group is the "contractualist" faction of old Silicon Valley. This group is the hardest to define and also the largest.
It includes early donors like Musk from 2015, early OpenAI employees who genuinely believed in the "non-profit mission," many angel investors who bet on early-stage startups in Silicon Valley, and a significant number of neutral observers who considered OpenAI "a collective asset of humanity."
What unites this group is that they all paid a non-monetary price for OpenAI's promises: reputation, time, trust, social capital. What they find hardest to forgive Altman for is very specific: every time OpenAI dismantled one of its "locks," Altman said, "This is for the mission."
When the profit cap was removed, he said, "This is to allow OpenAI to continue investing in AGI research." When the AGI trigger clause was rewritten, he said, "This is to allow OpenAI to fulfill its mission even after achieving AGI." When the Microsoft exclusivity clause was removed, he said, "This is to allow OpenAI to move towards a broader collaborative ecosystem."
This is why a portion of Silicon Valley is reluctantly siding with Musk in this lawsuit.
The Weight of a Promise in Silicon Valley: The Verdict in Four Weeks
After laying this all out, you can probably see clearly now. This isn't a fight over money.
Money is OpenAI's concern. In 2026, Altman is the CEO of a private AI company valued at over $500 billion; he doesn't lack funds. Musk, with xAI in 2026, has already reached the Grok 5 era, with Anthropic as his target to chase and OpenAI as his target to surpass; he certainly doesn't lack money either.
They are fighting over something that only a few long-time Silicon Valley participants truly care about: Can a non-profit organization that raises funds from society, accumulates moral capital, recruits talent, and secures regulatory exemptions in the name of "humanity's collective benefit" quietly convert itself into an unrestricted for-profit a decade later, at no legal cost? In four weeks, the verdict will give Silicon Valley its answer.