Original author: 0xAlpha
Original editor: GaryMa Wu talks about blockchain
Recently, we all watched "The AI Lord of the Rings," a three-part drama produced by Silicon Valley's big VCs and tech giants with more than $10 billion invested, comprising three episodes: "The Fellowship of the Ring," "The Two Towers," and "The Return of the King." Many people applauded Sam Altman's return to the throne, and some even compared it to Steve Jobs' return to Apple.
However, the two are simply not comparable. "The AI Lord of the Rings" is a completely different story, one about a battle between two paths: to pursue profit, or not to pursue profit? That is the question!
Let's revisit the beginning of The Lord of the Rings. When Gandalf sees the Ring at Bilbo's house, he quickly realizes that such a powerful object cannot be handled by ordinary people. Only someone innocent and unworldly, like Frodo, could carry it. That's why Frodo is the heart of the team - he's the only one who can carry such a powerful thing without being consumed by it. Not Gandalf, not Aragorn, not Legolas, not Gimli - only Frodo. The key to the entire Lord of the Rings story lies in Frodo's unique nature.
Note: Sam Altman is the CEO of OpenAI. Ilya Sutskever is one of OpenAI's co-founders (he differed with Sam Altman over OpenAI's choice of path and was eventually marginalized). Greg Brockman is OpenAI's chief technology officer. Reid Hoffman is a well-known entrepreneur and venture capitalist and a co-founder of LinkedIn. Jessica Livingston is one of the founding partners of the venture capital firm Y Combinator. Peter Thiel is a well-known entrepreneur, venture capitalist, and one of the co-founders of PayPal.
Now, switch back to the beginning of "The AI Lord of the Rings." In 2015, Sam Altman, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Elon Musk, and a number of technology companies announced the establishment of OpenAI and committed to inject more than $1 billion into the venture. This was a group of some of the smartest brains in the world, almost as wise as Gandalf. They, too, knew they were building something so powerful that, like the One Ring, it should not be owned and controlled by anyone pursuing their own interests. It had to be carried by someone selfless, like Frodo. So instead of launching a for-profit company, they established OpenAI as a non-profit research organization.
The idea that such a powerful thing should not be controlled by a profit-seeking company may not just have been the consensus of OpenAI's co-founders at its founding - it is likely the very reason they got together to form OpenAI in the first place. Even before OpenAI was founded, Google had already demonstrated the potential to wield this superpower. OpenAI, it seems, was a fellowship formed by these visionary protectors of humanity to fight the AI monster that Google, a profit-seeking company, was turning into. Ilya's belief in this philosophy may have been what persuaded him to leave Google to lead OpenAI's research, because from any other perspective, Ilya's move made no sense. Back in 2015, no one offered a better AI development platform than Google. Although the founders of OpenAI were all Silicon Valley tycoons, none of them were AI practitioners (they don't code at all). Not to mention the financial disadvantage: OpenAI was clearly not as well-funded as Google. The founders pledged $1 billion, but only about 10% came through ($100 million from Elon Musk and roughly $30 million from other donors). From a personal financial perspective, a nonprofit could not have offered Ilya better compensation than working at Google. The only thing that could have convinced Ilya to leave Google to lead OpenAI was this idea. Ilya's philosophical leanings are not as well known to the public as those of his doctoral advisor, Geoffrey Hinton, who moved from the U.S. to Canada in the Reagan era out of disillusionment with its politics and dissatisfaction with military funding of AI, and who left Google in 2023.
In short, the founders want OpenAI to be their Frodo, carrying the Lord of the Rings for them.
But things are much easier in novels and movies. There, the solution is simple: Tolkien simply created the character of Frodo, a selfless fellow able to resist the temptation of the Ring, while the Fellowship protected him from physical attacks.
To make Frodo's character more believable and natural, Tolkien even created a race of innocent, kind, and selfless people - the Hobbits. As the quintessentially upright, kind-hearted hobbit, Frodo was the natural choice, able to resist temptations that even the wise Gandalf could not. If Frodo's nature is attributed to hobbit racial characteristics, then Tolkien's solution to the central problem of The Lord of the Rings is inherently racist, pinning humanity's hopes on the noble character of a particular race. As a non-racist, while I can enjoy superheroes (or races of superheroes) solving problems in novels or movies, I can't be so naive as to think the real world is that simple. In the real world, I don't believe in this solution.
The real world is far more complicated. Take OpenAI: most of the models it builds (especially the GPT series) are compute-hungry monsters running on power-hungry chips (mainly GPUs). In a capitalist world, this means OpenAI desperately needs capital. Without the blessing of capital, OpenAI's models could not have developed into what they are today. In this sense, Sam Altman, as the company's resource hub, is a key figure. Thanks to Sam's Silicon Valley connections, OpenAI received strong support from investors and hardware vendors.
The resources that flow into OpenAI to power its models come for a reason - profit. Wait, isn't OpenAI a non-profit organization? Technically yes, but something has changed under the hood. While maintaining its nominal non-profit structure, OpenAI has been transitioning into something closer to a for-profit entity. The turning point came in 2019 with the launch of OpenAI Global LLC, a for-profit subsidiary set up to legally attract venture funding and grant employees equity. This clever move aligns OpenAI's interests with those of its investors (investors this time, not donors, and therefore presumably profit-seeking). Through this alignment, OpenAI can grow with the blessing of capital. OpenAI Global LLC has had a profound impact on OpenAI's growth, most notably by binding it to Microsoft, securing a $1 billion investment (and later billions more), and running OpenAI's computational monsters on Microsoft's Azure supercomputing platform. We all know a successful AI model requires three things: algorithms, data, and computing power. For algorithms, OpenAI has gathered the world's top AI experts (reminder: this also relies on capital - OpenAI's team is not cheap). ChatGPT's data mainly comes from the open Internet, so it is not a bottleneck. Computing power - chips and electricity - is an expensive undertaking. In short, at least half of these three elements are underwritten by OpenAI Global LLC's for-profit structure. Without this constant supply of fuel, OpenAI could not have come this far on donations alone.
But this comes at a cost: it is almost impossible to remain independent while being blessed by capital. What is still called a non-profit framework is now more name than substance.
There are many signs that the fight between Ilya and Sam was about this choice of path: Ilya seems to have been trying to prevent OpenAI from straying from the direction they originally set.
There is also a theory that Sam mishandled the so-called Q* model breakthrough, which triggered the coup. But I don't believe OpenAI's board would fire a highly successful CEO just for mishandling one particular issue. This so-called error around the Q* breakthrough, if it happened at all, was at best a trigger.
The real problem with OpenAI may be that it has strayed from its original path. In 2018, Elon Musk parted ways with Sam for this very reason. The same reason, it seems, led a group of former members to leave OpenAI in 2021 to found Anthropic. And an anonymous letter that Elon Musk posted on Twitter during the drama pointed to the same issue.
To profit or not to profit - this question seems to find its answer at the end of "The Return of the King": with Sam's return and Ilya's exile, the battle over the path is over. OpenAI is destined to become a de facto for-profit company (perhaps still wearing a non-profit shell).
But don't get me wrong. I'm not saying Sam is the bad guy and Ilya the good guy. I'm just pointing out that OpenAI is caught in a dilemma - what could be called the super-company dilemma:
A company run for profit can become dominated by the capital invested in it, which is dangerous, especially when the company is building a super-powerful tool. But if it doesn't operate for profit, it may lack resources, which in a capital-intensive field means it may not be able to build the product at all.
In fact, the creation of any super-powerful tool raises similar concerns about control, not just in the corporate world. Take the recently released movie Oppenheimer. When the atomic bomb was successfully detonated, Oppenheimer felt more fear than joy. Scientists at the time hoped to establish a supranational organization to hold a monopoly on nuclear power. The idea is similar to what OpenAI's founders were thinking: something as super-powerful as the atomic bomb should not be in the hands of a single organization, not even the U.S. government. This was not just an idea; it was put into action. Theodore Hall, a physicist on the Manhattan Project who leaked key details of the atomic bomb's design to the Soviet Union, acknowledged in a 1997 statement that a U.S. monopoly on nuclear weapons was dangerous and should be avoided. In other words, Theodore Hall helped decentralize nuclear bomb technology. Decentralizing nuclear power by leaking secrets to the Soviet Union was obviously a controversial approach (the Rosenbergs were even executed in the electric chair for leaking, despite evidence suggesting they had been wronged), but it reflected the consensus of the scientists of the time (including Oppenheimer, father of the atomic bomb): such a super-powerful thing should not be monopolized! But I'm not going to get into how to deal with something super powerful, because that is too broad a topic. Let's refocus on the issue of super-powerful tools controlled by profit-seeking companies.
So far we still haven't mentioned the Vitalik of the article's title. What does Vitalik have to do with OpenAI or The Lord of the Rings?
This is because Vitalik and the founders of Ethereum were once in a very similar position.
In 2014, when Ethereum's founders launched the project, they were divided over whether the legal entity they were about to establish should be a non-profit organization or a for-profit company. The final choice, like OpenAI's at the time, was a non-profit: the Ethereum Foundation. The differences among Ethereum's founders then were probably greater than those among OpenAI's founders, and led some founders to depart. By contrast, establishing OpenAI as a non-profit was a consensus among all its founders; the differences over OpenAI's path came later.
As an outsider, it's unclear to me whether the disagreements among Ethereum's founders were rooted in an expectation that Ethereum would become a super-powerful Ring that should not be controlled by profit-seeking entities. But it doesn't matter. What matters is this: although Ethereum has grown into a powerful thing, the Ethereum Foundation remains a non-profit to this day and does not face OpenAI's to-profit-or-not-to-profit dilemma. In fact, as of today, it hardly matters whether the Ethereum Foundation is a non-profit or a for-profit company. Perhaps the question was important when Ethereum first launched, but it no longer is. The powerful Ethereum has a life of its own and is not controlled by the Ethereum Foundation. Along the way, the Ethereum Foundation seems to have faced financing problems similar to OpenAI's. For example, I once heard Xiao Feng, one of the Ethereum Foundation's early donors, complain at a seminar that the Foundation was too poor to provide adequate financial support to developers. I don't know how poor the Ethereum Foundation actually is, but this financial limitation does not seem to have hindered the development of the Ethereum ecosystem. By contrast, some well-funded blockchain foundations have failed to build prosperous ecosystems simply by burning money. In this world, capital still matters, but only to a certain extent. In OpenAI's case, however: no capital, no way!
Ethereum and artificial intelligence are of course completely different technologies. But one thing is similar: the development of both depends on massive resource (or capital) investment. (Note: developing the Ethereum codebase itself may not require much capital; here I mean building the entire Ethereum system.) To attract such large capital investment, OpenAI had to deviate from its original intention and quietly transform into a de facto for-profit company. Ethereum, on the other hand, despite attracting large amounts of capital into its system, is not controlled by any profit-seeking organization. To be blessed by capital without being controlled by it - that is almost a miracle!
The reason Vitalik could pull this off is that Vitalik has his Frodo - the blockchain!
Let's classify technologies into two categories: those that produce and those that connect. Artificial intelligence belongs to the former, blockchain to the latter. AI can perform many production activities: ChatGPT generates text, Midjourney generates images, and robots produce cars in Tesla's unmanned factories.
Technically speaking, the blockchain does not produce anything. It is just a state machine, and cannot even initiate any operation on its own. But its importance as a connective technology lies in providing a paradigm for large-scale human collaboration beyond the traditional for-profit company. Essentially, a corporation is a contract among shareholders, creditors, the board of directors, and management. The contract is effective because if one party breaches it, the others can sue in court, and the court's ruling is executed by the state machine (so-called enforcement). So, fundamentally, a corporation is a contractual relationship enforced by the state machine. But now, blockchain brings us a new kind of contract, one enforced by technology instead. While Bitcoin's on-chain contracts remain very purpose-specific (and are intentionally kept that way), Ethereum's smart contracts extend this new way of contracting to full generality. In essence, Ethereum allows humans to collaborate at scale across many domains in a completely new way, unlike the profit-driven companies of the past. DeFi, for example, is a new way for people to collaborate in finance.
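The difference between court-enforced and code-enforced contracts can be illustrated with a toy sketch. This is plain Python standing in for a smart contract, with invented names and rules; a real Ethereum contract would be written in a language like Solidity and enforced by the chain's consensus, not by a single program. The point is only that the rules are checked by code, so a breach is impossible rather than merely punishable:

```python
# Minimal sketch (illustrative, not a real smart-contract API): an escrow
# whose rules are enforced by code rather than by a court.

class Escrow:
    """Buyer's funds can only move along transitions fixed in code."""

    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.state = "AWAITING_DEPOSIT"

    def deposit(self, sender: str) -> None:
        # Only the buyer may fund the escrow, and only once.
        if sender != self.buyer or self.state != "AWAITING_DEPOSIT":
            raise PermissionError("rule violated: invalid deposit")
        self.state = "FUNDED"

    def confirm_delivery(self, sender: str) -> tuple:
        # Only the buyer may release the funds, and only after funding.
        if sender != self.buyer or self.state != "FUNDED":
            raise PermissionError("rule violated: invalid release")
        self.state = "RELEASED"
        return (self.seller, self.amount)  # funds now belong to the seller


escrow = Escrow("alice", "bob", 10)
escrow.deposit("alice")
payout = escrow.confirm_delivery("alice")  # -> ("bob", 10)
```

No party here needs to trust the other or sue anyone: an invalid move (say, "bob" trying to release the funds to himself) simply cannot happen, because the code rejects it. That is the sense in which the contract is "enforced by technology."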
In this sense, blockchain is a "super company"! It is precisely this super-company paradigm that has allowed Ethereum to develop into the prosperous state it is in today without ever facing OpenAI's corporate dilemma. The blockchain is Vitalik's Frodo, carrying the Ring without being consumed by its power.
So now you can see that Frodo has been a key character behind all of these stories:
Gandalf is lucky, because in the fantasy world he has Frodo as a friend.
Vitalik is also lucky because in the new world he has his Frodo - Blockchain.
Ilya and the other OpenAI founders are not so lucky, because they live in an old world where Frodo does not exist.
