Original title: My techno-optimism
Original author: vitalik
Original compilation: Luccy, BlockBeats
Editor's note: The impact of technological progress on the future is a perennial topic. Last month, Marc Andreessen's Techno-Optimist Manifesto took a clear stand against the fear of technological progress, and this month's controversy at OpenAI has reignited the debate.
Vitalik Buterin opposes the mentality of prioritizing keeping the world as it is, and argues that technological development requires attention not only to its intensity but also to its direction. In his latest article, he explores the intersection of artificial intelligence and blockchain and proposes the concept of d/acc (decentralized acceleration). He believes that as technology accelerates, the emergence of artificial intelligence may become one of the biggest challenges facing humanity, but he also proposes a more humane and decentralized path of development.
The article covers information defense, social technology, human-machine cooperation, and more, highlighting the importance of actively steering the direction of technological development. Across different philosophical contexts, Vitalik advocates an attitude that is at once positive and cautious, calling on developers to pay more attention to choices and intentions when building technology, so that its development aligns with human values and long-term interests.
Special thanks to Morgan Beller, Juan Benet, Eli Dourado, Sriram Krishnan, Nate Soares, Jaan Tallinn, Vincent Weisser, Balvi volunteers, and others for their feedback and reviews.
Last month, Marc Andreessen published his Techno-Optimist Manifesto, calling for renewed enthusiasm for technology and advocating markets and capitalism as the means to build it and push humanity toward a brighter future. The manifesto explicitly rejects what it calls the stagnation mentality: the fear of technological progress and the prioritizing of keeping the world as it is today. The manifesto received widespread attention, including response articles from Noah Smith, Robin Hanson, and Joshua Gans (more positive), Dave Karpf, Luca Ropek, and Ezra Klein (more negative), and many others. Though not directly tied to the manifesto, James Pethokoukis' Conservative Futurism and Palladium's It's Time to Build for Good explore similar themes. This month, with the OpenAI dispute, we are seeing a similar debate, with much of the discussion centered on the dangers of superintelligent AI and the possibility that OpenAI is moving too fast.
My own techno-optimism is warm, but nuanced. I believe the future will be brighter than the present thanks to radically transformative technologies, and I believe in humans and humanity. I reject the mentality that we should try to keep the world basically the same as it is today, just with less greed and more public health care. However, I believe that not only intensity but also direction matters. There are certain types of technology that much more reliably make the world better, and some types of technology that, if developed, could mitigate the negative impacts of other technologies. The world overinvests in some directions of technological development and underinvests in others. We need to consciously choose the directions we want, because the formula of maximizing profit does not automatically lead to them.
In this article, I’ll explore what techno-optimism means to me. This includes the broader worldview that motivates my work on certain types of blockchain and cryptography applications, on social technology, and on other scientific areas in which I have expressed interest. But different perspectives on this broader question also have important implications for artificial intelligence, as well as many other fields. Rapidly advancing technology is likely to be among the most important social issues of the 21st century, so it is important to think about it carefully.
Technology is amazing, and delaying its development can be extremely costly
In some circles, the benefits of technology are broadly underestimated, and technology is viewed primarily as a source of dystopia and risk. Over the past half century, this perception has often stemmed from environmental concerns, or from fears that the benefits will flow only to the rich, entrenching their power over the poor. Lately, I've also noticed some libertarians worrying about certain technologies leading to the concentration of power. This month, I ran some polls asking the following question: if a technology had to be restricted because it was too dangerous, would people rather see it monopolized or delayed for a decade? To my surprise, across three platforms and three choices of who the monopolist would be, respondents unanimously and strongly chose the delay.
So I sometimes worry that we have overcorrected, and that many people are missing the other side of the debate: the benefits of technology are genuinely enormous; in the areas where we can measure them, the positives far outweigh the negatives; and the cost of even a decade's delay is incalculable.
As a concrete example, let’s look at a life expectancy graph:
What do we see? Over the past century, truly tremendous progress has been made. This applies to the entire world, both historically wealthy and dominant regions and poor and exploited regions.
Some blame technology for creating or exacerbating disasters, such as totalitarianism and war. In fact, we can see on the graph the deaths caused by war: during World War I (1910s) and during World War II (1940s). If you look closely, you can also see non-military disasters such as the Spanish Flu and the Great Leap Forward. But the graph makes one thing clear: Even as horrific as those disasters were, they were overwhelmed by the sheer scale of the improvements made in food, sanitation, medicine, and infrastructure during that century.
This is consistent with significant improvements in our daily lives. Thanks to the Internet, most people around the world now have easy access to information that was inaccessible just two decades ago. The global economy has become more accessible due to improvements in international payments and finance. Global poverty is declining rapidly. With online maps, we no longer have to worry about getting lost in the city, and we now have an easier way to hail a ride if we need to get home quickly. Our possessions have become digital and physical objects have become cheaper, which means we no longer worry as much about physical theft. Online shopping has reduced inequalities in access to goods between major global cities and the rest of the world. In every aspect, automation brings us that eternally underestimated benefit of simply making our lives more convenient.
These improvements, both quantifiable and unquantifiable, are huge. And in the 21st century, it's very likely that even greater improvements are coming soon. Today, ending aging and disease seems utopian. But from the perspective of computers in 1945, the modern era of embedding chips into almost everything once seemed utopian too: even the computers in science fiction movies were usually room-sized. If biotechnology advances over the next 75 years as much as computers have advanced over the past 75, the future may be more impressive than almost anyone expects.
At the same time, some skepticism about progress takes a much darker turn. Even medical textbooks, such as this one from the 1990s (thanks to Emma Szewczak for finding it), sometimes make extreme claims that deny the value of two centuries of medical science, and even argue that saving human lives is not obviously a good thing:
The "limits to growth" thesis, put forward in the 1970s, argued that growing population and industry would eventually exhaust the earth's finite resources; it helped motivate China's one-child policy and large-scale forced sterilization in India. In earlier times, concerns about overpopulation were used to justify mass murder. These ideas, advanced since 1798, have a long track record of being proven wrong.
It is for these reasons that I feel deeply uncomfortable with arguments whose starting point is slowing down technological or human progress. Even slowing down a single sector can be dangerous, given how closely interconnected the fields are. So when I write things like what I will say later in this post, departing from a position of openness to progress in whatever form it takes, I do so with a heavy heart; yet the 21st century is different and unique enough that these nuances are worth considering.
Having said that, on the broader question, and especially as we move beyond "is technology good overall" toward "which specific technologies are good", there is an important nuance to point out: the environment.
The environment, and the importance of the willingness to coordinate
Progress has been made in almost every aspect over the past hundred years, with the exception of climate change:
Even a pessimistic scenario of rising temperatures would fall far short of actual human extinction. But such a scenario could kill more people than a major war, and could severely damage the health and livelihoods of people in the regions already most in trouble. A study by the Swiss Re Institute suggests that the worst-case climate change scenario could reduce the GDP of the world's poorest countries by as much as 25%. The study also noted that life expectancy in rural India could be a decade lower than it otherwise would be, while studies like this one and this one suggest climate change could cause 100 million additional deaths by the end of the century.
These problems are very serious. My optimism about our ability to overcome them rests on two things. First, after decades of hype and wishful thinking, solar power is finally turning a corner, and enabling technologies like batteries are making similar progress. Second, we can look at humanity's track record in solving earlier environmental problems. Take air pollution. Behold, a dystopia from the past: the Great Smog of London, 1952.
What has happened since then? We asked Our World In Data again:
It turns out that 1952 wasn't even the peak of air pollution: in the late 19th century, even higher concentrations of air pollutants were simply accepted as normal. Since then, we have witnessed a century of sustained, rapid decline. I experienced the tail end of this process firsthand when I visited China in 2014. High levels of smog, estimated to shorten life expectancy by more than five years, were normal then, but by 2020 the air often looked as clean as in many Western cities. This isn't our only success story. In many parts of the world, forest area is increasing. The acid rain crisis is improving. The ozone layer has been recovering for decades.
To me, the moral of this story is that, more often than not, version N of our civilization's technology causes a problem, and version N+1 solves it. However, this does not happen automatically; it requires conscious human effort. The ozone layer is recovering because international agreements like the Montreal Protocol made it recover. Air pollution is improving because we are making it improve. Likewise, solar panels did not get massively better because they were a preordained part of the energy tech tree; solar panels got massively better because decades of growing awareness of their importance for addressing climate change inspired engineers to work on the problem, and companies and governments to fund their research. These problems get solved through conscious action, with public discourse and culture shaping the views of governments, scientists, philanthropists, and businesses, not through an unstoppable "techno-capital machine".
Artificial intelligence is fundamentally different from other technologies and requires special care
A lot of the dismissiveness about AI that I see comes from the view that it is "just another technology": something in the same class as social media, encryption, contraception, phones, airplanes, guns, the printing press, and the wheel. These things clearly had a significant impact on society. They were more than isolated improvements to individual well-being: they fundamentally changed culture, altered balances of power, and harmed people who had depended heavily on the previous order. Many people opposed them. And on the whole, the pessimists were invariably proven wrong.
But there is a different way of looking at artificial intelligence: it is a new type of mind that is rapidly gaining in intelligence, and it has a serious chance of surpassing human intellectual capabilities and becoming the new apex species on the planet. The class of things in that category is much smaller: we might plausibly include humans surpassing monkeys, multicellular life surpassing unicellular life, the origin of life itself, and perhaps the Industrial Revolution, in which machines surpassed humans in physical strength. Suddenly, it feels like we are walking on far less familiar ground.
What's important is that there are real risks
One way AI gone wrong could make the world worse is (almost) the worst possible way: it could literally cause the extinction of the human race. This is an extreme claim: as much harm as the worst-case scenarios of climate change, an engineered pandemic, or nuclear war could do, there would still be many islands of civilization left intact to pick up the pieces. But a superintelligent AI that decided to turn against us might well leave no survivors at all, ending humanity permanently. Even Mars may not be safe.
One major reason for concern centers on instrumental convergence: for a very broad class of goals that a superintelligent entity might have, two very natural intermediate steps it could take to better achieve them are (i) consuming resources and (ii) ensuring its own safety. The Earth contains vast resources, and humans pose a predictable threat to such an entity's security. We could try to give the AI an explicit goal of loving and protecting humans, but we do not know how to specify that goal in a way that doesn't completely break down when the AI encounters an unexpected situation. So we have a problem.
Researcher Rob Bensinger made a chart illustrating different people's estimates of the probability that AI will either kill everyone or do something nearly as bad. Many of the positions are rough approximations based on people's public statements, but many others have publicly given precise estimates; quite a few put the probability of annihilation above 25%.
A 2022 survey of machine learning researchers found that, on average, researchers put the probability that AI will literally kill us all at 5-10%. That is roughly the same as the statistical likelihood that you will die of non-biological causes such as injury. It is a very high number.
This is all a speculative hypothesis, and we should be wary of speculative hypotheses that involve complex, multi-step stories. However, these arguments have survived more than a decade of scrutiny and so seem worthy of at least mild concern. And even if you are not worried about literal extinction, there are other reasons to be afraid.
Even if we survive, is a future of super-intelligent AI the world we want to live in?
Much of modern science fiction is dystopian and paints artificial intelligence in a poor light. Even non-science-fiction attempts to map out possible AI futures often give quite unappealing answers. So I asked around: what depictions of a future containing superintelligent AI, science fiction or otherwise, would we actually want to live in? By far the most common answer was Iain Banks' Culture series.
The Culture series depicts a far-future interstellar civilization inhabited by two main kinds of actors: ordinary humans and superintelligent AIs called Minds. Humans have been augmented, but only slightly: although medical technology theoretically allows them to live indefinitely, most choose to live only about 400 years, apparently because they grow bored with life by that point.
On the surface, human life seems good: it is comfortable, health problems are taken care of, entertainment options abound, and the relationship between humans and Minds is positive and synergistic. On closer examination, however, there is a problem: the Minds are completely in charge, and humans' only role in the stories is to act as pawns of the Minds, performing tasks on their behalf.
Quoting from Gavin Leech's "Against the Culture":
The humans are not the protagonists of the story, even when the books seem to have human protagonists doing large, serious things: they are actually agents of the AIs. (Zakalwe is one of the only exceptions, because he can do immoral things the Minds are unwilling to do.) The Minds of the Culture don't need humans, and yet the humans need to be needed. (I suppose only a minority of them need to be needed, or need it enough to give up many comforts; most human life does not operate on that level. It's still a good critique.)
Projects undertaken by humans carry no real risk, because the machines can do almost anything better. What can you do? You can order a Mind not to catch you when you climb a cliff; you can delete your backups to put yourself at real risk. You can also leave the Culture and join some old-fashioned, unfree "strongly evaluative" civilization. Another option is to spread the word about freedom by joining Contact.
I think giving humans even the meaningful roles they have in the Culture series is a stretch; I asked ChatGPT (who else?) why humans are given the roles they play rather than the Minds simply doing everything themselves, and I personally found its answers rather underwhelming. It seems very hard, in a world run by friendly superintelligent AIs, for humans to be anything other than pets.
A world I don’t want to see.
Many other science fiction series depict worlds where superintelligent AIs exist but take orders from (unaugmented) biological human masters. Star Trek is a great example, with its vision of harmony between starships with their AI computers (and Data) and their human operator crews. However, this balance seems deeply unstable. The world of Star Trek may look pleasant in the moment, but it is hard to imagine its vision of human-AI relations as anything but a transitional stage, perhaps lasting a decade, on the way to a world where starships are entirely computer-controlled and there is no more fuss over hallways, artificial gravity, or climate control.
In such a scenario, a human giving orders to a superintelligent machine would be far less intelligent than the machine, and would have access to less information. In a universe with any degree of competition, civilizations where humans take a back seat would outperform civilizations where humans stubbornly insist on control. Moreover, the computers themselves might wrest control. To see why, imagine you are legally the literal slave of an eight-year-old child. If you could talk with the child at length, do you think you could convince the child to sign a piece of paper setting you free? I have not run this experiment, but my instinctive answer is a strong yes. So all in all, humans becoming pets seems like a very hard-to-escape attractor.
The sky is high and the emperor is far away
The Chinese proverb "the sky is high and the emperor is far away" expresses a basic fact about the limits of political centralization. Even in a nominally vast and autocratic empire, and in practice especially in the larger ones, there are practical limits to the leadership's reach and attention. The leadership's ability to carry out its intentions is weakened by its need to rely on local agents to enforce its will, so there is always some degree of practical freedom. Sometimes this has harmful effects: the absence of central power creates space for local warlords to steal and oppress. But if central power goes in a bad direction, the practical limits of attention and distance place practical limits on how bad things can get.
In the age of artificial intelligence, this is no longer true. In the 20th century, modern transportation made distance a much weaker constraint on central power than it had been; the great totalitarian empires of the 1940s were partly a result. In the 21st century, scalable information gathering and automation may mean that attention is no longer a constraint either. The total disappearance of these natural limits on government could have dire consequences.
A decade into the rise of digital authoritarianism, surveillance technology has already given authoritarian governments a powerful new tactic for suppressing opposition: let protests happen, then detect and quietly move against the participants after the fact. More generally, my basic fear is that the same kinds of managerial technologies that let OpenAI serve over 100 million customers with 500 employees will also let a 500-person political elite, or even a 5-person board, maintain an iron grip over an entire country. With modern surveillance to collect information, and modern AI to interpret it, there may be nowhere left to hide.
The situation gets even worse when we consider the consequences of AI in warfare. Below is a translation of the relevant content of a semi-famous 2019 Sohu article on the subject:
"No need for political and ideological work and wartime mobilization" mainly means that the supreme commander of the war only needs to consider the war itself, as if playing chess, without worrying about what the knights and rooks on the board are thinking. War becomes a purely technological contest.
On a deeper level, "political and ideological work and wartime mobilization" implies that whoever starts a war must have a just cause. The importance of justification, a concept that has governed the legitimacy of war in human societies for thousands of years, should not be underestimated. Anyone who wants to start a war must find at least a plausible reason or pretext. You might say this constraint is weak, since historically it has often been little more than an excuse. For example, the real motive behind the Crusades was plunder and territorial expansion, yet they were carried out in the name of God, even when the targets were the devout of Constantinople. Still, even the weakest constraint is a constraint! The mere need for a pretext actually prevents warmongers from pursuing their goals completely unchecked. Even someone as evil as Hitler could not simply launch a war at will; he had to spend years convincing the German people of the need for the master Aryan race to fight for its living space.
Today, the human in the loop is an important check on a dictator's power to start wars or oppress citizens at home. Humans in the loop have prevented nuclear war, kept the Berlin Wall open, and saved lives during atrocities like the Holocaust. If armies were made of robots, this check would disappear entirely. A dictator could get drunk at 10 p.m., get angry at 11 p.m. because someone was mean to them on Twitter, and before midnight a robotic invasion fleet could cross the border and wreak havoc on a neighboring country's civilians and infrastructure.
Unlike previous eras, when there was always some far-off corner where a regime's opponents could regroup, hide, and eventually find a way to make things better, with 21st-century AI a totalitarian regime may be able to maintain enough surveillance and control over the world that it never changes.
d/acc: Defensive (or Decentralized, or Differential) Accelerationism
Over the past few months, the e/acc ("effective accelerationism") movement has gained a lot of momentum. Summarized by "Beff Jezos" here, e/acc is basically a recognition of the truly enormous benefits of technological progress, and a desire to accelerate that trend to reap those benefits sooner.
There are many cases where I find myself sympathetic to the e/acc perspective. There is plenty of evidence that the FDA is far too conservative in delaying or blocking drug approvals, and bioethics in general often seems to operate by the principle that twenty people dying in a botched experiment is a tragedy, but two hundred thousand people dying because a life-saving treatment was delayed is a statistic. The delays in approving COVID tests and vaccines, and malaria vaccines, seem to confirm this. However, it is possible to take this view too far.
In addition to my AI-related concerns, I feel particularly ambivalent about e/acc's enthusiasm for military technology. In the 2023 context, where this technology is made in the United States and immediately applied to defending Ukraine, it is easy to see how it can be a force for good. Taking a broader view, however, enthusiasm for modern military technology seems to require believing that the dominant technological power will reliably be one of the good guys in most conflicts, now and in the future: military technology is good because military technology is built and controlled by America, and America is good. Does being an e/acc require being an America maximalist, betting everything on both the present and future morality of the government and the future success of the country?
On the other hand, I see the need for new ways of thinking about how to reduce these risks. OpenAI's governance structure is a case in point: it appears to be a well-intentioned attempt to balance the need to make a profit, to satisfy the investors who provided the initial capital, against the desire for checks and balances against moves that could lead to OpenAI destroying the world. In practice, however, their recent attempt to fire Sam Altman makes the structure look like an abject failure: it concentrated power in an undemocratic, unaccountable five-person board, which made critical decisions based on secret information and refused to give any details of its reasoning until threatened. Somehow, the nonprofit board managed to behave so badly that the company's employees formed an impromptu de facto union in support of the billionaire CEO against it.
Across the board, I see too many plans to save the world that involve giving a tiny group of people extreme and opaque power and hoping they use it wisely. So I find myself drawn to a different philosophy: one that has detailed ideas for how to deal with risks, but which seeks to create and maintain a more democratic world, and tries to avoid centralization of power as the go-to solution to our problems. This philosophy also extends well beyond AI, and I will use the name d/acc to refer to it.
The d here can stand for many things, especially defense, decentralization, democracy, and difference. First, think of it as a defense, and then we can see how this relates to other explanations.
A defensive world where health and democratic governance thrive
One frame for thinking about the macro consequences of technology is to look at the balance of defense versus offense. Some technologies make it easier to attack others, in the broad sense: to do things that go against their interests and that they feel a need to respond to. Other technologies make it easier to defend, without even relying on large centralized actors.
A world that favors defense is a better world for many reasons. First, of course, are the direct benefits of security: fewer deaths, less economic value lost, less time wasted on conflicts. What is less appreciated, however, is that a more defensive world makes it easier for healthier, more open, and more freedom-respecting forms of governance to thrive.
One obvious example is Switzerland. Switzerland is often cited as the closest thing the real world has to a classical-liberal governance utopia. Huge amounts of power are devolved to provinces (called "cantons"), major decisions are made by referendum, and many locals do not even know who the president is. How has such a country survived extremely challenging political pressures? Partly through brilliant political strategy, but partly through the very defense-favoring geography of its mountainous terrain.
The flag is a big plus, but so are the mountains.
The much-talked-about anarchic societies of Zomia, described in James C. Scott's book The Art of Not Being Governed, are another example: they too have largely kept their freedom and independence, thanks in large part to mountainous terrain. The Eurasian steppe, meanwhile, is the opposite of a governance utopia. Sarah Paine's discussion of maritime versus continental powers makes similar points, though she focuses on water as a defensive barrier rather than mountains. In fact, the combination of easy voluntary trade and difficult involuntary invasion, shared by Switzerland and the island states alike, seems ideal for human flourishing.
When I was advising the Gitcoin Grants funding rounds within the Ethereum ecosystem, I saw a related phenomenon. In round 4, a mini-scandal broke out when some of the most lucrative recipients were Twitter influencers, whose contributions some saw as positive and others as negative. My interpretation was that the mechanism is imbalanced: quadratic funding lets you signal that you think something is a public good, but gives you no way to signal that something is a public bad. In the extreme, a fully neutral quadratic funding system would fund both sides of a war. So for round 5, I proposed that Gitcoin include negative contributions: you pay $1 to reduce the amount of money a project receives (implicitly redistributing it to all other projects). Many people were very unhappy with the idea.
One of the many memes that circulated after the fifth round.
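For the curious, the mechanism can be sketched in a few lines of Python. This is a simplified illustration, not Gitcoin's exact matching formula: here each project's match is proportional to the square of the sum of the square roots of its contributions, and a negative contribution subtracts its square root from that sum (the function name and the signed-square-root treatment of negative contributions are my own illustrative choices).

```python
import math

def quadratic_match(contributions):
    """Simplified quadratic funding match for one project.

    contributions: list of signed dollar amounts; negative values
    are "negative contributions" counting against the project.
    Each contribution adds (or subtracts) its square root, and the
    signed sum is then squared, preserving its sign.
    """
    s = sum(math.copysign(math.sqrt(abs(c)), c) for c in contributions)
    return math.copysign(s * s, s)

# Many small supporters outweigh one large one:
quadratic_match([1, 1, 1, 1])  # four $1 donors: (1+1+1+1)^2 = 16
quadratic_match([4])           # one $4 donor: (2)^2 = 4

# A negative contribution reduces the match:
quadratic_match([4, 4, -4])    # (2 + 2 - 2)^2 = 4
```

The square-root step is what makes the mechanism favor broad support over concentrated money; adding negative contributions simply lets that same signal run in both directions.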
This seems to me like a microcosm of a bigger pattern: creating decentralized governance mechanisms to deal with negative externalities is a socially very hard problem. The classic example of decentralized governance going wrong is mob justice. Something about human psychology makes responding to negatives much trickier, and much more likely to go badly wrong, than responding to positives. This is why, even in otherwise highly democratic organizations, decisions about how to respond to negatives are often left to a centralized committee.
In many cases, this dilemma is one of the underlying reasons why the concept of freedom is so valuable. If someone says something that offends you, or leads a lifestyle that you find offensive, the pain and disgust you feel are real, and you may even feel that being physically assaulted is not as bad as being exposed to these things. But trying to agree on socially acceptable offensive and obnoxious behavior may come with more costs and dangers than reminding ourselves that certain weirdos and assholes are the price we pay for living in a free society.
However, at other times the "suck it up" approach is impractical. In such cases, another answer sometimes worth considering is defensive technology. The more secure the internet is, the less we need to invade people's privacy or rely on shady international diplomacy to go after individual hackers. The more we can build personalized tools for blocking people on Twitter, in-browser tools for detecting fraud, and collective tools for distinguishing rumor from truth, the less we have to fight over censorship. The faster we can produce vaccines, the less we have to hunt down super-spreaders. These solutions won't work in every domain - we certainly don't want a world where everyone has to wear literal body armor - but in domains where technology can make the world more defense-favoring, there is enormous value in doing so.
This core idea - that some technologies are defense-favoring and worth promoting, while others are offense-favoring and should be discouraged - has roots in the effective-altruism literature under a different name: differential technology development. In 2022, researchers at the University of Oxford put the principle well:
Figure 1: Mechanisms through which differentiated technology development reduces negative social impacts.
Classifying technologies as offensive, defensive, or neutral will inevitably be imperfect. As with "liberty", one can debate whether social-democratic policies reduce freedom by imposing heavy taxes and coercing employers, or increase it by reducing ordinary people's exposure to many kinds of risk. As with "liberty", some technologies sit at both ends of the spectrum: nuclear weapons are offense-favoring, while nuclear power serves human flourishing and is neutral between offense and defense. And different technologies may cut differently over different time horizons. But as with "liberty" (or "equality", or "the rule of law"), fuzziness at the edges is not an argument against the principle; it is an opportunity to understand its subtleties better.
Now, let's see how this principle applies to a more comprehensive view of the world. We can divide defensive technology, like other technology, into two spheres: the world of atoms and the world of bits. The world of atoms can in turn be divided into micro (i.e. biology, and later nanotechnology) and macro (i.e. what we conventionally think of as "defense", but also resilient physical infrastructure). I split the world of bits along a different axis: how easy is it, in principle, to agree on who the attacker is? Sometimes it is easy; I call that cyber defense. Other times it is harder; I call that information defense.
Macro physical defense
The most underrated defense technology in the macro realm isn’t even Iron Dome (including Ukraine’s new system) and other counter-tech and anti-missile military hardware, but resilient physical infrastructure. Most deaths in a nuclear war would likely come from supply chain disruptions rather than initial fallout and shock, and low-infrastructure internet solutions like Starlink have been critical to maintaining connectivity in Ukraine over the past year and a half.
Building tools to help people survive, and even live comfortably, independently or semi-independently of long international supply chains appears to be a valuable defensive technology with low risk of offensive use.
The task of trying to make humanity a multi-planetary civilization can also be viewed from a d/acc perspective: having at least some people able to live self-sufficiently on other planets could increase our resilience to horrific events on Earth. Even if the full vision currently seems unfeasible, it is likely that the forms of self-sufficient life that would need to be developed to realize this project could be used to help increase the resilience of our civilization on Earth.
Microphysical defense (also known as biological defense)
Covid remains a concern, particularly due to its long-term health effects. But Covid is far from the last pandemic we will face; many aspects of the modern world make it likely that more pandemics are coming:
Higher population density makes it easier for airborne viruses and other pathogens to spread. Epidemic diseases are relatively new in human history, with most beginning with urbanization thousands of years ago. Ongoing rapid urbanization means that population density will increase further over the next half century.
Increased air travel means airborne pathogens can spread rapidly around the world. People are rapidly getting richer meaning air travel is likely to increase significantly over the next half century; sophisticated modeling shows that even small increases could have serious consequences. Climate change may further increase this risk.
Animal domestication and factory farming are major risk factors. Measles may have evolved from a bovine virus less than 3,000 years ago. Today’s factory farming is also creating new strains of influenza viruses (as well as promoting antibiotic resistance, with implications for the human innate immune system).
Modern bioengineering makes it easier to create new, more virulent pathogens. Covid may or may not have leaked from a lab conducting intentional enhancement research. Regardless, lab leaks happen, and tools are rapidly improving, making it easier to intentionally create extremely deadly viruses, and even prions (zombie proteins). Artificial plagues are particularly worrisome, in part because, unlike nuclear weapons, they are not attributable: You can release a virus and no one can tell who created it. It is now possible to design a genetic sequence, send it to a wet lab for synthesis, and have it shipped to you within five days.
This is one area where two organizations, CryptoRelief and Balvi, were founded and funded in 2021 out of a large windfall of Shiba Inu coins. Initially focused on responding to the immediate crisis, CryptoRelief has more recently been building a long-term medical research ecosystem in India, while Balvi has focused on moonshot projects to improve our ability to detect, prevent, and treat Covid and other airborne diseases. Balvi insists that the projects it funds must be open source. Drawing inspiration from the 19th-century sanitation-engineering movement that defeated cholera and other waterborne pathogens, it funds projects across the technology spectrum that could make the world more resilient to airborne pathogens by default, including:
Far-UVC radiation research and development
Air filtration and air quality monitoring in India, Sri Lanka, the United States, and elsewhere
Cheap and effective decentralized air quality testing equipment
Research into long-term Covid causes and potential treatment options (the main cause may be simple, but clarifying the mechanisms and finding treatments is more difficult)
Vaccines (e.g. RaDVaC, PopVax) and vaccine injury studies
A new set of non-invasive medical tools
Leveraging open source data analytics for early detection of epidemics (e.g. EPIWATCH)
Tests, including very cheap molecular rapid tests
Biosafety masks for when other methods fail
Other promising areas of research include wastewater monitoring for pathogens, improving filtration and ventilation in buildings, and better understanding and mitigating risks caused by air pollution.
There is an opportunity to build a world that is far more resilient against airborne epidemics, both natural and engineered, by default. That world would have a highly optimized pipeline in which a new epidemic is detected automatically from its very start, and people around the world can get access to targeted, locally manufacturable, verifiable open-source vaccines or other prophylactics within a month, delivered via nebulization or nasal spray (i.e., self-administered when needed, no needles required). Meanwhile, far better air quality would drastically cut transmission rates, snuffing many epidemics out in their infancy.
Envision a future that does not need to resort to the hammer of social coercion - no mandates, and certainly no risky, badly designed and badly implemented mandates that might make things worse - because the public-health infrastructure is woven into the fabric of civilization. Such a world is possible, and a modest investment in biodefense could achieve it. The work would go even more smoothly if these developments were open source, free for users, and protected as public goods.
Cyber defense, blockchain and cryptography
Security professionals generally agree that the current state of computer security is terrible. Still, it is easy to underestimate the progress that has been made. Anyone who can hack into a user's wallet can steal tens of billions of dollars of cryptocurrency anonymously, and while the share lost to hacks and theft is far greater than I would like, the reality is that the majority of cryptocurrency has gone unstolen for more than a decade. There have been some recent improvements:
Trusted hardware chips built into devices, effectively creating a smaller, highly secure operating system inside the user's phone that remains secure even if the rest of the phone is hacked. Among many other use cases, these chips are increasingly being explored as a way to build more secure crypto wallets.
Browsers as the de-facto operating system. Over the past decade there has been a quiet shift from downloadable applications to in-browser applications, largely enabled by WebAssembly (WASM). Even Adobe Photoshop, whose necessity and incompatibility with Linux long kept many people from actually using Linux, is now Linux-friendly because it runs in the browser. This is also a big security win: while browsers have their flaws, they generally provide much more sandboxing than installed applications - apps cannot access arbitrary files on your computer.
Hardened operating systems. GrapheneOS for mobile exists and is very usable. QubesOS exists for desktop; in my experience it is currently somewhat less usable than Graphene, but it is improving.
Attempts to move beyond passwords. Passwords are hard to secure because they are hard to remember and easy to eavesdrop. Lately there has been a growing movement to reduce reliance on passwords and make hardware-based multi-factor authentication actually work.
However, the lack of cyber defense in other areas has also led to significant setbacks. The need to protect against spam has made email very oligopolistic in practice, making it difficult to self-host or to create a new email provider. Many online applications, including Twitter, require users to log in to view content and block IP addresses from VPNs, making privacy-preserving access to the internet harder. Software centralization is also risky because of "weaponized interdependence": modern technology tends to flow through centralized chokepoints, and the operators of those chokepoints use that power to gather information, manipulate outcomes, or exclude specific actors - a tactic that even appears to have been deployed against the blockchain industry itself.
These trends are worrying, because they threaten what has historically been one of my great hopes for the future of freedom and privacy. In his book Future Imperfect, David Friedman predicted that we might get a split future: the physical world would become ever more surveilled, but through cryptography the online world would retain, or even improve, its privacy. Unfortunately, as we have seen, such a counter-trend is far from guaranteed.
This is why I place such emphasis on cryptographic technologies like blockchains and zero-knowledge proofs. Blockchains let us create economic and social structures with a "shared hard drive" without relying on centralized actors. Cryptocurrency lets individuals save money and make financial transactions, as they could with cash before the internet, without dependence on trusted third parties that can change the rules at will. It can also serve as a fallback anti-Sybil mechanism, making attacks and spam expensive even for users who cannot or do not want to reveal their real-world identity. Account abstraction, and social recovery wallets in particular, can protect our crypto assets, and potentially other assets in the future, without over-relying on centralized intermediaries.
Zero-knowledge proofs can be used for privacy, letting users prove facts about themselves without revealing private information. For example, wrapping a digital passport signature in a ZK-SNARK can prove that you are a unique citizen of a country, without revealing which citizen you are. Technologies like this let us keep the benefits of privacy and anonymity - properties widely considered essential for applications like voting - while still getting security guarantees and fighting spam and bad actors.
A proposed design for a ZK social media system, in which moderation actions are possible and users can be punished, all without knowing anyone's identity.
An excellent example of this in practice is Zupass, incubated at Zuzalu. This is an app, already used by hundreds of people at Zuzalu and more recently by thousands of people at Devconnect for ticketing, that lets you hold tickets, memberships, (non-transferable) digital collectibles and other attestations, and prove things about them without revealing private information. For example, you can prove that you are a unique registered resident of Zuzalu, or a Devconnect ticket holder, without revealing anything else about who you are. These proofs can be shown in person, via QR codes, or digitally, to log in to applications such as Zupoll, an anonymous voting system open only to Zuzalu residents.
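To illustrate the data flow behind this style of proof, here is a toy sketch in the spirit of systems like Semaphore: each member publishes a hash commitment to a secret, the group is a Merkle tree of commitments, and a per-topic "nullifier" lets an app detect double-signaling without learning who signaled. All names and structure here are invented for illustration, and crucially the sketch itself is not zero-knowledge: in the real systems, a ZK-SNARK proves that your commitment is a leaf of the tree without revealing which leaf.

```python
import hashlib

def H(*parts: bytes) -> bytes:
    # SHA-256 over the concatenation of all parts.
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def commitment(secret: bytes) -> bytes:
    # A member's public "identity commitment"; it hides the secret.
    return H(b"id", secret)

def merkle_root(leaves):
    # Pad to a power of two, then hash pairwise up to the root.
    level = list(leaves)
    while len(level) & (len(level) - 1):
        level.append(b"\x00" * 32)
    while len(level) > 1:
        level = [H(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def nullifier(secret: bytes, topic: bytes) -> bytes:
    # The same member signaling twice on the same topic produces the
    # same nullifier, so duplicates are detectable, but the nullifier
    # by itself does not reveal which member produced it.
    return H(b"null", secret, topic)
```

In Zupoll-style anonymous voting, the (SNARK-verified) membership proof plus a per-poll nullifier is what enables "one vote per resident" without identifying the voter.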
These technologies are an excellent example of the d/acc principle: they let users and communities verify trustworthiness without compromising privacy, and protect their security without relying on centralized chokepoints that impose their own definitions of who the good guys and bad guys are. They increase global accessibility by creating better and fairer ways to protect the security of users and services than the common practice today of discriminating against entire countries deemed untrustworthy. These are very powerful primitives that may be necessary if we want to preserve a decentralized vision of information security going into the 21st century. A broader commitment to defensive technology in cyberspace can make the internet more open, safe, and free in very important ways.
Information defense

What I have described so far is cyber defense, which deals with situations where it is easy for reasonable humans to reach consensus on who the attacker is. If someone tries to hack into your wallet, it is easy to agree that the hacker is the bad guy. If someone tries to DoS a website, it is easy to agree that they are malicious and not morally equivalent to an ordinary user trying to read the site. There are other situations where the lines are blurrier. Tools for improving our defense in those situations are what I call information defense.
Take, for example, fact-checking (a.k.a. preventing "misinformation"). I am a big fan of Community Notes, which does a lot to help users identify truths and falsehoods in other users' tweets. Community Notes uses a new algorithm that surfaces not the notes that are most popular, but the notes that are most endorsed by users across the political spectrum.
Community Notes in action.
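The bridging idea can be illustrated with a toy scoring function. The real Community Notes algorithm infers viewpoint dimensions via matrix factorization; in this deliberately simplified sketch the viewpoint groups are given as input, and a note's score is the minimum of its approval rates across groups, so only notes endorsed across the spectrum score highly.

```python
def bridging_score(ratings, user_group):
    """Toy 'bridging-based' note scoring.

    ratings: dict note_id -> dict user_id -> 1 (helpful) / 0 (not helpful)
    user_group: dict user_id -> viewpoint group label

    A note's score is the MINIMUM approval rate across groups, so a
    note cheered by one side and panned by the other scores near zero.
    """
    scores = {}
    for note, votes in ratings.items():
        per_group = {}  # group -> (total votes, helpful votes)
        for user, v in votes.items():
            g = user_group[user]
            tot, pos = per_group.get(g, (0, 0))
            per_group[g] = (tot + 1, pos + v)
        if len(per_group) < 2:
            scores[note] = 0.0  # no cross-group signal at all
            continue
        scores[note] = min(pos / tot for tot, pos in per_group.values())
    return scores
```

A note rated helpful by both groups scores 1.0, while a purely partisan note scores 0.0, which captures the "endorsed across the political spectrum" property in miniature.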
I am also a fan of prediction markets, which can surface the significance of events in real time, before the dust settles and before there is consensus about which way things are going. For example, the Polymarket on Sam Altman gave a very useful hour-by-hour summary of the unfolding revelations and negotiations, providing much-needed context for people who only see individual news items and do not understand the significance of each one.
Prediction markets are often flawed, but Twitter influencers willing to confidently pronounce on what will happen over the next year are often even more flawed. There is still plenty of room to improve prediction markets. For example, prediction markets generally have thin volume on all but the most high-profile events; a natural direction for addressing this is prediction markets in which AIs participate.
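One standard tool for keeping thin prediction markets tradable (whether the counterparties are humans or AIs) is Hanson's logarithmic market scoring rule (LMSR), an automated market maker that always quotes a price. A minimal sketch, with the liquidity parameter b chosen arbitrarily:

```python
import math

def lmsr_cost(q, b=100.0):
    # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b)),
    # where q_i is the number of outstanding shares of outcome i.
    return b * math.log(sum(math.exp(x / b) for x in q))

def lmsr_price(q, i, b=100.0):
    # Instantaneous price of outcome i: exp(q_i/b) / sum_j exp(q_j/b).
    # Prices across outcomes always sum to 1, like probabilities.
    z = sum(math.exp(x / b) for x in q)
    return math.exp(q[i] / b) / z

def buy_cost(q, i, shares, b=100.0):
    # Cost to buy `shares` of outcome i = C(q') - C(q).
    q2 = list(q)
    q2[i] += shares
    return lmsr_cost(q2, b) - lmsr_cost(q, b)
```

Because the market maker always quotes a price, even a market with almost no traders stays usable; the subsidy the operator risks is bounded by b times the log of the number of outcomes.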
Within the blockchain space, there is a particular type of information defense I think we need much more of. Namely, wallets should be much more opinionated and proactive about helping users figure out the meaning of what they are signing, and protecting them from fraud and scams. This is an intermediate case: what is and is not a scam is less subjective than opinions about contested social events, but more subjective than telling legitimate users apart from DoS attackers or hackers. Metamask already maintains a scam database and automatically blocks users from visiting fraudulent sites.
Apps like Fire are an example of a deeper approach. However, security software shouldn’t be something that needs to be explicitly installed; it should be a default setting for crypto wallets or even browsers.
Because information defense is more subjective, it is inherently more collective than cyber defense: you need to somehow plug into a large and sophisticated group of people to identify which information is likely true or false, and which applications are fraudulent Ponzi schemes. There is an opportunity for developers to go much further in developing effective information defense, and in hardening the forms of information defense we already have. Something like Community Notes could be included in browsers, covering not just social media platforms but the whole internet.
Social technology beyond the “defense” framework
To some extent, I can justly be accused of stretching things by characterizing some of these information technologies as "defensive". After all, defense is about protecting well-intentioned actors from malicious actors (or, in some cases, from nature). Some of these social technologies, however, are about helping well-intentioned actors form consensus.
A good example is pol.is, which uses an algorithm similar to (and predating) Community Notes to help communities identify points of agreement among subgroups that disagree on many other things. Viewpoints.xyz was inspired by pol.is and has a similar spirit:
Techniques like this could be used to enable more decentralized governance over contested decisions. Again, the blockchain community is a good proving ground here, and has already shown the value of these algorithms. Generally, decisions about which protocol improvements ("EIPs") to make to Ethereum are made by a fairly small group in meetings called "All Core Devs calls". For highly technical decisions that most community members have no strong feelings about, this works reasonably well. For more consequential decisions that touch protocol economics, or fundamental values like immutability and censorship resistance, it is often not enough. Back in 2016-17, when a series of contentious decisions was being made - the DAO hard fork, issuance reductions, and (not) unfreezing the Parity wallet - tools like Carbonvote and social media polls helped the community and the developers see which way community opinion was pointing.
Carbonvote on the DAO fork.
Carbonvote had its flaws: it relied on ETH holdings to determine who counted as a member of the Ethereum community, leaving the outcome dominated by a small number of wealthy ETH holders ("whales"). With modern tools, however, we could build a much better Carbonvote, using multiple signals - POAPs, Zupass stamps, Gitcoin passports, Protocol Guild memberships, as well as ETH (or even solo-staked ETH) holdings - to gauge community membership.
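A "better Carbonvote" along these lines can be sketched as a capped, weighted combination of signals, so that no single signal - especially raw ETH holdings - can dominate. The signal names, weights, and caps below are invented for illustration, not an actual specification:

```python
def membership_score(signals, weights=None):
    """Toy multi-signal community-membership score.

    signals: dict of raw signal values for one person, e.g.
        {"poap": 3, "zupass": 1, "gitcoin_passport": 12.0,
         "protocol_guild": 0, "staked_eth": 2.0}

    Each signal is weighted AND capped, so a whale with enormous
    staked-ETH holdings cannot outweigh broad participation.
    """
    # name -> (weight per unit, cap on counted units); illustrative only.
    default = {
        "poap": (1.0, 5),
        "zupass": (2.0, 3),
        "gitcoin_passport": (0.2, 20),
        "protocol_guild": (5.0, 1),
        "staked_eth": (1.0, 4),
    }
    weights = weights or default
    score = 0.0
    for name, (w, cap) in weights.items():
        score += w * min(signals.get(name, 0), cap)
    return score
```

Under this toy scheme, someone holding 1000 staked ETH and nothing else scores only 4.0 (the cap), while a modest but diverse participant scores higher, which is exactly the anti-whale property a Carbonvote successor would want.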
Such a tool could be used by any community to make higher quality decisions, find common ground, coordinate migrations (physical or digital), or perform other operations without relying on opaque central leadership. This is not defense acceleration per se, but it could be called democratic acceleration. Such tools could even be used to improve and democratize the governance of key actors and institutions working in the field of AI.
So what is the path forward for superintelligence?
Now, all of this sounds great, and could make the world a far more harmonious, safer, and freer place over the coming century. However, it does not yet address the elephant in the room: superintelligent AI.
The path forward tacitly proposed by many who worry about AI essentially leads to a minimal AI world government. Recent versions include a proposal for a "multinational AGI consortium" ("MAGIC"). Such a consortium, if it formed and succeeded at its goal of creating superintelligent AI, would naturally become a de-facto minimal world government. Longer term, there is also the idea of a "pivotal act": we create an AI that performs a single, one-time action that rearranges the world into a game board where humans are still in charge, but which is somehow more favorable to defense and to human flourishing.
The main practical problem I see with this is that people do not seem to actually trust any specific governance mechanism with the authority to build such a thing. This becomes glaringly obvious when you look at the results of my recent Twitter polls, which asked whether people would rather see AI monopolized by a single entity with a decade-long head start, or AI for everyone delayed by a decade:
Each individual poll is small, but what they lack in size they make up for in the consistency of their results across a wide variety of audiences and framings. In nine out of nine polls, the majority would rather see highly advanced AI simply delayed by a decade than monopolized by a single group - whether a corporation, a government, or a multinational body. In seven of the nine, delay won by at least two to one. This seems like an important fact for anyone pursuing AI regulation to understand. Current approaches have focused on licensing schemes and regulatory requirements that try to restrict AI development to a smaller number of hands, but these have met broad resistance precisely because people do not want to see anyone monopolize something so powerful. Even if such top-down regulatory proposals reduce the risk of extinction, they raise the chance of a permanent slide into centralized totalitarianism. Paradoxically, might an agreement to outright ban extremely advanced AI research (perhaps with an exemption for biomedical AI), combined with measures to mandate open-sourcing of the models that are not banned, as a way of reducing profit motives while further improving equality of access, be more popular?
The approach preferred by most of those who oppose the "one global org to govern AI" path is polytheistic AI: intentionally ensure that there are many people and companies developing many AIs, so that no one of them ever becomes far more powerful than the others. That way, the theory goes, even as AIs become superintelligent, we can retain a balance of power.
This philosophy is interesting, but my experience trying to ensure "polytheism" within the Ethereum ecosystem makes me worry that it is an inherently unstable equilibrium. In Ethereum, we have intentionally tried to ensure decentralization in many ways: making sure no single codebase controls more than half of the proof-of-stake network, resisting the dominance of large staking pools, improving geographic decentralization, and so on. Essentially, Ethereum is genuinely attempting the old libertarian dream of a market-based society that uses social pressure, rather than government, as the antitrust regulator. To some extent this has worked: the Prysm client's dominance has fallen from above 70% to under 45%. But it is not an automatic market process: it is the product of human intention and coordinated action.
My experience within Ethereum mirrors lessons from the wider world, where many markets have proven to be natural monopolies. With superintelligent AIs acting independently of humans, the situation is even more unstable. Thanks to recursive self-improvement, the strongest AI may pull ahead very quickly, and once an AI is more powerful than humans, there is no force that can rebalance things.
Furthermore, even if we do get a stable polytheistic world with superintelligent AI, we still face another problem: we get a universe where humans become pets.
A Path to Happiness: Merging with Artificial Intelligence?
Another option I have been hearing more about lately is to treat AI less as something separate from humans, and to focus instead on tools that augment human cognition rather than replace it.
A recent example is AI drawing tools. Today, the most prominent tools for making AI-generated images require human input for only one step, after which the AI takes over completely. Another option is to focus more on an AI version of something like Photoshop: the artist or the AI might make an early draft of an image, and then the two collaborate to improve it through a process of real-time feedback.
Source: generative AI fill in Photoshop, 2023. I tried it, and it took some getting used to, but it actually works quite well!
Another direction in a similar spirit is the open agency architecture, which proposes separating the different parts of an AI's "mind" (e.g. making plans, executing plans, interpreting information from the outside world) into distinct components, and interposing diverse human feedback between them.
So far this sounds mundane, and something almost everyone can agree would be good. The economist Daron Acemoglu's work is far from this kind of AI futurism, but his new book Power and Progress hints at a desire to see more AI of exactly these types.
But if we want to extend the idea of human-AI cooperation further, we arrive at more radical conclusions. Unless we create a world government powerful enough to detect and stop every small group hacking away on individual GPUs and laptops, someone will eventually create a superintelligent AI - one that can think a thousand times faster than we do - and no team of humans using ordinary tools will be able to compete. So we need to take the idea of human-machine cooperation much deeper and further.
The first natural step is a brain-computer interface. Brain-computer interfaces could give humans more direct access to increasingly powerful forms of computing and cognition, shortening the two-way communication loop between humans and machines from seconds to milliseconds. It would also significantly reduce the mental effort cost of getting a computer to help you gather facts, provide advice, or execute a plan.
The later stages of this roadmap get really weird. In addition to brain-computer interfaces, there are various ways to directly improve our brains through biological innovations. The final step in merging these two routes may involve uploading our minds to computers to run directly. This will also become the ultimate solution to physical security: protecting yourself from harm will no longer be a complex matter of protecting the inevitably soft human body, but a simpler matter of making a backup of your data.
Directions like these sometimes cause unease, partly because they are irreversible and partly because they may hand powerful people even more advantage over the rest of us. Brain-computer interfaces carry particular dangers - after all, we are talking about literally reading and writing people's minds. These concerns are exactly why I believe a security-focused open-source movement, rather than closed proprietary companies and venture capital funds, should take the lead on this path. Moreover, all of these problems are more severe with superintelligent AIs that act independently of humans than with augmentations that stay closely coupled to humans. A divide between the "augmented" and the "non-augmented" already exists today because of limits on who can access ChatGPT.
If we want a future that is both superintelligent and "human" - one where humans are not mere pets but actually retain meaningful agency over the world - then something like this feels like the most natural option. There are also good arguments for why it could be safer: by involving human feedback at each step of the decision-making process, we reduce the incentive to offload high-level planning to the AI itself, and thereby reduce the chance that the AI, on its own, does something totally misaligned with human values.
Another argument for this direction is that it may be more palatable to offer builders an alternative path forward than to simply shout "pause". It will require a philosophical shift away from the current mentality that advances which touch humans are dangerous while advances detached from humans are presumptively safe. But it has a huge countervailing benefit: it gives developers something to do. Today, the AI safety movement's main message to AI developers seems to be "you should just stop". One can work on alignment research, but today this lacks economic incentives. By contrast, the common e/acc message of "you're already a hero just the way you are" is understandably very appealing. A d/acc message - "you should build, and build things that make you and humanity flourish, but be far more selective and intentional in making sure you are building things that help us flourish" - could be a winner.
Does d/acc fit with your existing philosophy?
If you are e/acc, then d/acc is a subspecies of e/acc, just more selective and conscious.
If you are an effective altruist, d/acc is a repackaging of the idea of effective altruism, albeit with a greater emphasis on liberal and democratic values.
If you are a libertarian, then d/acc is a subspecies of techno-libertarianism, albeit more pragmatic, more critical of the "technocapital machine", and willing to accept government intervention today (at least, if cultural intervention fails) to prevent a far worse, far less libertarian future.
If you are a pluralist in Glen Weyl's sense, then d/acc is a framework that can easily incorporate the better democratic coordination technologies that plurality emphasizes.
If you are a public health advocate, the ideas of d/acc can provide inspiration for a broader long-term vision and provide you with the opportunity to find common ground with techies with whom you may otherwise disagree.
If you are a blockchain advocate, then d/acc is a more modern and broader narrative than the fifteen-year-old emphasis on hyperinflation and banks, placing blockchains in the context of a concrete strategy for reaching a brighter future.
If you're a solarpunk, then d/acc is a subspecies of solarpunk, sharing a similar emphasis on conscious and collective action.
If you're a lunarpunk, then you'll appreciate d/acc's emphasis on information defense and on maintaining privacy and freedom.
We are the brightest star
I love technology because technology expands human potential. Ten thousand years ago we could make a few simple hand tools, alter which plants grew on a small patch of land, and build basic houses. Today we can build 800-meter towers, store the entirety of recorded human knowledge in a device in our hand, communicate instantly across the globe, extend our lifespans, and live happy, fulfilled lives without expecting that our good friends will routinely die of disease.
We started at the bottom and now we are here.
I believe these things are deeply good, and that extending humanity's reach further, to the planets and the stars, is also deeply good, because I believe humanity is deeply good. In some circles it is fashionable to be skeptical: the voluntary human extinction movement argues that Earth would be better off without humans at all, and many more hope to see far fewer humans in the centuries to come. It is common to argue that humans are bad because we cheat and steal, engage in colonialism and war, and mistreat and annihilate other species. My reply to this style of thinking is one simple question: compared to what?
Yes, humans are often mean, but much more often we show kindness and mercy, and work together for our common good. Even during wars we generally take care to protect civilians - certainly not nearly enough, but far more than we did 2,000 years ago. The next century may well bring widely available non-animal-based meat, eliminating the largest moral catastrophe that humans today can justly be accused of. Non-human animals are nothing like this: there is no case of a cat adopting a whole lifestyle of refusing to eat mice on ethical principle. The sun grows brighter every year, and in about a billion years it is expected to make the Earth too hot to sustain life. Does the sun even contemplate the genocide it is going to cause?
And so I firmly believe that, out of everything we know and have seen in our universe, we humans are the brightest star. We are the one thing we know of that, even if imperfectly, sometimes makes a sincere effort to care about "the good" and to adjust our behavior to serve it better. Two billion years from now, if the Earth, or any part of the universe, still bears the beauty of Earthly life, it will be human artifices, like space travel and geoengineering, that made it so.