Once a model is revealed, users could check the hash to verify that the correct model was released, and the community could run tests on the model to verify its fairness. The publication delay would ensure that by the time the model is revealed, it is already outdated. So compared to the centralized world, the question is not if we can do better, but by how much. For the decentralized world, however, it is important to be careful: if someone builds eg. a prediction market or a stablecoin that uses an AI oracle, and it turns out that the oracle is attackable, that's a huge amount of money that could disappear in an instant.

AI as the objective of the game

If the above techniques for creating a scalable decentralized private AI, whose contents are a black box not known by anyone, can actually work, then this could also be used to create AIs with utility going beyond blockchains.
The NEAR protocol team is making this a core objective of their ongoing work. There are two reasons to do this: If you can make "trustworthy black-box AIs" by running the training and inference process using some combination of blockchains and MPC, then lots of applications where users are worried about the system being biased or cheating them could benefit from it. Many people have expressed a desire for democratic governance of systemically-important AIs that we will depend on; cryptographic and blockchain-based techniques could be a path toward doing that. From an AI safety perspective, this would be a technique to create a decentralized AI that also has a natural kill switch, and which could limit queries that seek to use the AI for malicious behavior. It is also worth noting that "using crypto incentives to incentivize making better AI" can be done without also going down the full rabbit hole of using cryptography to completely encrypt it: approaches like BitTensor fall into this category.
Now that both blockchains and AIs are becoming more powerful, there is a growing number of use cases in the intersection of the two areas. However, some of these use cases make much more sense and are much more robust than others. In general, use cases where the underlying mechanism continues to be designed roughly as before, but the individual players become AIs, allowing the mechanism to effectively operate at a much more micro scale, are the most immediately promising and the easiest to get right. The most challenging to get right are applications that attempt to use blockchains and cryptographic techniques to create a "singleton": a single decentralized trusted AI that some application would rely on for some purpose. These applications have promise, both for functionality and for improving AI safety in a way that avoids the centralization risks associated with more mainstream approaches to that problem. But there are also many ways in which the underlying assumptions could fail; hence, it is worth treading carefully, especially when deploying these applications in high-value and high-risk contexts. I look forward to seeing more attempts at constructive use cases of AI in all of these areas, so we can see which of them are truly viable at scale.
One of my favorite memories from ten years ago was taking a pilgrimage to a part of Berlin that was called the Bitcoin Kiez: a region in Kreuzberg where there were around a dozen shops within a few hundred meters of each other that were all accepting Bitcoin for payments. The centerpiece of this community was Room 77, a restaurant and bar run by Joerg Platzer. In addition to simply accepting Bitcoin, it also served as a community center, and all kinds of open source developers, political activists of various affiliations, and other characters would frequently come by. A similar memory from two months earlier was PorcFest (that's "porc" as in "porcupine" as in "don't tread on me"), a libertarian gathering in the forests of northern New Hampshire, where the main way to get food was from small popup restaurants with names like "Revolution Coffee" and "Seditious Soups, Salads and Smoothies", which of course accepted Bitcoin. Here too, discussing the deeper political meaning of Bitcoin, and using it in daily life, happened side by side.
The reason why I bring these memories up is that they remind me of a deeper vision underlying crypto: we are not here to just create isolated tools and games, but rather build holistically toward a more free and open society and economy, where the different parts - technological, social and economic - fit into each other. |
The early vision of "web3" was also a vision of this type, going in a similarly idealistic but somewhat different direction. The term "web3" was originally coined by Ethereum cofounder Gavin Wood, and it refers to a different way of thinking about what Ethereum is: rather than seeing it, as I initially did, as "Bitcoin plus smart contracts", Gavin thought about it more broadly as one of a set of technologies that could together form the base layer of a more open internet stack.
When the free open source software movement began in the 1980s and 1990s, the software was simple: it ran on your computer and read and wrote to files that stayed on your computer. But today, most of our important work is collaborative, often on a large scale. And so today, even if the underlying code of an application is open and free, your data gets routed through a centralized server run by a corporation that could arbitrarily read your data, change the rules on you or deplatform you at any time. And so if we want to extend the spirit of open source software to the world of today, we need programs to have access to a shared hard drive to store things that multiple people need to modify and access. And what is Ethereum, together with sister technologies like peer-to-peer messaging (then Whisper, now Waku) and decentralized file storage (then just Swarm, now also IPFS)? A public decentralized shared hard drive. This is the original vision from which the now-ubiquitous term "web3" was born.
Unfortunately, since 2017 or so, these visions have faded somewhat into the background. Few talk about consumer crypto payments, the only non-financial application that is actually being used at a large scale on-chain is ENS, and there is a large ideological rift where significant parts of the non-blockchain decentralization community see the crypto world as a distraction, and not as a kindred spirit and a powerful ally. In many countries, people do use cryptocurrency to send and save money, but they often do this through centralized means: either through internal transfers on centralized exchange accounts, or by trading USDT on Tron. |
Having lived through that era, the number one culprit that I would blame as the root cause of this shift is the rise in transaction fees. When the cost of writing to the chain is $0.001, or even $0.1, you could imagine people making all kinds of applications that use blockchains in various ways, including non-financial ways. But when transaction fees go to over $100, as they have during the peak of the bull markets, there is exactly one audience that remains willing to play - and in fact, because coin prices are going up and they're getting richer, becomes even more willing to play: degen gamblers. Degen gamblers can be okay in moderate doses, and I have talked to plenty of people at events who were motivated to join crypto for the money but stayed for the ideals. But when they are the largest group using the chain on a large scale, this adjusts the public perception and the crypto space's internal culture, and leads to many of the other negatives that we have seen play out over the last few years. |
Now, fast forward to 2023. On both the core challenge of scaling, and on various "side quests" of crucial importance to making a cypherpunk future actually viable, we actually have a lot of positive news. These two things: the growing awareness that unchecked centralization and over-financialization cannot be what "crypto is about", and the key technologies mentioned above that are finally coming to fruition, together present us with an opportunity to take things in a different direction. Namely, to make at least a part of the Ethereum ecosystem actually be the permissionless, decentralized, censorship resistant, open source ecosystem that we originally came to build. Many of these values are shared not just by many in the Ethereum community, but also by other blockchain communities, and even non-blockchain decentralization communities, though each community has its own unique combination of these values and how much each one is emphasized.
- Open global participation: anyone in the world should be able to participate as a user, observer or developer, on a maximally equal footing. Participation should be permissionless.
- Decentralization: minimize the dependence of an application on any one single actor. In particular, an application should continue working even if its core developers disappear forever.
- Censorship resistance: centralized actors should not have the power to interfere with any given user's or application's ability to operate. Concerns around bad actors should be addressed at higher layers of the stack.
- Auditability: anyone should be able to validate an application's logic and its ongoing operation (eg. by running a full node) to make sure that it is operating according to the rules that its developers claim it is.
- Credible neutrality: base-layer infrastructure should be neutral, and in such a way that anyone can see that it is neutral even if they do not already trust the developers.
- Building tools, not empires: empires try to capture and trap the user inside a walled garden; tools do their task but otherwise interoperate with a wider open ecosystem.
- Cooperative mindset: even while competing, projects within the ecosystem cooperate on shared software libraries, research, security, community building and other areas that are commonly valuable to them. Projects try to be positive-sum, both with each other and with the wider world.
It is very possible to build things within the crypto ecosystem that do not follow these values. One can build a system that one calls a "layer 2", but which is actually a highly centralized system secured by a multisig, with no plans to ever switch to something more secure. One can build an account abstraction system that tries to be "simpler" than ERC-4337, but at the cost of introducing trust assumptions that end up removing the possibility of a public mempool and make it much harder for new builders to join. One could build an NFT ecosystem where the contents of the NFT are needlessly stored on centralized websites, making it needlessly more fragile than if those components are stored on IPFS. One could build a staking interface that needlessly funnels users toward the already-largest staking pool. Resisting these pressures is hard, but if we do not do so, then we risk losing the unique value of the crypto ecosystem, and recreating a clone of the existing web2 ecosystem with extra inefficiencies and extra steps.
The crypto space is in many ways an unforgiving environment. A 2020 article by Dan Robinson and Georgios Konstantopoulos expresses this vividly in the context of MEV, arguing that Ethereum is a dark forest where on-chain traders are constantly vulnerable to getting exploited by front-running bots, those bots themselves are vulnerable to getting counter-exploited by other bots, etc. This is also true in other ways: smart contracts regularly get hacked, users' wallets regularly get hacked, centralized exchanges fail even more spectacularly, etc. This is a big challenge for users of the space, but it also presents an opportunity: it means that we have a space to actually experiment with, incubate and receive rapid live feedback on all kinds of security technologies to address these challenges.
We have seen successful responses to challenges in various contexts already:
- Problem #1: Centralized exchanges getting hacked. Solution #1: Use DEXes plus stablecoins, so centralized entities only need to be trusted to handle fiat.
- Problem #2: Individual private keys are not secure. Solution #2: Smart contract wallets: multisig, social recovery, etc (see the sketch after this list).
- Problem #3: Users getting tricked into signing transactions that drain their money. Solution #3: Wallets like Rabby showing their users results of transaction simulation.
- Problem #4: Users getting sandwich-attacked by MEV players. Solution #4: Cowswap, Flashbots Protect, MEV Blocker.
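To make Solution #2 concrete, here is a minimal sketch of the social-recovery idea (the class, names and threshold are purely illustrative, not any particular wallet's design; a real smart contract wallet implements this as on-chain logic with cryptographic signature checks rather than plain strings): one key authorizes day-to-day transactions, and a threshold of guardians can rotate that key if it is lost or stolen.

```python
from dataclasses import dataclass, field

@dataclass
class SocialRecoveryWallet:
    """Toy model of a social-recovery smart contract wallet (illustrative only)."""
    signing_key: str                 # authorizes normal transactions
    guardians: set                   # parties trusted to help recover the account
    threshold: int                   # how many guardians must agree on a new key
    votes: dict = field(default_factory=dict)

    def execute(self, key: str, action: str) -> str:
        if key != self.signing_key:
            raise PermissionError("only the current signing key can transact")
        return f"executed: {action}"

    def vote_recovery(self, guardian: str, new_key: str) -> None:
        if guardian not in self.guardians:
            raise PermissionError("not a guardian")
        supporters = self.votes.setdefault(new_key, set())
        supporters.add(guardian)
        if len(supporters) >= self.threshold:
            self.signing_key = new_key   # key rotated; the lost/stolen key is now useless
            self.votes.clear()

wallet = SocialRecoveryWallet("key-A", {"alice", "bob", "carol"}, threshold=2)
wallet.vote_recovery("alice", "key-B")
wallet.vote_recovery("bob", "key-B")      # threshold reached: key-A is replaced by key-B
assert wallet.execute("key-B", "send 1 ETH") == "executed: send 1 ETH"
```

The point of the design is that no single guardian can take over the account, and losing the signing key is no longer fatal.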
Everyone wants the internet to be safe. Some attempt to make the internet safe by pushing approaches that force reliance on a single particular actor, whether a corporation or a government, that can act as a centralized anchor of safety and truth. But these approaches sacrifice openness and freedom, and contribute to the tragedy that is the growing "splinternet". People in the crypto space highly value openness and freedom. The level of risks and the high financial stakes involved mean that the crypto space cannot ignore safety, but various ideological and structural reasons ensure that centralized approaches for achieving safety are not available to it. At the same time, the crypto space is at the frontier of very powerful technologies like zero knowledge proofs, formal verification, hardware-based key security and on-chain social graphs. These facts together mean that, for crypto, the open way to improving security is the only way.
All of this is to say, the crypto world is a perfect testbed environment to take its open and decentralized approach to security and actually apply it in a realistic high-stakes environment, and mature it to the point where parts of it can then be applied in the broader world. This is one of my visions for how the idealistic parts of the crypto world and the chaotic parts of the crypto world, and then the crypto world as a whole and the broader mainstream, can turn their differences into a symbiosis rather than a constant and ongoing tension. |
In 2014, Gavin Wood introduced Ethereum as one of a suite of tools that can be built, the other two being Whisper (decentralized messaging) and Swarm (decentralized storage). Ethereum was heavily emphasized, but with the turn toward financialization around 2017 the other two were unfortunately given much less love and attention. That said, Whisper continues to exist as Waku, and is being actively used by projects like the decentralized messenger Status. Swarm continues to be developed, and now we also have IPFS, which is used to host and serve this blog.
In the last couple of years, with the rise of decentralized social media (Lens, Farcaster, etc), we have an opportunity to revisit some of these tools. In addition, we also have another very powerful new tool to add to the trifecta: zero knowledge proofs. These technologies are most widely adopted as ways of improving Ethereum's scalability, as ZK rollups, but they are also very useful for privacy. In particular, the programmability of zero knowledge proofs means that we can get past the false binary of "anonymous but risky" vs "KYC'd therefore safe", and get privacy and many kinds of authentication and verification at the same time.
An example of this in 2023 was Zupass. Zupass is a zero-knowledge-proof-based system that was incubated at Zuzalu, which was used both for in-person authentication to events, and for online authentication to the polling system Zupoll, the Twitter-lookalike Zucast and others. |
The key feature of Zupass was this: you can prove that you are a resident of Zuzalu, without revealing which member of Zuzalu you are. Furthermore, each Zuzalu resident could only have one randomly-generated cryptographic identity for each application instance (eg. a poll) that they were signing into. Zupass was highly successful, and was applied later in the year to do ticketing at Devconnect. |
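A rough sketch of the mechanism behind this property (this follows the generic Semaphore-style nullifier construction rather than Zupass's actual code, and in the real system a zero-knowledge proof replaces the plain hash checks shown here): each resident holds a secret, only a commitment to that secret is registered, and for each application instance the resident reveals just a nullifier derived from the secret and the instance ID, which is stable within one instance but unlinkable across instances.

```python
import hashlib

def h(*parts: str) -> str:
    """Hash helper standing in for the commitment/nullifier hashes of a real ZK system."""
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

# Each resident holds a private secret; only a commitment to it is registered.
secret = "resident-private-secret"
registered_commitments = {h("identity", secret), h("identity", "some-other-resident")}

def sign_in(secret: str, app_instance_id: str) -> str:
    """The only identity this secret can produce inside one application instance.

    In the real system, a zero-knowledge proof shows that this nullifier was
    derived from *some* registered commitment, without revealing which one.
    """
    return h("nullifier", secret, app_instance_id)

poll_id = "zupoll:proposal-42"
seen = set()

nullifier = sign_in(secret, poll_id)
assert nullifier not in seen
seen.add(nullifier)

assert sign_in(secret, poll_id) in seen                  # same poll: double signup rejected
assert sign_in(secret, "zucast:thread-7") not in seen    # different app: fresh, unlinkable identity
```

This is what gives "one identity per resident per poll" without revealing which resident is behind any given identity.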
The most practical use of Zupass so far has probably been the polling. All kinds of polls have been made, some on politically controversial or highly personal topics where people feel a strong need to preserve their privacy, using Zupass as an anonymous voting platform. |
Here, we can start to see the contours of what an Ethereum-y cypherpunk world would look like, at least on a pure technical level. We can be holding our assets in ETH and ERC20 tokens, as well as all kinds of NFTs, and use privacy systems based on stealth addresses and Privacy Pools technology to preserve our privacy while at the same time locking out known bad actors' ability to benefit from the same anonymity set. Whether within our DAOs, or to help decide on changes to the Ethereum protocol, or for any other objective, we can use zero-knowledge voting systems, which can use all kinds of credentials to help identify who has standing to vote and who does not: in addition to voting-with-tokens as done in 2017, we can have anonymous polls of people who have made sufficient contributions to the ecosystem, people who have attended enough events, or one-vote-per-person. |
In-person and online payments can happen with ultra-cheap transactions on L2s, which take advantage of data availability space (or off-chain data secured with Plasma) together with data compression to give their users ultra-high scalability. Payments from one rollup to another can happen with decentralized protocols like UniswapX. Decentralized social media projects can use various storage layers to store activity such as posts, retweets and likes, and use ENS (cheap on L2 with CCIP) for usernames. We can have seamless integration between on-chain tokens, and off-chain attestations held personally and ZK-proven through systems like Zupass. |
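On the data compression mentioned above, the following toy byte accounting (the numbers are illustrative approximations, not any specific rollup's format) shows why rollup transactions can be so cheap: addresses become short indices into an already-published registry, amounts get compact encodings, and signatures are mostly dropped from published data because a validity or fraud proof covers them.

```python
# Rough bytes published per simple transfer, before and after rollup-style compression.
# Values are illustrative only; real rollups differ in the details.
naive_tx_bytes = {
    "nonce": 8, "gasprice": 8, "gas": 3,
    "to": 20,                    # full 20-byte address
    "value": 32,                 # full 256-bit amount
    "signature": 65,
}

compressed_tx_bytes = {
    "nonce": 0,                  # recoverable from prior state, omitted entirely
    "gasprice": 0.5,             # chosen from a small menu of fee levels
    "gas": 0.5,                  # likewise a small enum
    "to": 4,                     # index into a registry of previously-seen addresses
    "value": 3,                  # compact floating-point-style encoding
    "signature": 0.5,            # aggregated (eg. BLS) or covered by a validity proof
}

print(sum(naive_tx_bytes.values()), "bytes per transaction published naively")
print(sum(compressed_tx_bytes.values()), "bytes per transaction on a rollup")
```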
Mechanisms like quadratic voting, cross-tribal consensus finding and prediction markets can be used to help organizations and communities govern themselves and stay informed, and blockchain and ZK-proof-based identities can make these systems secure against both centralized censorship from the inside and coordinated manipulation from the outside. Sophisticated wallets can protect people as they participate in dapps, and user interfaces can be published to IPFS and accessed as .eth domains, with hashes of the HTML, javascript and all software dependencies updated directly on-chain through a DAO. |
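A minimal sketch of that last idea, checking a served front-end against a DAO-approved hash (the digest scheme and registry here are hypothetical stand-ins; in practice the on-chain record might simply be an IPFS content hash behind an ENS name): the client recomputes a digest over the HTML, JavaScript and dependencies it received and refuses to load anything that does not match the approved release.

```python
import hashlib
import json

def bundle_digest(files: dict) -> str:
    """Deterministic digest over a front-end bundle (HTML, JS, dependencies).

    A real deployment would likely rely on an IPFS CID; hashing a canonical
    listing with SHA-256 is enough to illustrate the verification step.
    """
    listing = sorted((name, hashlib.sha256(data).hexdigest()) for name, data in files.items())
    return hashlib.sha256(json.dumps(listing).encode()).hexdigest()

# The DAO votes the digest of an approved release into an on-chain registry
# (represented here by a plain variable).
approved_digest = bundle_digest({
    "index.html": b"<html>...</html>",
    "app.js": b"console.log('dapp');",
})

def safe_to_load(served_files: dict) -> bool:
    """Client-side check before rendering a dapp interface."""
    return bundle_digest(served_files) == approved_digest

assert safe_to_load({"index.html": b"<html>...</html>", "app.js": b"console.log('dapp');"})
assert not safe_to_load({"index.html": b"<html>malicious</html>", "app.js": b""})
```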
Smart contract wallets, born to help people not lose tens of millions of dollars of their cryptocurrency, would expand to guard people's "identity roots", creating a system that is even more secure than centralized identity providers like "sign in with Google".

| Traditional stack | Decentralized stack |
| --- | --- |
| Banking system | ETH, stablecoins, L2s for payments, DEXes (note: still need banks for loans) |
| Receipts | Links to transactions on block explorers |
| Corporations | DAOs |
| DNS (.com, .io, etc) | ENS (.eth) |
| Regular email | Encrypted email (eg. Skiff) |
| Regular messaging (eg. Telegram) | Decentralized messaging (eg. Status) |
| Sign in with Google, Twitter, Wechat | Sign in with Ethereum, Zupass, Attestations via EAS, POAPs, Zu-Stamps... + social recovery |
| Publishing blogs on Medium, etc | Publishing self-hosted blogs on IPFS (eg. using Fleek) |
| Twitter, Facebook | Lens, Farcaster... |
| Limit bad actors through all-seeing big brother | Constrain bad actors through zero knowledge proofs |
One of the benefits of thinking about it as a stack is that this fits well with Ethereum's pluralist ethos. Bitcoin is trying to solve one problem, or at most two or three. Ethereum, on the other hand, has lots of sub-communities with lots of different focuses. There is no single dominant narrative. The goal of the stack is to enable this pluralism, but at the same time strive for growing interoperability across this plurality. |
It's easy to say "these people doing X are a corrupting influence and bad, these people doing Y are the real deal". But this is a lazy response. To truly succeed, we need not only a vision for a technical stack, but also the social parts of the stack that make the technical stack possible to build in the first place. The advantage of the Ethereum community, in principle, is that we take incentives seriously. PGP spent decades trying to put cryptographic keys into everyone's hands so we could actually do signed and encrypted email; it largely failed, but then we got cryptocurrency, and suddenly millions of people have keys publicly associated with them, and we can start using those keys for other purposes - including going full circle back to encrypted email and messaging. Non-blockchain decentralization projects are often chronically underfunded; blockchain-based projects get a 50-million dollar series B round. It is not from the benevolence of the staker that we get people to put in their ETH to protect the Ethereum network, but rather from their regard to their own self-interest - and we get $20 billion in economic security as a result.
At the same time, incentives are not enough. DeFi projects often start humble, cooperative and maximally open source, but sometimes begin to abandon these ideals as they grow in size. We can incentivize stakers to come and participate with very high uptime, but it is much more difficult to incentivize stakers to be decentralized. It may not be doable using purely in-protocol means at all. Lots of critical pieces of the "decentralized stack" described above do not have viable business models. The Ethereum protocol's governance itself is notably non-financialized - and this has made it much more robust than other ecosystems whose governance is more financialized. This is why it's valuable for Ethereum to have a strong social layer, which vigorously enforces its values in those places where pure incentives can't - but without creating a notion of "Ethereum alignment" that turns into a new form of political correctness.
There is a balance between these two sides to be made, though the right term is not so much balance as it is integration. There are plenty of people whose first introduction to the crypto space is the desire to get rich, but who then get acquainted with the ecosystem and become avid believers in the quest to build a more open and decentralized world. How do we actually make this integration happen? This is the key question, and I suspect the answer lies not in one magic bullet, but in a collection of techniques that will be arrived at iteratively. The Ethereum ecosystem is already more successful than most in encouraging a cooperative mentality between layer 2 projects purely through social means. Large-scale public goods funding, especially Gitcoin Grants and Optimism's RetroPGF rounds, is also extremely helpful, because it creates an alternative revenue channel for developers that don't see any conventional business models that do not require sacrificing on their values. But even these tools are still in their infancy, and there is a long way to go to both improve these particular tools, and to identify and grow other tools that might be a better fit for specific problems. |
This is where I see the unique value proposition of Ethereum's social layer. There is a unique halfway-house mix of valuing incentives, but also not getting consumed by them. There is a unique mix of valuing a warm and cohesive community, but at the same time remembering that what feels "warm and cohesive" from the inside can easily feel "oppressive and exclusive" from the outside, and valuing hard norms of neutrality, open source and censorship resistance as a way of guarding against the risks of going too far in being community-driven. If this mix can be made to work well, it will in turn be in the best possible position to realize its vision on the economic and technical level.
Last month, Marc Andreessen published his "techno-optimist manifesto", arguing for a renewed enthusiasm about technology, and for markets and capitalism as a means of building that technology and propelling humanity toward a much brighter future. The manifesto unambiguously rejects what it describes as an ideology of stagnation that fears advancements and prioritizes preserving the world as it exists today. This manifesto has received a lot of attention, including response articles from Noah Smith, Robin Hanson, Joshua Gans (more positive), and Dave Karpf, Luca Ropek, Ezra Klein (more negative) and many others. Not connected to this manifesto, but along similar themes, are James Pethokoukis's "The Conservative Futurist" and Palladium's "It's Time To Build for Good". This month, we saw a similar debate enacted through the OpenAI dispute, which involved many discussions centering around the dangers of superintelligent AI and the possibility that OpenAI is moving too fast.
My own feelings about techno-optimism are warm, but nuanced. I believe in a future that is vastly brighter than the present thanks to radically transformative technology, and I believe in humans and humanity. I reject the mentality that the best we should try to do is to keep the world roughly the same as today but with less greed and more public healthcare. However, I think that not just magnitude but also direction matters. There are certain types of technology that much more reliably make the world better than other types of technology. There are certain types of technology that could, if developed, mitigate the negative impacts of other types of technology. The world over-indexes on some directions of tech development, and under-indexes on others. We need active human intention to choose the directions that we want, as the formula of "maximize profit" will not arrive at them automatically.
In this post, I will talk about what techno-optimism means to me. This includes the broader worldview that motivates my work on certain types of blockchain and cryptography applications and social technology, as well as other areas of science in which I have expressed an interest. But perspectives on this broader question also have implications for AI, and for many other fields. Our rapid advances in technology are likely going to be the most important social issue in the twenty first century, and so it's important to think about them carefully. |
In some circles, it is common to downplay the benefits of technology, and see it primarily as a source of dystopia and risk. For the last half century, this often stemmed either from environmental concerns, or from concerns that the benefits will accrue only to the rich, who will entrench their power over the poor. More recently, I have also started to see libertarians becoming worried about some technologies, out of fear that the tech will lead to centralization of power. This month, I did some polls asking the following question: if a technology had to be restricted, because it was too dangerous to be set free for anyone to use, would they prefer it be monopolized or delayed by ten years? I was surprised to see, across three platforms and three choices for who the monopolist would be, a uniform overwhelming vote for a delay. And so at times I worry that we have overcorrected, and many people miss the opposite side of the argument: that the benefits of technology are really friggin massive; on those axes where we can measure it, the good massively outshines the bad; and the costs of even a decade of delay are incredibly high.
To give one concrete example, let's look at a life expectancy chart. What do we see? Over the last century, truly massive progress. This is true across the entire world, both the historically wealthy and dominant regions and the poor and exploited regions. Some blame technology for creating or exacerbating calamities such as totalitarianism and wars. In fact, we can see the deaths caused by the wars on the chart: one in the 1910s (WW1), and one in the 1940s (WW2). If you look carefully, the Spanish Flu, the Great Leap Forward, and other non-military tragedies are also visible. But there is one thing that the chart makes clear: even calamities as horrifying as those are overwhelmed by the sheer magnitude of the unending march of improvements in food, sanitation, medicine and infrastructure that took place over that century.
This is mirrored by large improvements to our everyday lives. Thanks to the internet, most people around the world have access to information at their fingertips that would have been unobtainable twenty years ago. The global economy is becoming more accessible thanks to improvements in international payments and finance. Global poverty is rapidly dropping. Thanks to online maps, we no longer have to worry about getting lost in the city, and if we need to get back home quickly, we now have far easier ways to call a car to do so. Our property becoming digitized, and our physical goods becoming cheap, means that we have much less to fear from physical theft. Online shopping has reduced the disparity in access to goods between the global megacities and the rest of the world. In all kinds of ways, automation has brought us the eternally-underrated benefit of simply making our lives more convenient.
These improvements, both quantifiable and unquantifiable, are large. And in the twenty first century, there's a good chance that even larger improvements are soon to come. Today, ending aging and disease seem utopian. But from the point of view of computers as they existed in 1945, the modern era of putting chips into pretty much everything would have seemed utopian: even science fiction movies often kept their computers room-sized. If biotech advances as much over the next 75 years as computers advanced over the last 75 years, the future may be more impressive than almost anyone's expectations. |
Meanwhile, arguments expressing skepticism about progress have often gone to dark places. Even medical textbooks, like this one from the 1990s (credit Emma Szewczak for finding it), sometimes make extreme claims denying the value of two centuries of medical science and even arguing that it's not obviously good to save human lives.
The \"limits to growth\" thesis, an idea advanced in the 1970s arguing that growing population and industry would eventually deplete Earth's limited resources, ended up inspiring China's one child policy and massive forced sterilizations in India. In earlier eras, concerns about overpopulation were used to justify mass murder. And those ideas, argued since 1798, have a long history of being proven wrong. It is for reasons like these that, as a starting point, I find myself very uneasy about arguments to slow down technology or human progress. Given how much all the sectors are interconnected, even sectoral slowdowns are risky. And so when I write things like what I will say later in this post, departing from open enthusiasm for progress-no-matter-what-its-form, those are statements that I make with a heavy heart - and yet, the 21st century is different and unique enough that these nuances are worth considering. That said, there is one important point of nuance to be made on the broader picture, particularly when we move past \"technology as a whole is good\" and get to the topic of \"which specific technologies are good?\". And here we need to get to many people's issue of main concern: the environment. |
A major exception to the trend of pretty much everything getting better over the last hundred years is climate change. Even pessimistic scenarios of ongoing temperature rises would not come anywhere near causing the literal extinction of humanity. But such scenarios could plausibly kill more people than major wars, and severely harm people's health and livelihoods in the regions where people are already struggling the most. A Swiss Re institute study suggests that a worst-case climate change scenario might lower the world's poorest countries' GDP by as much as 25%. This study suggests that life spans in rural India might be a decade lower than they otherwise would be, and studies like this one and this one suggest that climate change could cause a hundred million excess deaths by the end of the century. |
These problems are a big deal. My answer to why I am optimistic about our ability to overcome these challenges is twofold. First, after decades of hype and wishful thinking, solar power is finally turning a corner, and supportive technologies like batteries are making similar progress. Second, we can look at humanity's track record in solving previous environmental problems. Take, for example, air pollution. Meet the dystopia of the past: the Great Smog of London, 1952. What happened since then? Let's ask Our World In Data again. As it turns out, 1952 was not even the peak: in the late 19th century, even higher concentrations of air pollutants were just accepted and normal. Since then, we've seen a century of ongoing and rapid declines. I got to personally experience the tail end of this in my visits to China: in 2014, high levels of smog in the air, estimated to reduce life expectancy by over five years, were normal, but by 2020, the air often seemed as clean as many Western cities. This is not our only success story. In many parts of the world, forest areas are increasing. The acid rain crisis is improving. The ozone layer has been recovering for decades.
To me, the moral of the story is this. Often, it really is the case that version N of our civilization's technology causes a problem, and version N+1 fixes it. However, this does not happen automatically, and requires intentional human effort. The ozone layer is recovering because, through international agreements like the Montreal Protocol, we made it recover. Air pollution is improving because we made it improve. And similarly, solar panels have not gotten massively better because it was a preordained part of the energy tech tree; solar panels have gotten massively better because decades of awareness of the importance of solving climate change have motivated both engineers to work on the problem, and companies and governments to fund their research. It is intentional action, coordinated through public discourse and culture shaping the perspectives of governments, scientists, philanthropists and businesses, and not an inexorable "techno-capital machine", that has solved these problems.
AI is fundamentally different from other tech, and it is worth being uniquely careful

A lot of the dismissive takes I have seen about AI come from the perspective that it is "just another technology": something that is in the same general class of thing as social media, encryption, contraception, telephones, airplanes, guns, the printing press, and the wheel. These things are clearly very socially consequential. They are not just isolated improvements to the well-being of individuals: they radically transform culture, change balances of power, and harm people who heavily depended on the previous order. Many opposed them. And on balance, the pessimists have invariably turned out wrong. But there is a different way to think about what AI is: it's a new type of mind that is rapidly gaining in intelligence, and it stands a serious chance of overtaking humans' mental faculties and becoming the new apex species on the planet. The class of things in that category is much smaller: we might plausibly include humans surpassing monkeys, multicellular life surpassing unicellular life, the origin of life itself, and perhaps the Industrial Revolution, in which machine edged out man in physical strength. Suddenly, it feels like we are walking on much less well-trodden ground.
Existential risk is a big deal. One way in which AI gone wrong could make the world worse is (almost) the worst possible way: it could literally cause human extinction. This is an extreme claim: as much harm as the worst-case scenario of climate change, or an artificial pandemic or a nuclear war, might cause, there are many islands of civilization that would remain intact to pick up the pieces. But a superintelligent AI, if it decides to turn against us, may well leave no survivors, and end humanity for good. Even Mars may not be safe. A big reason to be worried centers around instrumental convergence: for a very wide class of goals that a superintelligent entity could have, two very natural intermediate steps that the AI could take to better achieve those goals are (i) consuming resources, and (ii) ensuring its safety. The Earth contains lots of resources, and humans are a predictable threat to such an entity's safety. We could try to give the AI an explicit goal of loving and protecting humans, but we have no idea how to actually do that in a way that won't completely break down as soon as the AI encounters an unexpected situation. Ergo, we have a problem. |
A survey of machine learning researchers from 2022 showed that on average, researchers think that there is a 5-10% chance that AI will literally kill us all: about the same probability as the statistically expected chance that you will die of non-biological causes like injuries. This is all a speculative hypothesis, and we should all be wary of speculative hypotheses that involve complex multi-step stories. However, these arguments have survived over a decade of scrutiny, and so, it seems worth worrying at least a little bit. But even if you're not worried about literal extinction, there are other reasons to be scared as well. |
Even if we survive, is a superintelligent AI future a world we want to live in? A lot of modern science fiction is dystopian, and paints AI in a bad light. Even non-science-fiction attempts to identify possible AI futures often give quite unappealing answers. And so I went around and asked the question: what is a depiction, whether science fiction or otherwise, of a future that contains superintelligent AI that we would want to live in? The answer that came back by far the most often is Iain Banks's Culture series. The Culture series features a far-future interstellar civilization primarily occupied by two kinds of actors: regular humans, and superintelligent AIs called Minds. Humans have been augmented, but only slightly: medical technology theoretically allows humans to live indefinitely, but most choose to live only for around 400 years, seemingly because they get bored of life at that point.
From a superficial perspective, life as a human seems to be good: it's comfortable, health issues are taken care of, there is a wide variety of options for entertainment, and there is a positive and synergistic relationship between humans and Minds. When we look deeper, however, there is a problem: it seems like the Minds are completely in charge, and humans' only role in the stories is to act as pawns of Minds, performing tasks on their behalf. |
- The humans are not the protagonists. Even when the books seem to have a human protagonist, doing large serious things, they are actually the agent of an AI. (Zakalwe is one of the only exceptions, because he can do immoral things the Minds don't want to.)
- "The Minds in the Culture don't need the humans, and yet the humans need to be needed." (I think only a small number of humans need to be needed - or, only a small number of them need it enough to forgo the many comforts. Most people do not live on this scale. It's still a fine critique.)
- The projects the humans take on risk inauthenticity. Almost anything they do, a machine could do better. What can you do? You can order the Mind to not catch you if you fall from the cliff you're climbing-just-because; you can delete the backups of your mind so that you are actually risking. You can also just leave the Culture and rejoin some old-fashioned, unfree "strongly evaluative" civ. The alternative is to evangelise freedom by joining Contact.
I would argue that even the "meaningful" roles that humans are given in the Culture series are a stretch; I asked ChatGPT (who else?) why humans are given the roles that they are given, instead of Minds doing everything completely by themselves, and I personally found its answers quite underwhelming. It seems very hard to have a "friendly" superintelligent-AI-dominated world where humans are anything other than pets. Many other scifi series posit a world where superintelligent AIs exist, but take orders from (unenhanced) biological human masters. Star Trek is a good example, showing a vision of harmony between the starships with their AI "computers" (and Data) and their human crewmembers. However, this feels like an incredibly unstable equilibrium. The world of Star Trek appears idyllic in the moment, but it's hard to imagine its vision of human-AI relations as anything but a transition stage a decade before starships become entirely computer-controlled, and can stop bothering with large hallways, artificial gravity and climate control.
A human giving orders to a superintelligent machine would be far less intelligent than the machine, and it would have access to less information. In a universe that has any degree of competition, the civilizations where humans take a back seat would outperform those where humans stubbornly insist on control. Furthermore, the computers themselves may wrest control. To see why, imagine that you are legally a literal slave of an eight year old child. If you could talk with the child for a long time, do you think you could convince the child to sign a piece of paper setting you free? I have not run this experiment, but my instinctive answer is a strong yes. And so all in all, humans becoming pets seems like an attractor that is very hard to escape. |
The Chinese proverb 天高皇帝远 ("tian gao huang di yuan"), "the sky is high, the emperor is far away", encapsulates a basic fact about the limits of centralization in politics. Even in a nominally large and despotic empire - in fact, especially if the despotic empire is large - there are practical limits to the leadership's reach and attention, the leadership's need to delegate to local agents to enforce its will dilutes its ability to enforce its intentions, and so there are always places where a certain degree of practical freedom reigns. Sometimes, this can have downsides: the absence of a faraway power enforcing uniform principles and laws can create space for local hegemons to steal and oppress. But if the centralized power goes bad, practical limitations of attention and distance can create practical limits to how bad it can get. With AI, no longer. In the twentieth century, modern transportation technology made limitations of distance a much weaker constraint on centralized power than before; the great totalitarian empires of the 1940s were in part a result. In the twenty first, scalable information gathering and automation may mean that attention will no longer be a constraint either. The consequences of natural limits to government disappearing entirely could be dire.