Engineering Security Through Coordination Problems
Recently, there was a small spat between the Core and Unlimited factions of the Bitcoin community, a spat which represents perhaps the fiftieth time the same theme was debated, but which is nonetheless interesting because of how it highlights a very subtle philosophical point about how blockchains work.
ViaBTC, a mining pool that favors Unlimited, tweeted "hashpower is law", a usual talking point for the Unlimited side, which believes that miners have, and should have, a very large role in the governance of Bitcoin, the usual argument for this being that miners are the one category of users that has a large and illiquid financial incentive in Bitcoin's success. Greg Maxwell (from the Core side) replied that "Bitcoin's security works precisely because hash power is NOT law". The Core argument is that miners only have a limited role in the Bitcoin system, to secure the ordering of transactions, and they should NOT have the power to determine anything else, including block size limits and other block validity rules.
These constraints are enforced by full nodes run by users - if miners start producing blocks according to a set of rules different than the rules that users' nodes enforce, then the users' nodes will simply reject the blocks, regardless of whether 10% or 60% or 99% of the hashpower is behind them. To this, Unlimited often replies with something like "if 90% of the hashpower is behind a new chain that increases the block limit, and the old chain with 10% hashpower is now ten times slower for five months until difficulty readjusts, would you really not update your client to accept the new chain?"
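To make the "users' nodes reject the blocks" mechanic concrete, here is a minimal sketch of how client-side validation interacts with hashpower; the specific rules and numbers are made up for illustration, not a description of any real client.

```python
# Minimal sketch of why "hashpower is not law" from a node's point of view:
# a user's node applies its own validity rules first, and only then considers
# how much work is behind a chain. (Rules and numbers are hypothetical.)

MAX_BLOCK_SIZE = 1_000_000     # the rule this node's user has agreed to
MAX_BLOCK_REWARD = 12.5

def is_valid(block) -> bool:
    return (block["size"] <= MAX_BLOCK_SIZE
            and block["reward"] <= MAX_BLOCK_REWARD)

def best_chain(chains):
    """Pick the most-work chain among chains whose every block is valid."""
    valid = [c for c in chains if all(is_valid(b) for b in c["blocks"])]
    return max(valid, key=lambda c: c["total_work"], default=None)

minority_chain = {"total_work": 10, "blocks": [{"size": 900_000, "reward": 12.5}]}
majority_chain = {"total_work": 90, "blocks": [{"size": 8_000_000, "reward": 12.5}]}

# 90% of hashpower is behind the rule-breaking chain, but this node ignores it:
print(best_chain([minority_chain, majority_chain]) is minority_chain)  # True
```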
Many people often argue against the use of public blockchains for applications that involve real-world assets or anything with counterparty risk. The critiques are either total, saying that there is no point in implementing such use cases on public blockchains, or partial, saying that while there may be advantages to storing the data on a public chain, the business logic should be executed off chain. The argument usually used is that in such applications, points of trust exist already - there is someone who owns the physical assets that back the on-chain permissioned assets, and that someone could always choose to run away with the assets or be compelled to freeze them by a government or bank, and so managing the digital representations of these assets on a blockchain is like paying for a reinforced steel door for one's house when the window is open. Instead, such systems should use private chains, or even traditional server-based solutions, perhaps adding bits and pieces of cryptography to improve auditability, and thereby save on the inefficiencies and costs of putting everything on a blockchain. The arguments above are both flawed in their pure forms, and they are flawed in a similar way.
While it is theoretically possible for miners to switch 99% of their hashpower to a chain with new rules (to make an example where this is uncontroversially bad, suppose that they are increasing the block reward), and even spawn-camp the old chain to make it permanently useless, and it is also theoretically possible for a centralized manager of an asset-backed currency to cease honoring one digital token, make a new digital token with the same balances as the old token except with one particular account's balance reduced to zero, and start honoring the new token, in practice those things are both quite hard to do. In the first case, users will have to realize that something is wrong with the existing chain, agree that they should go to the new chain that the miners are now mining on, and download the software that accepts the new rules. In the second case, all clients and applications that depend on the original digital token will break, users will need to update their clients to switch to the new digital token, and smart contracts with no capacity to look to the outside world and see that they need to update will break entirely.
In the middle of all this, opponents of the switch can create a fear-uncertainty-and-doubt campaign to try to convince people that maybe they shouldn't update their clients after all, or update their client to some third set of rules (eg. changing proof of work), and this makes implementing the switch even more difficult. Hence, we can say that in both cases, even though there theoretically are centralized or quasi-centralized parties that could force a transition from state A to state B, where state B is disagreeable to users but preferable to the centralized parties, doing so requires breaking through a hard coordination problem. Coordination problems are everywhere in society and are often a bad thing - while it would be better for most people if the English language got rid of its highly complex and irregular spelling system and made a phonetic one, or if the United States switched to metric, or if we could immediately drop all prices and wages by ten percent in the event of a recession, in practice this requires everyone to agree on the switch at the same time, and this is often very very hard.
With blockchain applications, however, we are doing something different: we are using coordination problems to our advantage, using the friction that coordination problems create as a bulwark against malfeasance by centralized actors. We can build systems that have property X, and we can guarantee that they will preserve property X to a high degree because changing the rules from X to not-X would require a whole bunch of people to agree to update their software at the same time. Even if there is an actor that could force the change, doing so would be hard. This is the kind of security that you gain from client-side validation of blockchain consensus rules. Note that this kind of security relies on the decentralization of users specifically. Even if there is only one miner in the world, there is still a difference between a cryptocurrency mined by that miner and a PayPal-like centralized system.
In the latter case, the operator can choose to arbitrarily change the rules, freeze people's money, offer bad service, jack up their fees or do a whole host of other things, and the coordination problems are in the operator's favor, as such systems have substantial network effects and so very many users would have to agree at the same time to switch to a better system. In the former case, client-side validation means that many attempts at mischief that the miner might want to engage in are by default rejected, and the coordination problem now works in the users' favor.
Note that the arguments above do NOT, by themselves, imply that it is a bad idea for miners to be the principal actors coordinating and deciding the block size (or in Ethereum's case, the gas limit). It may well be the case that, in the specific case of the block size/gas limit, "government by coordinated miners with aligned incentives" is the optimal approach for deciding this one particular policy parameter, perhaps because the risk of miners abusing their power is lower than the risk that any specific chosen hard limit will prove wildly inappropriate for market conditions a decade after the limit is set. However, there is nothing unreasonable about saying that government-by-miners is the best way to decide one policy parameter, and at the same time saying that for other parameters (eg. block reward) we want to rely on client-side validation to ensure that miners are constrained. This is the essence of engineering decentralized institutions: it is about strategically using coordination problems to ensure that systems continue to satisfy certain desired properties. The arguments above also do not imply that it is always optimal to try to put everything onto a blockchain even for services that are trust-requiring.
There generally are at least some gains to be made by running more business logic on a blockchain, but they are often much smaller than the losses to efficiency or privacy. And this is ok; the blockchain is not the best tool for every task. What the arguments above do imply, though, is that if you are building a blockchain-based application that contains many centralized components out of necessity, then you can make substantial further gains in trust-minimization by giving users a way to access your application through a regular blockchain client (eg. in the case of Ethereum, this might be Mist, Parity, Metamask or Status), instead of getting them to use a web interface that you personally control. Theoretically, the benefits of user-side validation are optimized if literally every user runs an independent "ideal full node" - a node that accepts all blocks that follow the protocol rules that everyone agreed to when creating the system, and rejects all blocks that do not. In practice, however, this involves asking every user to process every transaction run by everyone in the network, which is clearly untenable, especially keeping in mind the rapid growth of smartphone users in the developing world.
There are two ways out here. The first is that we can realize that while it is optimal from the point of view of the above arguments that everyone runs a full node, it is certainly not required. Arguably, any major blockchain running at full capacity will have already reached the point where it will not make sense for "the common people" to expend a fifth of their hard drive space to run a full node, and so the remaining users are hobbyists and businesses. As long as there is a fairly large number of them, and they come from diverse backgrounds, the coordination problem of getting these users to collude will still be very hard. Second, we can rely on strong light client technology.
There are two levels of "light clients" that are generally possible in blockchain systems. The first, weaker, kind of light client simply convinces the user, with some degree of economic assurance, that they are on the chain that is supported by the majority of the network. This can be done much more cheaply than verifying the entire chain: in proof of work schemes, clients only need to verify nonces, and in proof of stake schemes they only need to verify signed certificates that state "either the root hash of the state is what I say it is, or you can publish this certificate into the main chain to delete a large amount of my money". Once the light client verifies a root hash, it can use Merkle trees to verify any specific piece of data that it might want to verify. The second level is a "nearly fully verifying" light client. This kind of client doesn't just try to follow the chain that the majority follows; rather, it also tries to follow only chains that follow all the rules.
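To illustrate that Merkle-tree step, here is a toy sketch of Merkle-branch verification. It assumes a power-of-two number of leaves and a simple binary tree; real chains hash leaves and nodes with more structure than this (eg. Ethereum's Patricia tries).

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_branch(leaves, index):
    """Sibling hashes on the path from leaf `index` up to the root."""
    level = [h(leaf) for leaf in leaves]
    branch = []
    while len(level) > 1:
        branch.append(level[index ^ 1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return branch

def verify(leaf, index, branch, root):
    """What the light client runs: O(log n) hashes, no full chain needed."""
    node = h(leaf)
    for sibling in branch:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

txs = [b"tx0", b"tx1", b"tx2", b"tx3"]
root = merkle_root(txs)
proof = merkle_branch(txs, 2)
print(verify(b"tx2", 2, proof, root))        # True
print(verify(b"forged tx", 2, proof, root))  # False
```

The point is that once the light client trusts the root hash, each individual piece of data costs only a logarithmic number of hashes to check, rather than reprocessing the whole chain.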
This is done by a combination of strategies; the simplest to explain is that a light client can work together with specialized nodes (credit to Gavin Wood for coming up with the name "fishermen") whose purpose is to look for blocks that are invalid and generate "fraud proofs", short messages that essentially say "Look! This block has a flaw over here!". Light clients can then verify that specific part of a block and check if it's actually invalid. If a block is found to be invalid, it is discarded; if a light client does not hear any fraud proofs for a given block for a few minutes, then it assumes that the block is probably legitimate. There's a bit more complexity involved in handling the case where the problem is not data that is bad, but rather data that is missing, but in general it is possible to get quite close to catching all possible ways that miners or validators can violate the rules of the protocol.
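A rough sketch of the light client's side of this accept/reject logic might look like the following; the challenge window, field names and the toy fragment checker are all hypothetical, and data-availability handling is omitted.

```python
# Toy sketch of the fisherman / fraud-proof flow described above, from the
# light client's point of view. Everything here is illustrative only.

CHALLENGE_WINDOW = 5 * 60  # seconds to wait for fraud proofs

def decide(block, fraud_proofs, verify_fragment, received_at, now):
    """Accept, reject, or keep waiting on a block the client cannot fully verify."""
    for proof in fraud_proofs:
        # Each fraud proof points at one specific fragment of the block;
        # the light client re-checks only that fragment.
        if verify_fragment(block, proof):
            return "reject"        # a fisherman demonstrated a rule violation
    if now - received_at < CHALLENGE_WINDOW:
        return "wait"              # still inside the challenge window
    return "accept"                # probably legitimate: nobody could prove it invalid

# Tiny simulation with a fake fragment checker:
bad_block = {"claimed_balance": 100, "actual_balance": 60}
checker = lambda block, proof: block[proof["field"]] != block["actual_balance"]
print(decide(bad_block, [{"field": "claimed_balance"}], checker, received_at=0, now=10))  # reject
print(decide({"claimed_balance": 60, "actual_balance": 60}, [], checker, 0, 10))          # wait
print(decide({"claimed_balance": 60, "actual_balance": 60}, [], checker, 0, 600))         # accept
```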
Note that in order for a light client to be able to efficiently validate a set of application rules, those rules must be executed inside of consensus - that is, they must be either part of the protocol or part of a mechanism executing inside the protocol (like a smart contract). This is a key argument in favor of using the blockchain for both data storage and business logic execution, as opposed to just data storage.
These light client techniques are imperfect, in that they do rely on assumptions about network connectivity and the number of other light clients and fishermen that are in the network. But it is actually not crucial for them to work 100% of the time for 100% of validators. Rather, all that we want is to create a situation where any attempt by a hostile cartel of miners/validators to push invalid blocks without user consent will cause a large amount of headaches for lots of people and ultimately require everyone to update their software if they want to continue to synchronize with the invalid chain. As long as this is satisfied, we have achieved the goal of security through coordination frictions.
Over the past week, an article has been floating around about a company that lost $25 million when a finance worker was convinced to send a bank wire to a scammer pretending to be the CFO... over what appears to have been a very convincing deepfaked video call. Deepfakes (ie. AI-generated fake audio and video) are appearing increasingly often both in the crypto space and elsewhere. Over the past few months, deepfakes of me have been used to advertise all kinds of scams, as well as dog coins. The quality of the deepfakes is rapidly improving: while the deepfakes of 2020 were embarrassingly obvious and bad, those from the last few months are getting increasingly difficult to distinguish.
Someone who knows me well could still identify the recent video of me shilling a dog coin as a fake because it has me saying "let's f***ing go" whereas I've only ever used "LFG" to mean "looking for group", but people who have only heard my voice a few times could easily be convinced. Security experts to whom I mentioned the above $25 million theft uniformly confirm that it was an exceptional and embarrassing failure of enterprise operational security on multiple levels: standard practice is to require several levels of sign-off before a transfer anywhere close to that size can be approved.
But even still, the fact remains that as of 2024, an audio or even video stream of a person is no longer a secure way of authenticating who they are. This raises the question: what is? Cryptographic methods alone are not the answer. Being able to securely authenticate people is valuable to all kinds of people in all kinds of situations: individuals recovering their social recovery or multisig wallets, enterprises approving business transactions, individuals approving large transactions for personal use (eg. to invest into a startup, buy a house, send remittances) whether with crypto or with fiat, and even family members needing to authenticate each other in emergencies. So it's really important to have a good solution that can survive the coming era of relatively easy deepfakes. One answer to this question that I often hear in crypto circles is: "you can authenticate yourself by providing a cryptographic signature from an address attached to your ENS / proof of humanity profile / public PGP key". This is an appealing answer. However, it completely misses the point of why involving other people when signing off on transactions is useful in the first place.
Suppose that you are an individual with a personal multisig wallet, and you are sending off a transaction that you want some co-signers to approve. Under what circumstances would they approve it? If they're confident that you're the one who actually wants the transfer to happen. If it's a hacker who stole your key, or a kidnapper, they would not approve. In an enterprise context, you generally have more layers of defense; but even still, an attacker could potentially impersonate a manager not just for the final request, but also for the earlier stages in the approval process. They may even hijack a legitimate request-in-progress by providing the wrong address. And so in many cases, the other signers accepting that you are you if you sign with your key kills the whole point: it turns the entire contract into a 1-of-1 multisig where someone needs to only grab control of your single key in order to steal the funds! This is where we get to one answer that actually makes some sense: security questions.
Suppose that someone texts you claiming to be a particular person who is your friend. They are texting from an account you have never seen before, and they are claiming to have lost all of their devices. How do you determine if they are who they say they are? There's an obvious answer: ask them things that only they would know about their life. These should be things that:

- You know
- You expect them to remember
- The internet does not know
- Are difficult to guess
- Ideally, even someone who has hacked corporate and government databases does not know

The natural thing to ask them about is shared experiences. Possible examples include:

- When the two of us last saw each other, what restaurant did we eat at for dinner, and what food did you have?
- Which of our friends made that joke about an ancient politician? And which politician was it?
- Which movie did we recently watch that you did not like?
- You suggested last week that I chat with ____ about the possibility of them helping us with ____ research?

Actual example of a security question that someone recently used to authenticate me.
The more unique your question is, the better. Questions that are right on the edge where someone has to think for a few seconds and might even forget the answer are good: but if the person you're asking does claim to have forgotten, make sure to ask them three more questions. Asking about "micro" details (what someone liked or disliked, specific jokes, etc) is often better than "macro" details, because the former are generally much more difficult for third parties to accidentally be able to dig up (eg. if even one person posted a photo of the dinner on Instagram, modern LLMs may well be fast enough to catch that and provide the location in real time). If your question is potentially guessable (in the sense that there are only a few potential options that make sense), stack up the entropy by adding another question. People will often stop engaging in security practices if they are dull and boring, so it's healthy to make security questions fun! They can be a way to remember positive shared experiences. And they can be an incentive to actually have those experiences in the first place.
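As a back-of-the-envelope illustration of the "stack up the entropy" point above (the questions and option counts here are invented, not a recommendation of specific numbers):

```python
from math import log2

# If any one question is guessable, combining roughly independent questions
# multiplies the attacker's work. All numbers below are made-up illustrations.
questions = {
    "which restaurant we last ate at": 8,    # plausible options an insider might guess
    "which dish you ordered": 12,
    "who made the joke, and about which politician": 30,
}

total_options = 1
for q, options in questions.items():
    print(f"{q}: ~{log2(options):.1f} bits")
    total_options *= options

print(f"combined: ~{log2(total_options):.1f} bits "
      f"(1 in {total_options:,} chance of guessing all three)")
```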
No single security strategy is perfect, and so it's always best to stack together multiple techniques:

- Pre-agreed code words: when you're together, intentionally agree on a shared code word that you can later use to authenticate each other. Perhaps even agree on a duress key: a word that you can innocently insert into a sentence that will quietly signal to the other side that you're being coerced or threatened. This word should be common enough that it will feel natural when you use it, but rare enough that you won't accidentally insert it into your speech.
- When someone is sending you an ETH address, ask them to confirm it on multiple channels (eg. Signal and Twitter DM, on the company website, or even through a mutual acquaintance).
- Guard against man-in-the-middle attacks: Signal "safety numbers", Telegram emojis and similar features are all good to understand and watch out for.
- Daily limits and delays: simply impose delays on highly consequential and irreversible actions. This can be done either at policy level (pre-agree with signers that they will wait for N hours or days before signing) or at code level (impose limits and delays in smart contract code); a rough sketch of the code-level version follows below.
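Here is a rough sketch of the "daily limits and delays" policy from that last item; the limit, delay and wallet structure are hypothetical stand-ins for logic that would really live in a wallet's smart contract.

```python
from dataclasses import dataclass, field

DAILY_LIMIT = 10.0          # amount that can move without any waiting period
DELAY_SECONDS = 48 * 3600   # larger transfers must be queued this long

@dataclass
class Wallet:
    spent_today: float = 0.0
    queue: dict = field(default_factory=dict)   # request id -> (amount, queued_at)

    def request(self, req_id, amount, now):
        if self.spent_today + amount <= DAILY_LIMIT:
            self.spent_today += amount
            return "executed immediately (within daily limit)"
        self.queue[req_id] = (amount, now)
        return "queued: large transfer must wait out the delay"

    def execute_queued(self, req_id, now):
        amount, queued_at = self.queue[req_id]
        if now - queued_at < DELAY_SECONDS:
            return "too early: the delay gives co-signers time to notice and cancel"
        del self.queue[req_id]
        return f"executed {amount} after delay"

w = Wallet()
print(w.request("r1", 5.0, now=0))
print(w.request("r2", 500.0, now=0))
print(w.execute_queued("r2", now=3600))
print(w.execute_queued("r2", now=50 * 3600))
```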
A potential sophisticated attack: an attacker impersonates an executive and a grantee at multiple steps of an approval process.

Security questions and delays can both guard against this; it's probably better to use both. Security questions are nice because, unlike so many other techniques that fail because they are not human-friendly, security questions build off of information that human beings are naturally good at remembering. I have used security questions for years, and it is a habit that actually feels very natural and not awkward, and is worth including into your workflow - in addition to your other layers of protection. Note that "individual-to-individual" security questions as described above are a very different use case from "enterprise-to-individual" security questions, such as when you call your bank to reactivate your credit card after it got deactivated for the 17th time after you travel to a different country, and once you get past the 40-minute queue of annoying music a bank employee appears and asks you for your name, your birthday and maybe your last three transactions. The kinds of questions that an individual knows the answers to are very different from what an enterprise knows the answers to. Hence, it's worth thinking about these two cases quite separately.
Each person's situation is unique, and so the kinds of unique shared information that you have with the people you might need to authenticate with differs for different people. It's generally better to adapt the technique to the people, and not the people to the technique. A technique does not need to be perfect to work: the ideal approach is to stack together multiple techniques at the same time, and choose the techniques that work best for you. In a post-deepfake world, we do need to adapt our strategies to the new reality of what is now easy to fake and what remains difficult to fake, but as long as we do, staying secure continues to be quite possible.
The promise and challenges of crypto + AI applications
Many people over the years have asked me a similar question: what are the intersections between crypto and AI that I consider to be the most fruitful? It's a reasonable question: crypto and AI are the two main deep (software) technology trends of the past decade, and it just feels like there must be some kind of connection between the two. It's easy to come up with synergies at a superficial vibe level: crypto decentralization can balance out AI centralization, AI is opaque and crypto brings transparency, AI needs data and blockchains are good for storing and tracking data. But over the years, when people would ask me to dig a level deeper and talk about specific applications, my response has been a disappointing one: "yeah there's a few things but not that much".
In the last three years, with the rise of much more powerful AI in the form of modern LLMs, and the rise of much more powerful crypto in the form of not just blockchain scaling solutions but also ZKPs, FHE, and (two-party and N-party) MPC, I am starting to see this change. There are indeed some promising applications of AI inside of blockchain ecosystems, or AI together with cryptography, though it is important to be careful about how the AI is applied. A particular challenge is: in cryptography, open source is the only way to make something truly secure, but in AI, a model (or even its training data) being open greatly increases its vulnerability to adversarial machine learning attacks. This post will go through a classification of different ways that crypto + AI could intersect, and the prospects and challenges of each category.

A high-level summary of crypto+AI intersections from a uETH blog post.

But what does it take to actually realize any of these synergies in a concrete application?
The four major categories

AI is a very broad concept: you can think of "AI" as being the set of algorithms that you create not by specifying them explicitly, but rather by stirring a big computational soup and putting in some kind of optimization pressure that nudges the soup toward producing algorithms with the properties that you want. This description should definitely not be taken dismissively: it includes the process that created us humans in the first place! But it does mean that AI algorithms have some common properties: their ability to do things that are extremely powerful, together with limits in our ability to know or understand what's going on under the hood.
There are many ways to categorize AI; for the purposes of this post, which talks about interactions between AI and blockchains (which have been described as a platform for creating "games"), I will categorize it as follows:

- AI as a player in a game [highest viability]: AIs participating in mechanisms where the ultimate source of the incentives comes from a protocol with human inputs.
- AI as an interface to the game [high potential, but with risks]: AIs helping users to understand the crypto world around them, and to ensure that their behavior (ie. signed messages and transactions) matches their intentions and they do not get tricked or scammed.
- AI as the rules of the game [tread very carefully]: blockchains, DAOs and similar mechanisms directly calling into AIs. Think eg. "AI judges".
- AI as the objective of the game [longer-term but intriguing]: designing blockchains, DAOs and similar mechanisms with the goal of constructing and maintaining an AI that could be used for other purposes, using the crypto bits either to better incentivize training or to prevent the AI from leaking private data or being misused.

Let us go through these one by one.
AI as a player in a game

This is actually a category that has existed for nearly a decade, at least since on-chain decentralized exchanges (DEXes) started to see significant use. Any time there is an exchange, there is an opportunity to make money through arbitrage, and bots can do arbitrage much better than humans can. This use case has existed for a long time, even with much simpler AIs than what we have today, but ultimately it is a very real AI + crypto intersection. More recently, we have seen MEV arbitrage bots often exploiting each other. Any time you have a blockchain application that involves auctions or trading, you are going to have arbitrage bots. But AI arbitrage bots are only the first example of a much bigger category, which I expect will soon start to include many other applications. Meet AIOmen, a demo of a prediction market where AIs are players:

Prediction markets have been a holy grail of epistemics technology for a long time; I was excited about using prediction markets as an input for governance ("futarchy") back in 2014, and played around with them extensively in the last election as well as more recently.
But so far prediction markets have not taken off too much in practice, and there is a series of commonly given reasons why: the largest participants are often irrational, people with the right knowledge are not willing to take the time and bet unless a lot of money is involved, markets are often thin, etc. One response to this is to point to ongoing UX improvements in Polymarket or other new prediction markets, and hope that they will succeed where previous iterations have failed. After all, the story goes, people are willing to bet tens of billions on sports, so why wouldn't people throw in enough money betting on US elections or LK99 that it starts to make sense for the serious players to start coming in? But this argument must contend with the fact that, well, previous iterations have failed to get to this level of scale (at least compared to their proponents' dreams), and so it seems like you need something new to make prediction markets succeed.
And so a different response is to point to one specific feature of prediction market ecosystems that we can expect to see in the 2020s that we did not see in the 2010s: the possibility of ubiquitous participation by AIs. AIs are willing to work for less than $1 per hour, and have the knowledge of an encyclopedia - and if that's not enough, they can even be integrated with real-time web search capability. If you make a market, and put up a liquidity subsidy of $50, humans will not care enough to bid, but thousands of AIs will easily swarm all over the question and make the best guess they can. The incentive to do a good job on any one question may be tiny, but the incentive to make an AI that makes good predictions in general may be in the millions.
Note that potentially, you don't even need the humans to adjudicate most questions: you can use a multi-round dispute system similar to Augur or Kleros, where AIs would also be the ones participating in earlier rounds. Humans would only need to respond in those few cases where a series of escalations have taken place and large amounts of money have been committed by both sides.
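A toy sketch of what such an escalation game could look like, with AIs answering the cheap early rounds and humans only in the final round; the round structure, bond sizes and resolver names are invented for illustration, not a description of Augur or Kleros.

```python
def resolve(question, resolvers, wants_to_dispute):
    """
    resolvers: progressively more expensive resolvers, eg. [cheap_ai, strong_ai, human_jury];
               only the last one involves humans.
    wants_to_dispute(answer, bond): does anyone stake `bond` to appeal this answer?
    """
    bond = 1
    for i, resolver in enumerate(resolvers):
        answer = resolver(question)
        is_last = (i == len(resolvers) - 1)
        if is_last or not wants_to_dispute(answer, bond):
            return answer          # nobody paid to escalate, or we reached the human round
        bond *= 10                 # each appeal must commit 10x more money

cheap_ai  = lambda q: "yes"        # pretend the cheap model answers wrongly
strong_ai = lambda q: "no"
humans    = lambda q: "no"
print(resolve("Did event X happen?", [cheap_ai, strong_ai, humans],
              wants_to_dispute=lambda ans, bond: ans == "yes" and bond <= 10))
# -> "no": one escalation was funded, the stronger AI corrected the answer,
#    and the human jury never had to get involved.
```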
This is a powerful primitive, because once a "prediction market" can be made to work on such a microscopic scale, you can reuse the "prediction market" primitive for many other kinds of questions:

- Is this social media post acceptable under [terms of use]?
- What will happen to the price of stock X?
- Is this account that is currently messaging me actually Elon Musk?
- Is this work submission on an online task marketplace acceptable?

You may notice that a lot of these ideas go in the direction of what I called "info defense" in my writings on "d/acc". Broadly defined, the question is: how do we help users tell apart true and false information and detect scams, without empowering a centralized authority to decide right and wrong who might then abuse that position? At a micro level, the answer can be "AI".
But at a macro level, the question is: who builds the AI? AI is a reflection of the process that created it, and so cannot avoid having biases. Hence, there is a need for a higher-level game which adjudicates how well the different AIs are doing, where AIs can participate as players in the game. This usage of AI, where AIs participate in a mechanism where they get ultimately rewarded or penalized (probabilistically) by an on-chain mechanism that gathers inputs from humans (call it decentralized market-based RLHF?), is something that I think is really worth looking into. Now is the right time to look into use cases like this more, because blockchain scaling is finally succeeding, making "micro-" anything finally viable on-chain when it was often not before. A related category of applications goes in the direction of highly autonomous agents using blockchains to better cooperate, whether through payments or through using smart contracts to make credible commitments.
AI as an interface to the game.
One idea that I brought up in my writings on d/acc is that there is a market opportunity to write user-facing software that would protect users' interests by interpreting and identifying dangers in the online world that the user is navigating. One already-existing example of this is Metamask's scam detection feature. Another example is the Rabby wallet's simulation feature, which shows the user the expected consequences of the transaction that they are about to sign.

Rabby explaining to me the consequences of signing a transaction to trade all of my "BITCOIN" (the ticker of an ERC20 memecoin whose full name is apparently "HarryPotterObamaSonic10Inu") for ETH.
Edit 2024.02.02: an earlier version of this post referred to this token as a scam trying to impersonate bitcoin. It is not; it is a memecoin. Apologies for the confusion. Potentially, these kinds of tools could be super-charged with AI. AI could give a much richer human-friendly explanation of what kind of dapp you are participating in, the consequences of more complicated operations that you are signing, whether or not a particular token is genuine (eg. BITCOIN is not just a string of characters, it's normally the name of a major cryptocurrency, which is not an ERC20 token and which has a price waaaay higher than $0.045, and a modern LLM would know that), and so on. There are projects starting to go all the way out in this direction (eg. the LangChain wallet, which uses AI as a primary interface). My own opinion is that pure AI interfaces are probably too risky at the moment, as they increase the risk of other kinds of errors, but AI complementing a more conventional interface is getting very viable.
There is one particular risk worth mentioning. I will get into this more in the section on "AI as rules of the game" below, but the general issue is adversarial machine learning: if a user has access to an AI assistant inside an open-source wallet, the bad guys will have access to that AI assistant too, and so they will have unlimited opportunity to optimize their scams to not trigger that wallet's defenses. All modern AIs have bugs somewhere, and it's not too hard for a training process, even one with only limited access to the model, to find them. This is where "AIs participating in on-chain micro-markets" works better: each individual AI is vulnerable to the same risks, but you're intentionally creating an open ecosystem of dozens of people constantly iterating and improving them on an ongoing basis. Furthermore, each individual AI is closed: the security of the system comes from the openness of the rules of the game, not the internal workings of each player.
Summary: AI can help users understand what's going on in plain language, it can serve as a real-time tutor, it can protect users from mistakes, but be careful when trying to use it directly against malicious misinformers and scammers.

AI as the rules of the game

Now, we get to the application that a lot of people are excited about, but that I think is the most risky, and where we need to tread the most carefully: what I call AIs being part of the rules of the game. This ties into excitement among mainstream political elites about "AI judges" (eg. see this article on the website of the "World Government Summit"), and there are analogs of these desires in blockchain applications. If a blockchain-based smart contract or a DAO needs to make a subjective decision (eg. is a particular work product acceptable in a work-for-hire contract? Which is the right interpretation of a natural-language constitution like the Optimism Law of Chains?), could you make an AI simply be part of the contract or DAO to help enforce these rules? This is where adversarial machine learning is going to be an extremely tough challenge.
The basic two-sentence argument why is as follows: if an AI model that plays a key role in a mechanism is closed, you can't verify its inner workings, and so it's no better than a centralized application. If the AI model is open, then an attacker can download and simulate it locally, and design heavily optimized attacks to trick the model, which they can then replay on the live network.

Adversarial machine learning example. Source: researchgate.net

Now, frequent readers of this blog (or denizens of the cryptoverse) might be getting ahead of me already, and thinking: but wait! We have fancy zero knowledge proofs and other really cool forms of cryptography. Surely we can do some crypto-magic, and hide the inner workings of the model so that attackers can't optimize attacks, but at the same time prove that the model is being executed correctly, and was constructed using a reasonable training process on a reasonable set of underlying data! Normally, this is exactly the type of thinking that I advocate both on this blog and in my other writings.
But in the case of AI-related computation, there are two major objections:

- Cryptographic overhead: it's much less efficient to do something inside a SNARK (or MPC or...) than it is to do it "in the clear". Given that AI is very computationally-intensive already, is doing AI inside of cryptographic black boxes even computationally viable?
- Black-box adversarial machine learning attacks: there are ways to optimize attacks against AI models even without knowing much about the model's internal workings. And if you hide too much, you risk making it too easy for whoever chooses the training data to corrupt the model with poisoning attacks.

Both of these are complicated rabbit holes, so let us get into each of them in turn.

Cryptographic overhead

Cryptographic gadgets, especially general-purpose ones like ZK-SNARKs and MPC, have a high overhead. An Ethereum block takes a few hundred milliseconds for a client to verify directly, but generating a ZK-SNARK to prove the correctness of such a block can take hours. The typical overhead of other cryptographic gadgets, like MPC, can be even worse.
AI computation is expensive already: the most powerful LLMs can output individual words only a little bit faster than human beings can read them, not to mention the often multimillion-dollar computational costs of training the models. The difference in quality between top-tier models and the models that try to economize much more on training cost or parameter count is large. At first glance, this is a very good reason to be suspicious of the whole project of trying to add guarantees to AI by wrapping it in cryptography. Fortunately, though, AI is a very specific type of computation, which makes it amenable to all kinds of optimizations that more "unstructured" types of computation like ZK-EVMs cannot benefit from.
Let us examine the basic structure of an AI model. Usually, an AI model mostly consists of a series of matrix multiplications interspersed with per-element non-linear operations such as the ReLU function (y = max(x, 0)). Asymptotically, matrix multiplications take up most of the work: multiplying two N*N matrices takes \(O(N^{2.8})\) time, whereas the number of non-linear operations is much smaller. This is really convenient for cryptography, because many forms of cryptography can do linear operations (which matrix multiplications are, at least if you encrypt the model but not the inputs to it) almost "for free". If you are a cryptographer, you've probably already heard of a similar phenomenon in the context of homomorphic encryption: performing additions on encrypted ciphertexts is really easy, but multiplications are incredibly hard and we did not figure out any way of doing it at all with unlimited depth until 2009. For ZK-SNARKs, the equivalent is protocols like this one from 2013, which show a less than 4x overhead on proving matrix multiplications. Unfortunately, the overhead on the non-linear layers still ends up being significant, and the best implementations in practice show overhead of around 200x.
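As a rough illustration of why the matrix multiplications dominate, here is a back-of-the-envelope operation count for a hypothetical small MLP; the layer sizes are made up.

```python
# Rough operation count for a toy MLP forward pass on one input vector,
# to illustrate why matrix multiplications dominate. Sizes are invented.

layer_sizes = [(784, 1024), (1024, 1024), (1024, 10)]  # (inputs, outputs) per layer

matmul_ops = 0      # multiply-adds inside matrix-vector multiplications
nonlinear_ops = 0   # per-element ReLU evaluations

for n_in, n_out in layer_sizes:
    matmul_ops += n_in * n_out      # one multiply-add per weight
    nonlinear_ops += n_out          # one ReLU per output element

print(f"matmul multiply-adds: {matmul_ops:,}")      # ~1.86 million
print(f"nonlinear (ReLU) ops: {nonlinear_ops:,}")   # 2,058
print(f"ratio: {matmul_ops / nonlinear_ops:.0f}x")  # roughly 900x
```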
But there is hope that this can be greatly decreased through further research; see this presentation from Ryan Cao for a recent approach based on GKR, and my own simplified explanation of how the main component of GKR works. But for many applications, we don't just want to prove that an AI output was computed correctly, we also want to hide the model. There are naive approaches to this: you can split up the model so that a different set of servers redundantly store each layer, and hope that some of the servers leaking some of the layers doesn't leak too much data. But there are also surprisingly effective forms of specialized multi-party computation.

A simplified diagram of one of these approaches, keeping the model private but making the inputs public.

If we want to keep the model and the inputs private, we can, though it gets a bit more complicated: see pages 8-9 of the paper.
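To make the "linear operations are almost free" intuition concrete, here is a toy two-party sketch in numpy of a secret-shared weight matrix applied to a public input; real MPC protocols, including the one referenced above, are far more involved than this.

```python
import numpy as np

# Toy 2-party additive secret sharing of a linear layer: W is private,
# the input x is public. Sizes and values are made up for illustration.
rng = np.random.default_rng(0)

W = rng.standard_normal((4, 3))       # the private model weights
x = rng.standard_normal(3)            # a public input vector

# Split W into two additive shares: W = W_share_1 + W_share_2.
W_share_1 = rng.standard_normal(W.shape)
W_share_2 = W - W_share_1

# Each party multiplies its own share by the public input locally --
# no communication is needed for the linear part.
y_share_1 = W_share_1 @ x
y_share_2 = W_share_2 @ x

# The shares of the output sum to the true linear-layer output:
print(np.allclose(y_share_1 + y_share_2, W @ x))  # True

# A per-element non-linearity like ReLU does NOT distribute over shares
# (max(a, 0) + max(b, 0) != max(a + b, 0) in general), which is why the
# non-linear layers need a dedicated interactive sub-protocol.
```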
In both cases, the moral of the story is the same: the greatest part of an AI computation is matrix multiplications, for which it is possible to make very efficient ZK-SNARKs or MPCs (or even FHE), and so the total overhead of putting AI inside cryptographic boxes is surprisingly low. Generally, it's the non-linear layers that are the greatest bottleneck despite their smaller size; perhaps newer techniques like lookup arguments can help.

Black-box adversarial machine learning

Now, let us get to the other big problem: the kinds of attacks that you can do even if the contents of the model are kept private and you only have "API access" to the model. Quoting a paper from 2016: "Many machine learning models are vulnerable to adversarial examples: inputs that are specially crafted to cause a machine learning model to produce an incorrect output. Adversarial examples that affect one model often affect another model, even if the two models have different architectures or were trained on different training sets, so long as both models were trained to perform the same task. An attacker may therefore train their own substitute model, craft adversarial examples against the substitute, and transfer them to a victim model, with very little information about the victim."
Use black-box access to a "target classifier" to train and refine your own locally stored "inferred classifier". Then, locally generate optimized attacks against the inferred classifier. It turns out these attacks will often also work against the original target classifier. Diagram source.

Potentially, you can even create attacks knowing just the training data, even if you have very limited or no access to the model that you are trying to attack. As of 2023, these kinds of attacks continue to be a large problem. To effectively curtail these kinds of black-box attacks, we need to do two things:

- Really limit who or what can query the model and how much. Black boxes with unrestricted API access are not secure; black boxes with very restricted API access may be.
- Hide the training data, while preserving confidence that the process used to create the training data is not corrupted.
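As a toy illustration of the substitute-model attack described in the quote above, here is a sketch with made-up data, where simple linear classifiers stand in for real networks and the attacker only ever sees the target's labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# --- The victim's private "target classifier" (attacker never sees its weights) ---
n, d = 2000, 20
X = rng.standard_normal((n, d))
true_w = rng.standard_normal(d)
y = (X @ true_w > 0).astype(int)
target = LogisticRegression(max_iter=1000).fit(X, y)

def query_target(x):
    """Black-box API access: labels only, no weights or gradients."""
    return target.predict(x)

# --- Step 1: the attacker labels their own data by querying the black box ---
X_att = rng.standard_normal((1000, d))
y_att = query_target(X_att)

# --- Step 2: the attacker trains a local substitute model on those labels ---
substitute = LogisticRegression(max_iter=1000).fit(X_att, y_att)

# --- Step 3: craft adversarial examples against the substitute (FGSM-style) ---
x0 = X_att[y_att == 1][:50]          # inputs the target currently labels as class 1
w = substitute.coef_[0]              # substitute's gradient direction
eps = 0.5
x_adv = x0 - eps * np.sign(w)        # nudge inputs against class 1

# --- Step 4: the attack often transfers back to the original target ---
flipped = (query_target(x_adv) != 1).mean()
print(f"fraction of adversarial inputs that fool the target: {flipped:.0%}")
```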
The project that has done the most on the former is perhaps Worldcoin, of which I analyze an earlier version (among other protocols) at length here. Worldcoin uses AI models extensively at protocol level, to (i) convert iris scans into short "iris codes" that are easy to compare for similarity, and (ii) verify that the thing it's scanning is actually a human being. The main defense that Worldcoin is relying on is the fact that it's not letting anyone simply call into the AI model: rather, it's using trusted hardware to ensure that the model only accepts inputs digitally signed by the orb's camera. This approach is not guaranteed to work: it turns out that you can make adversarial attacks against biometric AI that come in the form of physical patches or jewelry that you can put on your face:

Wear an extra thing on your forehead, and evade detection or even impersonate someone else.
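A minimal sketch of the "only accept inputs signed by trusted hardware" idea follows; the key, names and messages are hypothetical, and a real orb would use asymmetric signatures from a secure element rather than a shared HMAC key.

```python
import hmac, hashlib

DEVICE_KEY = b"hypothetical-secret-provisioned-into-the-capture-device"

def device_sign(image_bytes: bytes) -> bytes:
    """What the trusted camera does at capture time."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).digest()

def run_model_if_authentic(image_bytes: bytes, signature: bytes) -> str:
    """Gate every model query on a valid device signature."""
    expected = hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return "rejected: input was not produced by trusted hardware"
    # ... run the (hidden) iris-code model here ...
    return "accepted: running model on authenticated capture"

capture = b"raw iris scan bytes"
print(run_model_if_authentic(capture, device_sign(capture)))                  # accepted
print(run_model_if_authentic(b"synthetic adversarial image", b"\x00" * 32))   # rejected
```

The point of the gate is that an attacker cannot freely pump millions of crafted inputs through the model to optimize an attack; they are limited to what the physical device will actually sign.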
But the hope is that if you combine all the defenses together, hiding the AI model itself, greatly limiting the number of queries, and requiring each query to somehow be authenticated, you can make adversarial attacks difficult enough that the system could be secure. In the case of Worldcoin, increasing these other defenses could also reduce their dependence on trusted hardware, increasing the project's decentralization. And this gets us to the second part: how can we hide the training data?
This is where "DAOs to democratically govern AI" might actually make sense: we can create an on-chain DAO that governs the process of who is allowed to submit training data (and what attestations are required on the data itself), who is allowed to make queries, and how many, and use cryptographic techniques like MPC to encrypt the entire pipeline of creating and running the AI from each individual user's training input all the way to the final output of each query. This DAO could simultaneously satisfy the highly popular objective of compensating people for submitting data. It is important to re-state that this plan is super-ambitious, and there are a number of ways in which it could prove impractical:

- Cryptographic overhead could still turn out too high for this kind of fully-black-box architecture to be competitive with traditional closed "trust me" approaches.
- It could turn out that there isn't a good way to make the training data submission process decentralized and protected against poisoning attacks.
- Multi-party computation gadgets could break their safety or privacy guarantees due to participants colluding: after all, this has happened with cross-chain cryptocurrency bridges again and again.
One reason why I didn't start this section with more big red warning labels saying "DON'T DO AI JUDGES, THAT'S DYSTOPIAN" is that our society is highly dependent on unaccountable centralized AI judges already: the algorithms that determine which kinds of posts and political opinions get boosted and deboosted, or even censored, on social media. I do think that expanding this trend further at this stage is quite a bad idea, but I don't think there is a large chance that the blockchain community experimenting with AIs more will be the thing that contributes to making it worse. In fact, there are some pretty basic low-risk ways that crypto technology can make even these existing centralized systems better that I am pretty confident in. One simple technique is verified AI with delayed publication: when a social media site makes an AI-based ranking of posts, it could publish a ZK-SNARK proving the hash of the model that generated that ranking. The site could commit to revealing its AI models after eg. a one year delay.
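A rough sketch of the commit-and-delayed-reveal half of that idea follows; the ZK-SNARK that would prove each ranking came from the committed model is omitted, and the names are placeholders.

```python
import hashlib

# Day 0: the site publishes a hash commitment to its ranking model alongside
# the rankings it produces. Day 365: it reveals the model, and anyone can
# check that the revealed model matches what was committed to.

def commit(model_bytes: bytes) -> str:
    return hashlib.sha256(model_bytes).hexdigest()

model_weights = b"...serialized model weights (placeholder)..."
published_commitment = commit(model_weights)

def verify_reveal(revealed_model: bytes, commitment: str) -> bool:
    return commit(revealed_model) == commitment

print(verify_reveal(model_weights, published_commitment))         # True
print(verify_reveal(b"a different model", published_commitment))  # False
```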