Original title: Understanding the Intersection of Crypto and AI
Original author: Lucas Tcheyan
Original compilation: Rhythm Workers, BlockBeats
Table of Contents
Introduction
Explanation of Core Terminology
Artificial Intelligence + Cryptocurrency Panorama
Decentralized Computing
Overview
Decentralized Computing Verticals
General Computing
Secondary Markets
Decentralized Machine Learning Training
Decentralized General Artificial Intelligence
Building a Decentralized Computing Stack for AI Models
Other Decentralized Products
Outlook
Smart Contracts and Zero-Knowledge Machine Learning (zkML)
Zero-Knowledge Machine Learning (zkML)
Infrastructure and Tools
Coprocessors
Applications
Outlook
Artificial Intelligence Agents
Agent Providers
Bitcoin and AI Agents
Outlook
Conclusion
Introduction
The emergence of blockchain is arguably one of the most important advances in the history of computer science. At the same time, the development of artificial intelligence will have, and already has, a profound impact on our world. If blockchain provides a new paradigm for transaction settlement, data storage, and system design, artificial intelligence is a revolution in computing, analysis, and content production. Innovation in both industries is unlocking new use cases that may accelerate the adoption of each in the coming years. This report explores the ongoing integration of cryptocurrency and artificial intelligence, focusing on new use cases that seek to bridge the gap between the two and harness the strengths of both. Specifically, it examines projects building decentralized computing protocols, zero-knowledge machine learning (zkML) infrastructure, and AI agents.
Cryptocurrency provides a permissionless, trustless, composable settlement layer for artificial intelligence. This unlocks use cases such as making hardware more accessible through decentralized computing systems, building AI agents that can perform complex tasks requiring the exchange of value, and developing identity and provenance solutions to combat Sybil attacks and deepfakes. Artificial intelligence brings many of the same benefits to crypto that we have already seen in Web2, including an enhanced user experience (UX) for users and developers through large language models such as ChatGPT and specially trained versions of Copilot, as well as the potential to significantly improve smart contract functionality and automation. Blockchains can also provide the transparent, data-rich environments that AI needs. But blockchains' limited computing power is a major obstacle to integrating AI models directly.
The driving force behind ongoing experimentation and eventual adoption at the intersection of crypto and AI is the same one driving many of crypto's most promising use cases: a permissionless, trustless orchestration layer that better facilitates the transfer of value. Given the enormous potential, players in the field need to understand the fundamental ways in which these two technologies intersect.
Core Ideas:
In the near term (six months to a year), the integration of crypto and AI will be dominated by AI applications that improve developer efficiency, smart contract auditability and security, and user accessibility. These integrations are not crypto-specific, but they enhance the experience of on-chain developers and users.
With high-performance GPUs in severe shortage, decentralized computing products are rolling out GPU offerings tailored to AI, which provides strong support for their adoption.
User experience and regulation remain barriers for decentralized computing customers. However, recent developments in OpenAI and ongoing regulatory scrutiny in the United States highlight the value proposition of permissionless, censorship-resistant, decentralized artificial intelligence networks.
On-chain AI integration, especially smart contracts that can use AI models, requires improvements in zkML and other methods for verifying off-chain computation. A lack of comprehensive tooling, a shortage of developer talent, and high costs remain barriers to adoption.
AI agents are a natural fit for crypto, where users (or the agents themselves) can create wallets to transact with other services, agents, or people; this is not currently possible on traditional financial rails. For wider adoption, additional integrations with non-crypto products are required.
Explanation of Core Terminology:
Artificial intelligence (AI) is the use of computation and machines to imitate human reasoning and problem-solving abilities.
Neural networks are one training method for AI models. They run input data through successive layers of algorithms, refining it until the desired output is produced. Neural networks are made up of equations with weights that can be adjusted to change the output. They can require enormous amounts of data and computation to train so that their outputs are accurate. They are one of the most common ways AI models are developed (ChatGPT, for example, uses a neural network process built on Transformers).
Training is the process by which neural networks and other AI models are developed. It requires large amounts of data to teach a model to correctly interpret inputs and produce accurate outputs. During training, the weights of the model's equations are continuously adjusted until a satisfactory output is produced. Training can be very expensive; ChatGPT, for example, uses tens of thousands of GPUs to process its data. Teams with fewer resources often rely on dedicated compute providers such as Amazon Web Services, Azure, and Google Cloud.
Inference is the actual use of an AI model to obtain an output or result (for example, using ChatGPT to draft an outline for this report). Inference is used both throughout the training process and in the final product. Because of the computation involved, inference can be expensive to run even after training is complete, though it is less computationally intensive than training. (A minimal sketch contrasting training and inference follows these definitions.)
Zero-knowledge proofs (ZKPs) allow a claim to be verified without revealing the underlying information. They serve two main purposes in crypto: (1) privacy and (2) scalability. For privacy, they let users transact without revealing sensitive information, such as how much ETH is in a wallet. For scalability, they let off-chain computation be proven on-chain without the computation having to be re-executed, allowing blockchains and applications to run computations off-chain and then verify them on-chain.
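To make the distinction between training and inference concrete, here is a minimal sketch using PyTorch: a tiny neural network is trained on synthetic data by repeatedly adjusting its weights, and a single inference call is then made with the trained model. The network, data, and hyperparameters are illustrative only.

```python
import torch
import torch.nn as nn

# A tiny neural network: layers of equations whose weights are adjusted during training.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))

# Synthetic data standing in for a real training set.
inputs = torch.randn(256, 4)
targets = inputs.sum(dim=1, keepdim=True)  # the function we want the model to learn

loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Training: repeatedly compare outputs to targets and nudge the weights.
for epoch in range(200):
    optimizer.zero_grad()
    predictions = model(inputs)
    loss = loss_fn(predictions, targets)
    loss.backward()      # compute how each weight should change
    optimizer.step()     # update the weights

# Inference: use the trained model to produce an output for a new input.
with torch.no_grad():
    new_input = torch.tensor([[0.5, -1.0, 2.0, 0.0]])
    print(model(new_input))  # should move toward 1.5 as training converges
```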
Artificial Intelligence + Cryptocurrency Panorama

Projects at the intersection of AI and cryptocurrency are still developing the infrastructure needed to support on-chain AI interactions at scale.
The decentralized computing market is emerging to supply large amounts of physical hardware, primarily GPUs, for training and inference of AI models. These two-sided marketplaces connect those renting out compute with those looking to rent it, facilitating the transfer of value and the verification of computation. Within decentralized computing, several subcategories offering additional functionality are emerging. Beyond the two-sided marketplaces, this report examines machine learning training providers that specialize in verifiable training and fine-tuned outputs, as well as projects that seek to connect computation and model generation to achieve artificial general intelligence, often also referred to as intelligence incentive networks.
zkML is an emerging focus area for projects looking to provide verifiable model outputs on-chain in a cost-effective and timely manner. These projects primarily enable applications to handle heavy compute requests off-chain and then post verifiable outputs on-chain, proving the off-chain work was complete and accurate. zkML is currently both expensive and slow, but it is increasingly used as a solution, as evidenced by the growing number of integrations between zkML providers and the DeFi and gaming applications that want to leverage AI models.
Ample supply of compute and the ability to verify computation on-chain open the door to on-chain AI agents. Agents are models trained to execute requests on behalf of users. Agents offer the opportunity to significantly enhance the on-chain experience, letting users execute complex transactions simply by talking to a chatbot. For now, however, agent projects remain focused on developing the infrastructure and tooling for easy and fast deployment.
Decentralized Computing
Overview
Artificial intelligence requires massive amounts of compute, both for training models and for running inference. Over the past decade, as models have become more complex, compute requirements have grown exponentially. OpenAI, for example, found that between 2012 and 2018 the compute required for its models went from doubling every two years to doubling every three and a half months. This has led to a surge in demand for GPUs, with some crypto miners even repurposing their GPUs to provide cloud computing services. As competition for compute intensifies and costs rise, several projects are using crypto to provide decentralized computing solutions. They offer on-demand compute at competitive prices so that teams can affordably train and run models, though in some cases the trade-off is performance and security.
Demand for state-of-the-art hardware, such as the latest GPUs produced by Nvidia, is high. In September, Tether acquired a stake in the German Bitcoin miner Northern Data, reportedly paying $420 million to acquire 10,000 H100 GPUs, among the most advanced chips used for AI training. Wait times to obtain the best hardware run at least six months, and in many cases longer. Worse, companies are often required to sign long-term contracts for compute capacity they may not even use. This can leave compute sitting idle and unavailable to the market. Decentralized computing systems help address these inefficiencies by creating a secondary market where owners of compute can rent out their excess capacity at competitive prices at a moment's notice, unlocking new supply.
Beyond competitive pricing and accessibility, a key value proposition of decentralized computing is censorship resistance. Cutting-edge AI development is increasingly dominated by large technology companies with unmatched compute and data access. One of the key themes highlighted in the 2023 AI Index annual report is that industry is increasingly outpacing academia in developing AI models, concentrating control in the hands of a small number of technology leaders. This has raised concerns about their outsized influence in shaping the norms and values that underpin AI models, especially after these same companies pushed for regulation to limit AI development outside their control.
Decentralized Computing Verticals
Several decentralized computing models have emerged in recent years, each with its own emphases and trade-offs.
General Computing
Projects such as Akash, io.net, iExec, and Cudos are decentralized computing applications that, in addition to data and general computing solutions, also provide or will soon provide specific computing resources dedicated to AI training and inference.
Akash is currently the only fully open source super cloud platform. It is a PoS network using the Cosmos SDK. Akash’s native token, AKT, is used to secure the network, serve as a form of payment, and incentivize participation. Akash launched its first mainnet in 2020 with a focus on providing a permissionless cloud computing marketplace, initially offering storage and CPU rental services. In June 2023, Akash launched a new testnet focused on GPU, and launched the GPU mainnet in September, allowing users to rent GPUs for AI training and inference.
There are two main actors in the Akash ecosystem: tenants and providers. Tenants are users of the Akash network who want to buy compute. Providers supply the compute. To match tenants with providers, Akash relies on a reverse auction. Tenants submit their compute requirements, specifying conditions such as server location or hardware type along with the amount they are willing to pay. Providers then submit their asking prices, and the lowest bid wins the lease.
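To make the reverse-auction flow concrete, here is a simplified, self-contained sketch; it is not Akash's actual implementation or its deployment format. A tenant posts an order with its requirements and a maximum price, providers submit bids, and the lowest qualifying bid wins the lease.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Order:                       # a tenant's request for compute
    gpu_model: str
    region: str
    max_price: float               # most the tenant will pay per hour

@dataclass
class Bid:                         # a provider's offer to fill the order
    provider: str
    gpu_model: str
    region: str
    price: float                   # asking price per hour

def match(order: Order, bids: List[Bid]) -> Optional[Bid]:
    """Return the cheapest bid that satisfies the order, mimicking a reverse auction."""
    qualifying = [
        b for b in bids
        if b.gpu_model == order.gpu_model
        and b.region == order.region
        and b.price <= order.max_price
    ]
    return min(qualifying, key=lambda b: b.price, default=None)

order = Order(gpu_model="A100", region="us-west", max_price=2.50)
bids = [
    Bid("provider-a", "A100", "us-west", 2.40),
    Bid("provider-b", "A100", "us-west", 1.95),
    Bid("provider-c", "A100", "eu-east", 1.50),   # wrong region, filtered out
]
print(match(order, bids))  # provider-b wins with the lowest qualifying price
```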
Akash validators maintain the integrity of the network. The validator set is currently limited to 100, with plans to gradually increase it over time. Anyone can become a validator by staking more AKT than the validator currently staking the least amount. AKT holders can also delegate their AKT to validators. The network’s transaction fees and block rewards are distributed in AKT. In addition, for each lease, the Akash network charges a handling fee at a rate determined by the community and distributes it to AKT holders.
Secondary Markets
The decentralized computing market aims to fill inefficiencies in the existing computing market. Supply constraints lead companies to stockpile more computing resources than they may need, and supply is further constrained due to the form of contracts with cloud service providers. These customers are locked into long-term contracts even though ongoing use may not be required. Decentralized computing platforms unlock new supply, allowing anyone in the world who needs computing resources to become a provider.
It's unclear whether the surge in demand for GPUs for AI training will translate into sustained usage of the Akash network. Akash has long offered a marketplace for CPUs, for example, providing services similar to centralized alternatives at 70-80% discounts, yet the lower prices have not led to significant adoption. Leasing activity on the network has leveled off, with average compute utilization of just 33%, memory utilization of 16%, and storage utilization of 13% in the second quarter of 2023. While these are impressive metrics for on-chain adoption (for reference, leading storage provider Filecoin had 12.6% storage utilization in Q3 2023), they show that supply continues to outstrip demand for these products.
It has been just over half a year since Akash launched its GPU network, so it is too early to accurately gauge long-term adoption. As a sign of demand, GPU utilization has averaged 44% to date, higher than for CPU, memory, and storage. This is primarily driven by demand for the highest-quality GPUs such as the A100, over 90% of which are already leased out.
Akash's daily spend has also increased, nearly doubling relative to before GPUs were introduced. This is partly attributable to increased usage of other services, especially CPU, but is mostly driven by the new GPU offering.
Pricing is comparable to (or in some cases even more expensive than) centralized competitors such as Lambda Cloud and Vast.ai. The enormous demand for the highest-end GPUs, such as the H100 and A100, means most owners of that hardware have little interest in listing it on a marketplace with competitive pricing.
Despite the initial excitement, barriers to adoption remain (discussed further below). Decentralized computing networks need to do more to generate both demand and supply, and teams are experimenting with how best to attract new users. In early 2024, for example, Akash passed Proposal 240 to increase AKT emissions for GPU suppliers and incentivize more supply, specifically targeting high-end GPUs. Teams are also working on proof-of-concept models to demonstrate the real-time capabilities of their networks to prospective users. Akash is training its own foundation model and has launched chatbot and image-generation products that use Akash GPUs to produce outputs. Similarly, io.net has developed a Stable Diffusion model and is rolling out new network features to better emulate the performance and scale of traditional GPU data centers.
Decentralized Machine Learning Training
In addition to general-purpose computing platforms that can serve AI workloads, a set of dedicated AI GPU providers focused on machine learning model training is emerging. Gensyn, for example, is coordinating compute and hardware to build collective intelligence, with the view that if someone wants to train something and someone is willing to train it, then that training should be allowed to happen.
The protocol has four main actors: submitters, solvers, verifiers, and whistleblowers. Submitters post tasks with training requests to the network. These tasks include the training objective, the model to be trained, and the training data. As part of the submission process, submitters pay an upfront fee to cover the solver's estimated compute costs.
Once submitted, a task is assigned to a solver, who performs the actual training of the model. The solver then submits the completed task to a verifier, who checks that the training was completed correctly. Whistleblowers are responsible for ensuring that verifiers act honestly. To incentivize whistleblowers to participate, Gensyn plans to periodically plant deliberately incorrect proofs and reward the whistleblowers who catch them.
Beyond providing compute for AI workloads, Gensyn's key value proposition is its verification system, which is still under development. Verification is necessary to ensure that external computations performed by GPU providers are correct (i.e., that a user's model is trained the way they intended). Gensyn takes a unique approach to this problem, leveraging novel verification methods called probabilistic proofs of learning, a graph-based pinpointing protocol, and Truebit-style incentive games. This is an optimistic verification model that lets a verifier confirm the solver ran the model correctly without having to fully rerun it, which would be costly and inefficient.
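Gensyn's actual scheme combines those three components, but the underlying idea of optimistic, sampled verification can be sketched simply. In the toy example below, which is illustrative and not Gensyn's protocol, a solver publishes per-step checkpoints and a verifier recomputes only a few randomly sampled steps instead of rerunning the whole training job.

```python
import random

def train_step(weights: float, data_point: float) -> float:
    """Deterministic toy update rule standing in for one step of model training."""
    return weights + 0.1 * (data_point - weights)

def solver_run(data: list) -> list:
    """The solver trains the model and publishes a checkpoint after every step."""
    weights, checkpoints = 0.0, []
    for x in data:
        weights = train_step(weights, x)
        checkpoints.append(weights)
    return checkpoints

def verifier_spot_check(data: list, checkpoints: list, samples: int = 3) -> bool:
    """Instead of rerunning all training, recompute a few randomly sampled steps
    and confirm they reproduce the solver's published checkpoints."""
    for i in random.sample(range(1, len(data)), samples):
        recomputed = train_step(checkpoints[i - 1], data[i])
        if abs(recomputed - checkpoints[i]) > 1e-9:
            return False  # mismatch: a whistleblower could raise a dispute here
    return True

data = [random.random() for _ in range(100)]
checkpoints = solver_run(data)
print(verifier_spot_check(data, checkpoints))  # True when the solver was honest
```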
In addition to its innovative verification approach, Gensyn also claims to be cost-effective relative to centralized alternatives and crypto competitors - offering ML training up to 80% cheaper than AWS, while outperforming similar project Truebit in tests.
It remains to be seen whether these preliminary results can be replicated at scale in decentralized networks. Gensyn hopes to leverage excess computing resources from providers such as small data centers, regular users, and even in the future small mobile devices such as cell phones. However, as the Gensyn team themselves admit, relying on heterogeneous compute providers introduces several new challenges.
For centralized providers like Google Cloud and CoreWeave, compute is expensive while communication between compute (bandwidth and latency) is cheap; these systems are designed to make communication between hardware as fast as possible. Gensyn flips that framework on its head, lowering compute costs by letting anyone in the world supply GPUs, but raising communication costs, since the network must now coordinate compute jobs across decentralized, heterogeneous hardware in far-flung locations. Gensyn has not launched yet, but it demonstrates what may be possible when building decentralized machine learning training protocols.
Decentralized General Artificial Intelligence
Decentralized computing platforms also open up new design possibilities for how artificial intelligence is created. Bittensor is a decentralized computing protocol built on Substrate that attempts to answer the question, "How do we transform artificial intelligence into a collaborative endeavor?" Bittensor aims to decentralize and commoditize the generation of artificial intelligence. Launched in 2021, the protocol seeks to harness the power of collaborative machine learning models to continuously iterate and produce better AI.
Bittensor draws inspiration from Bitcoin: its native currency, TAO, has a 21 million supply and a four-year halving cycle (with the first halving in 2025). But instead of using proof-of-work to find a valid nonce and earn a block reward, Bittensor relies on "Proof of Intelligence," requiring miners to run models that produce outputs in response to inference requests.
Intelligent Incentive Network
Bittensor initially relied on a Mixture of Experts (MoE) model to produce outputs. When an inference request is submitted, rather than relying on one generalized model, an MoE model routes the request to the models most accurate for that type of input. Think of building a house and hiring different specialists for different parts of the construction process (architects, engineers, painters, construction workers, and so on). MoE applies this to machine learning, attempting to leverage the outputs of different models depending on the input. As Bittensor founder Ala Shaabana has explained, it is like talking to a group of smart people rather than one person to get the best answer. Because of challenges with ensuring correct routing, synchronizing messages to the right models, and incentivization, this approach has been shelved until the project matures.
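As a rough illustration of the MoE idea (not Bittensor's implementation), the sketch below gates an incoming prompt across a set of toy "experts" and routes it to the highest-scoring one; a real MoE learns its gating function rather than using keyword heuristics.

```python
import numpy as np

# Each "expert" is a model specialized for one kind of input.
experts = {
    "code":    lambda prompt: f"[code expert answers: {prompt}]",
    "math":    lambda prompt: f"[math expert answers: {prompt}]",
    "general": lambda prompt: f"[general expert answers: {prompt}]",
}

def gate(prompt: str) -> dict:
    """Toy gating function: score each expert for this prompt.
    A real MoE learns these scores; here we use keyword heuristics."""
    scores = {
        "code": prompt.count("def") + prompt.count("bug"),
        "math": prompt.count("integral") + prompt.count("equation"),
        "general": 0.5,
    }
    exp = {k: np.exp(v) for k, v in scores.items()}
    total = sum(exp.values())
    return {k: v / total for k, v in exp.items()}   # softmax over experts

def route(prompt: str) -> str:
    weights = gate(prompt)
    best = max(weights, key=weights.get)            # top-1 routing
    return experts[best](prompt)

print(route("Why does this def throw a bug?"))      # routed to the code expert
```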
There are two main roles on the Bittensor network: validators and miners. Validators send inference requests to miners, review their outputs, and rank them based on the quality of their responses. To ensure their rankings are reliable, validators are assigned a "vtrust" score based on how consistent their rankings are with those of other validators. The higher a validator's vtrust score, the more TAO it can earn. This is meant to push validators toward consensus on model rankings over time, since the more validators agree on rankings, the higher their individual vtrust scores.
Miners, also known as servers, are the network participants who run the actual machine learning models. Miners compete to provide the most accurate outputs for a given query; the more accurate the output, the more TAO emissions they earn. Miners can generate these outputs however they want. For example, in the future it is entirely possible that a Bittensor miner could have previously trained a model on Gensyn and then use it to earn TAO.
Most interactions today occur directly between validators and miners. Validators submit inputs to miners and request their outputs (i.e., train the model). Once a validator has queried the miners on the network and received their responses, it ranks the miners and submits its rankings to the network.
This interaction between validators (relying on PoS) and miners (relying on Proof of Model, a form of PoW) is called Yuma consensus. It is designed to encourage miners to produce the best output to earn TAO, and to encourage validators to accurately rank miner output to earn higher vtrust scores and increase their TAO rewards, forming the consensus mechanism of the network.
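The intuition behind vtrust can be illustrated with a toy calculation; this is a simplification, not the actual Yuma consensus math. Validators who score miners in line with the consensus view earn a higher trust score and a larger share of rewards, while miner rewards follow the consensus scores.

```python
import numpy as np

# Each row is one validator's scores for the network's miners (higher is better).
validator_scores = np.array([
    [0.9, 0.6, 0.2, 0.1],   # validator 0
    [0.8, 0.7, 0.3, 0.1],   # validator 1
    [0.1, 0.2, 0.9, 0.8],   # validator 2: disagrees with the others
])

# Consensus view of miner quality: the average of all validators' scores.
consensus = validator_scores.mean(axis=0)

def trust(scores: np.ndarray, consensus: np.ndarray) -> float:
    """Toy 'vtrust'-style score: how closely a validator tracks consensus."""
    distance = np.abs(scores - consensus).mean()
    return 1.0 - distance            # closer to consensus -> higher trust

trust_scores = np.array([trust(v, consensus) for v in validator_scores])

# Validator rewards proportional to trust; miner rewards proportional to consensus scores.
validator_rewards = trust_scores / trust_scores.sum()
miner_rewards = consensus / consensus.sum()
print(validator_rewards)   # validator 2 earns the smallest share
print(miner_rewards)
```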
Subnets and Applications
As noted above, interactions on Bittensor today mainly consist of validators submitting requests to miners and evaluating their outputs. As the quality of contributing miners improves and the network's overall intelligence grows, however, Bittensor plans to create an application layer on top of its existing stack so that developers can build applications that query the Bittensor network.
In October 2023, with its Revolution upgrade, Bittensor took an important step toward this goal by introducing subnets. Subnets are individual networks on Bittensor that incentivize specific behaviors. Revolution opened the network to anyone interested in creating a subnet. In the months since launch, more than 32 subnets have gone live, including ones for text prompting, data scraping, image generation, and storage. As subnets mature and become product-ready, subnet creators will also build application integrations that let teams build applications which query a specific subnet. Some applications (chatbots, image generators, tweet reply bots, prediction markets) exist today, but beyond grants from the Bittensor Foundation there are no formal incentives for validators to accept and relay these queries.
To illustrate, the image below shows an example of how an application integrated with Bittensor might operate.
Subnets earn TAO based on performance as evaluated by the root network. The root network sits above all subnets, essentially acting as a special subnet of its own, and is managed by the 64 largest subnet validators by stake. Root network validators rank subnets based on their performance and periodically distribute TAO emissions to them. In this way, individual subnets act as miners for the root network.
Bittensor Prospects
Bittensor is still experiencing growing pains as it expands the protocol to incentivize intelligence generation across multiple subnets. Miners continue to devise new ways to attack the network for more TAO, for example by slightly modifying the output of a highly rated inference run by their model and submitting multiple variations. Governance proposals affecting the entire network can only be submitted and implemented by the Triumvirate, which is composed entirely of Opentensor Foundation stakeholders (though, notably, proposals require approval by Bittensor validators before implementation). The project's token economics are being revamped to improve incentives for TAO usage across subnets. The project has also quickly gained notoriety for its unique approach, with the CEO of HuggingFace, the most popular AI model website, saying that Bittensor should add its resources to the site.
In a recent post titled "Bittensor Paradigm" published by core developers, the team laid out its vision for Bittensor to eventually become agnostic to what is being measured. In theory, this would allow Bittensor to develop subnets incentivizing any type of behavior, all powered by TAO. Considerable practical constraints remain, chief among them proving that these networks can scale to handle such a diverse range of processes and that the underlying incentives drive progress beyond what centralized offerings can deliver.
Building a Decentralized Computing Stack for AI Models
The above sections set out the framework for various types of decentralized artificial intelligence computing protocols being developed. While they are still in the early stages of development and adoption, they provide the foundation for an ecosystem that may ultimately facilitate the creation of “AI building blocks,” much like the “DeFi Lego” concept. The composability of permissionless blockchains opens up the possibility for each protocol to build on top of the others, providing a more comprehensive decentralized AI ecosystem.
For example, here is one way Akash, Gensyn, and Bittensor might interact with one another to respond to an inference request.
To be clear, this is merely an example of what might happen in the future, not a reflection of the current ecosystem, existing partnerships, or likely outcomes. Today, interoperability limitations and other considerations described below significantly constrain integration possibilities. On top of that, fragmented liquidity and the need to use multiple tokens could hurt the user experience, something the founders of both Akash and Bittensor have pointed out.
Other Decentralized Products
In addition to computing, there are several other decentralized infrastructure services to support cryptocurrency’s emerging artificial intelligence ecosystem. It is beyond the scope of this report to list them all, but some interesting and representative examples include:
Ocean: a decentralized data marketplace. Users can create data NFTs representing their data and purchase access using data tokens. Users can both monetize their data and exercise greater ownership over it, while providing teams developing and training AI models with the data access they need.
Grass: a decentralized bandwidth marketplace. Users can sell their excess bandwidth to AI companies, which use it to scrape data from the internet. Built on the Wynd Network, the marketplace not only lets individuals monetize their bandwidth but also gives bandwidth buyers a more diverse view of what individual users see online (since individuals typically access the internet from their own specific IP addresses).
HiveMapper: builds a decentralized mapping product from information collected from everyday drivers. HiveMapper relies on AI to interpret the images collected from users' dashboard cameras and rewards users for helping refine the AI model through reinforcement learning from human feedback (RLHF).
Collectively, these point to near-endless opportunities to explore decentralized market models that support AI models, or to support the surrounding infrastructure required to develop these models. Currently, most of these projects are in the proof-of-concept stage and require more research and development to prove that they can deliver comprehensive AI services at the required scale.
Outlook
Decentralized computing products are still in the early stages of development. They are just beginning to use state-of-the-art computing power to train the most powerful AI models in production. To gain meaningful market share, they need to demonstrate real advantages over centralized alternatives. Potential triggers for wider adoption include:
GPU supply and demand. A shortage of GPUs combined with rapidly growing compute demand has set off a race for GPUs. OpenAI has at times limited use of its platform because of GPU constraints. Platforms like Akash and Gensyn can provide cost-competitive alternatives for teams that need high-performance compute. The next 6-12 months present a unique opportunity for decentralized computing providers to onboard new users who are being forced to consider decentralized options. Combined with increasingly performant open-source models such as Meta's LLaMA 2, users no longer face the same barriers to deploying effective fine-tuned models, making compute the primary bottleneck. The existence of these platforms alone, however, does not guarantee adequate compute supply or corresponding consumer demand. Sourcing high-end GPUs remains difficult, and cost is not always the primary motivator on the demand side. These platforms will be challenged to demonstrate the real benefits of using decentralized compute, whether cost, censorship resistance, durability and resiliency, or accessibility, in order to accumulate sticky users. These protocols will then have to move quickly: the pace at which GPU infrastructure is being financed and built is staggering.
Regulation. Regulation remains a major headwind for the decentralized computing movement. In the near term, the lack of clear rules means both providers and users face potential risks in using these services. What happens if a provider unknowingly supplies compute to, or a buyer unknowingly purchases compute from, a sanctioned entity? Users may be hesitant to use decentralized platforms that lack the controls and oversight of a centralized entity. Protocols have tried to ease these concerns by incorporating controls into their platforms or adding filters that admit only known compute providers (i.e., those that have supplied KYC information), but more robust methods that protect privacy while ensuring compliance will be needed to drive adoption. In the short term, we are likely to see KYC'd, regulation-compliant platforms emerge that restrict access to their protocols to address these issues.
Censorship. Regulation cuts both ways, and decentralized computing providers could benefit from actions taken to restrict AI. In addition to the executive order, OpenAI founder Sam Altman has testified before Congress on the need for regulatory agencies that license AI development. The discussion around AI regulation is only just beginning, but any attempt to restrict or censor AI could accelerate adoption of decentralized platforms that face no such barriers. OpenAI's leadership changes last November provided further evidence of the risk of vesting decision-making power over the most capable existing AI models in just a few people. Moreover, all AI models necessarily reflect the biases of their creators, whether intentional or not. One way to dilute these biases is to make models as open as possible to fine-tuning and training, so that anyone, anywhere, can access models with all sorts of biases.
Data privacy. Decentralized computing may become more attractive than the alternatives when integrated with external data and privacy solutions that give users control over their data. Samsung fell victim to this when it discovered engineers had used ChatGPT to help with chip design, leaking sensitive information in the process. Phala Network and iExec claim to offer SGX secure enclaves to protect user data, and research into fully homomorphic encryption is under way to further unlock privacy-preserving decentralized compute. As AI becomes more integrated into our lives, users will place a premium on being able to run models through applications with privacy protections built in. Users will also demand data composability so they can seamlessly port their data from one model to another.
User experience (UX). UX remains a significant barrier to broader adoption of all kinds of crypto applications and infrastructure. Decentralized computing offerings are no exception, and in some cases the problem is exacerbated by the need for developers to understand both crypto and AI. Improvements are needed everywhere from onboarding and abstracting away blockchain interactions to delivering the same high-quality output as current market leaders. This is evident in the fact that many operational decentralized computing protocols offering cheaper services struggle to gain regular usage.
Smart Contracts and Zero-Knowledge Machine Learning (zkML)
Smart contracts are a core building block of any blockchain ecosystem. They execute automatically under a set of conditions, reducing or removing the need for trusted third parties and enabling the creation of complex decentralized applications such as those seen in DeFi. For now, however, smart contracts remain limited in functionality because they execute on preset parameters that must be updated.
For example, a lending protocol smart contract regulates when positions are liquidated based on a certain loan-to-value ratio. In a dynamic environment where risks are constantly changing, these smart contracts must be constantly updated to take into account changes in risk tolerance, which creates challenges for contracts managed through decentralized processes. For example, a DAO that relies on decentralized governance processes may not be able to respond to systemic risks in a timely manner.
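As a minimal illustration of the kind of static logic described above, the sketch below hard-codes a loan-to-value threshold; any change in risk tolerance requires updating the parameter itself, which on-chain would mean a governance action.

```python
def loan_to_value(debt_usd: float, collateral_usd: float) -> float:
    """LTV = value of the debt divided by the value of the collateral."""
    return debt_usd / collateral_usd

# A preset parameter baked into the contract; changing it requires a governance update.
LIQUIDATION_THRESHOLD = 0.80

def should_liquidate(debt_usd: float, collateral_usd: float) -> bool:
    return loan_to_value(debt_usd, collateral_usd) > LIQUIDATION_THRESHOLD

# A position with $8,500 of debt against $10,000 of collateral is past the threshold.
print(should_liquidate(8_500, 10_000))   # True: LTV = 0.85 > 0.80
```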
Smart contracts that integrate artificial intelligence (such as machine learning models) are one possible way to enhance functionality, security, and efficiency while improving the overall user experience. These integrations also introduce additional risk, however, since it is impossible to guarantee that the models underpinning these smart contracts cannot be attacked or will account for long-tail situations (which are notoriously hard to train models for, given how scarce the data on them is).
Zero-Knowledge Machine Learning (zkML)
Machine learning requires large amounts of compute to run complex models, and the cost makes it impractical to run AI models directly inside smart contracts. A DeFi protocol might, for example, want to offer users a yield-optimizing model, but trying to run that model on-chain would require paying prohibitively high gas fees. One solution is to increase the computational capacity of the underlying blockchain, but that would also increase the burden on the chain's validating nodes, potentially undermining its decentralization. Instead, several projects are exploring the use of zkML to verify outputs in a permissionless, trustless way without requiring intensive on-chain computation.
A common example illustrating the usefulness of zkML is when a user needs someone else to run data through a model and also needs to verify that the counterparty actually ran the model they claim. Perhaps a developer is using a decentralized compute provider to train a model and worries the provider is cutting costs by substituting a cheaper model whose outputs differ in ways that are barely noticeable. zkML lets the compute provider run the data through its model and then generate a proof that can be verified on-chain showing the model's output is correct for a given input. Here, the model provider gains the added advantage of being able to offer its model without revealing the underlying weights that produce the output.
The reverse is also possible. If a user wants to run a model over their own data but, for privacy reasons (such as medical records or proprietary business information), does not want the project providing the model to have access to that data, the user can run the model on their data without sharing it and then verify, with a proof, that they ran the correct model. These possibilities dramatically expand the design space for integrating AI with smart contract functionality by tackling prohibitive compute constraints.
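The prove/verify interface at the heart of this flow can be sketched abstractly. The example below is purely illustrative: the "proof" is a placeholder string, the model is a toy linear function, and a real zkML system (EZKL, Giza, Modulus, etc.) would replace it with an actual zk-SNARK or zk-STARK circuit. What the sketch shows is the shape of the data flow, in which the verifier only ever sees the input, the output, a commitment to the weights, and the proof.

```python
import hashlib
import json

def commit(weights: list) -> str:
    """Public commitment to the model weights (the weights themselves stay private)."""
    return hashlib.sha256(json.dumps(weights).encode()).hexdigest()

def run_model(weights: list, x: list) -> float:
    """The off-chain inference: a toy linear model standing in for a real network."""
    return sum(w * xi for w, xi in zip(weights, x))

def prove(weights: list, x: list) -> dict:
    """Prover side: run inference and emit (input, output, model commitment, proof).
    The 'proof' here is a placeholder; a real zkML system emits a zk-SNARK/STARK
    attesting that output = model(input) for the committed weights."""
    y = run_model(weights, x)
    return {
        "input": x,
        "output": y,
        "model_commitment": commit(weights),
        "proof": "<zk-proof bytes would go here>",
    }

def verify(statement: dict, expected_commitment: str) -> bool:
    """Verifier side (e.g. a smart contract): check the statement against a known
    commitment without ever seeing the weights or rerunning the model."""
    if statement["model_commitment"] != expected_commitment:
        return False
    return statement["proof"] is not None   # placeholder for real proof verification

weights = [0.2, -1.3, 0.7]                   # private to the model provider
public_commitment = commit(weights)           # published on-chain in advance
stmt = prove(weights, [1.0, 2.0, 3.0])
print(verify(stmt, public_commitment))        # True
```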
Infrastructure and Tools
Given the early state of the zkML field, development is primarily focused on building the infrastructure and tools needed for teams to transform their models and outputs into proofs that can be verified on-chain. These products abstract the zero-knowledge aspects of development as much as possible.
Two projects, EZKL and Giza, build on these tools by providing verifiable proofs of machine learning model execution. Both help teams build machine learning models whose results can be verified on-chain without trust. Both use the Open Neural Network Exchange (ONNX) to convert machine learning models written in common frameworks such as TensorFlow and PyTorch into a standard format. They then output versions of those models that also produce zk-proofs when executed. EZKL is open source and generates zk-SNARKs, while Giza is closed source and generates zk-STARKs. Both projects are currently EVM-compatible only.
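The first step in both pipelines, exporting a model to the ONNX standard, looks roughly like the sketch below. The torch.onnx.export call is a real PyTorch API; the subsequent circuit compilation and proof generation are tool-specific and only indicated in a comment.

```python
import torch
import torch.nn as nn

# A small PyTorch model to be made provable.
model = nn.Sequential(nn.Linear(3, 8), nn.ReLU(), nn.Linear(8, 1))
model.eval()

# Export to ONNX, the shared interchange format both EZKL and Giza consume.
dummy_input = torch.randn(1, 3)
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["output"])

# From here the workflow is tool-specific: the ONNX file is compiled into a
# zk circuit and executed to produce an on-chain-verifiable proof (e.g. via
# EZKL's or Giza's own tooling, whose exact commands are not shown here).
```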
Over the past few months, EZKL has made significant progress on its zkML solution, focusing primarily on reducing costs, improving security, and accelerating proof generation. In November 2023, for example, EZKL integrated a new open-source GPU library that cut overall proving times by 35%, and in January EZKL announced Lilith, a software solution for integrating high-performance compute clusters with the EZKL proving system and orchestrating concurrent jobs. Giza is unique in that, in addition to providing tools for creating verifiable machine learning models, it plans to launch a web3 equivalent of Hugging Face, opening a user marketplace for zkML collaboration and model sharing, and eventually to integrate decentralized compute offerings. In January, EZKL released a benchmark comparing the performance of EZKL, Giza, and RiscZero (discussed below), in which EZKL showed faster proving times and lower memory usage.
Modulus Labs is also developing a new zk-proof technique tailored to AI models. Modulus published a paper called "The Cost of Intelligence" (alluding to the extremely high cost of running AI models on-chain), benchmarking the zk-proof systems that existed at the time to identify the capabilities of, and bottlenecks in, zk-proofs for AI models. The paper, published in January 2023, concluded that existing offerings were too expensive and inefficient to enable AI applications at scale. Building on that initial research, Modulus introduced Remainder in November, a specialized zero-knowledge prover built to reduce the cost and proving time of AI models, with the goal of making it economically feasible for projects to integrate models into their smart contracts at scale. Their work is closed source, so it cannot be benchmarked against the solutions above, but it was recently cited in Vitalik's blog post on crypto and AI.
Tooling and infrastructure development is critical to the future growth of zkML because it substantially reduces the friction involved for teams that need to deploy the zk circuits required for verifiable off-chain computation. Building secure interfaces that let non-crypto-native machine learning developers bring their models on-chain will allow greater experimentation with applications that have truly novel use cases. Tooling also addresses a major barrier to wider zkML adoption: the scarcity of developers knowledgeable and interested in working at the intersection of zero-knowledge cryptography and machine learning.
Coprocessors
Other solutions under development, called coprocessors, include RiscZero, Axiom and Ritual. The term coprocessor is mostly semantic - these networks take on many different roles, including on-chain verification of off-chain computations. Like EZKL, Giza, and Modulus, their goal is to completely abstract the zero-knowledge proof generation process, thereby creating essentially a zero-knowledge virtual machine capable of executing programs off-chain and generating proofs for on-chain verification. RiscZero and Axiom can handle simple AI models because they are more general coprocessors, while Ritual is built specifically for use with AI models.
Infernet is Ritual's first product and includes an Infernet SDK that lets developers submit inference requests to the network and optionally receive outputs and proofs in return. Infernet nodes receive these requests, handle the computation off-chain, and return the outputs. For example, a DAO could create a process to ensure all new governance proposals meet certain prerequisites before being submitted. Each time a new proposal is submitted, the governance contract triggers an inference request via Infernet, invoking a DAO-specific, governance-trained AI model. The model reviews the proposal to ensure all required criteria have been met and returns an output and a proof, either approving or rejecting the proposal's submission.
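A rough sketch of the off-chain side of that flow is shown below. It is hypothetical pseudocode for illustration, not the actual Infernet SDK: a node receives a request forwarded by the governance contract, runs the DAO's model, and returns the output together with an (elided) proof of correct execution.

```python
from dataclasses import dataclass

@dataclass
class InferenceRequest:
    request_id: int
    proposal_text: str        # payload forwarded by the governance contract

def governance_model(proposal_text: str) -> bool:
    """Stand-in for the DAO's trained model that checks proposal prerequisites."""
    required_sections = ["budget", "timeline", "rationale"]
    return all(section in proposal_text.lower() for section in required_sections)

def handle_request(request: InferenceRequest) -> dict:
    """What an off-chain node does: run the model, then return the output along
    with a proof that the committed model produced it (proof elided here)."""
    approved = governance_model(request.proposal_text)
    return {
        "request_id": request.request_id,
        "output": approved,
        "proof": "<verifiable proof of correct model execution>",
    }

request = InferenceRequest(1, "Budget: 10 ETH. Timeline: Q3. Rationale: grows the DAO.")
print(handle_request(request))   # {'request_id': 1, 'output': True, ...}
```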
Over the next year, the Ritual team plans to roll out additional features that make up an infrastructure layer called the Ritual Superchain. Many of the projects discussed earlier could plug into Ritual as service providers. The Ritual team has already integrated with EZKL for proof generation and may soon add functionality from other leading providers. Infernet nodes on Ritual could also use GPUs from Akash or io.net and query models trained on Bittensor subnets. Their end goal is to be the go-to provider of open AI infrastructure, capable of serving machine learning and other AI-related tasks for any workload on any network.
Applications
zkML helps reconcile the contradiction between blockchains, which are inherently resource-constrained, and artificial intelligence, which demands enormous amounts of compute and data. As one of Giza's founders put it: "The use cases are so rich... It's a bit like asking what use cases there are for smart contracts in the early days of Ethereum... We are expanding the use cases for smart contracts." As emphasized above, however, today's development is mainly happening at the tooling and infrastructure level. Applications are still exploratory, and the challenge for teams is to demonstrate that the value of implementing a model with zkML outweighs the complexity and cost of doing so.
Some applications today include:
DeFi. zkML expands the design space for DeFi by enhancing smart contract functionality. DeFi protocols generate large amounts of verifiable, immutable data that machine learning models can use for yield and trading strategies, risk analysis, UX improvements, and more. Giza, for example, partnered with Yearn Finance to build a proof-of-concept automated risk-assessment engine for Yearn's new v3 vaults. Modulus Labs has partnered with Lyra Finance to incorporate machine learning into its AMM, worked with Ion Protocol on validator risk models, and helped Upshot verify its AI-powered NFT price feeds. Protocols like NOYA (which leverages EZKL) and Mozaic provide access to proprietary off-chain models that let users tap into automated yield vaults while allowing data inputs and proofs to be verified on-chain. Spectral Finance is building an on-chain credit-scoring engine to predict the likelihood that a Compound or Aave borrower will default. Thanks to zkML, these so-called De-AI-Fi products are likely to become more common in the coming years.
Gaming. Blockchain has long been seen as ripe for disrupting and enhancing gaming, and zkML makes on-chain gaming with artificial intelligence possible. Modulus Labs has implemented proofs of concept for simple on-chain games. Leela vs the World is a game-theoretic chess match in which users play against an AI chess model, with zkML verifying that every move Leela makes is produced by the model the game claims to run. Similarly, the team built a simple singing competition and on-chain tic-tac-toe using the EZKL framework. Cartridge is using Giza to let teams deploy fully on-chain games, recently launching a simple AI driving game in which users compete to create better models that steer a car around obstacles. While simple, these proofs of concept point toward future implementations capable of more complex on-chain verification, such as interactions with sophisticated NPCs within game economies. AI Arena, a Super Smash Bros.-style game in which players train their own warriors and then deploy them as AI models to fight, is one example.
Identity, provenance, and privacy. Cryptography is already being used to verify authenticity and combat the growing volume of AI-generated and manipulated content and deepfakes, and zkML can advance these efforts. WorldCoin is a proof-of-personhood solution that requires users to scan their irises to generate a unique ID. In the future, biometric IDs could be self-custodied on personal devices using encryption, with the models needed to verify the biometrics run locally. Users could then provide proof of their biometrics without revealing who they are, resisting Sybil attacks while preserving privacy. The same approach can apply to other inferences that require privacy, such as using models to analyze medical data or images to detect disease, verifying identity and powering matching algorithms in dating apps, or serving insurers and lenders that need to verify financial information.
Outlook
zkML is still in the experimental stage, with most projects focused on building infrastructure prototypes and proof-of-concepts. Current challenges include computational cost, memory limitations, model complexity, limited tools and infrastructure, and development talent. In short, there is still a lot of work to be done before zkML can achieve the scale required for consumer products.
However, as the field matures and these limitations are addressed, zkML will become a key component of AI and cryptography integration. At its core, zkML promises the ability to bring off-chain computation on-chain at any scale while maintaining the same or close to the same security guarantees as running computation on-chain. However, until this vision is realized, early adopters of the technology will continue to have to weigh the privacy and security of zkML against the efficiency of alternatives.
Artificial Intelligence Agents
One of the most exciting integrations of AI and crypto is the ongoing experimentation with AI agents. Agents are autonomous bots that use AI models to receive, interpret, and execute tasks. An agent could be anything from an always-available personal assistant fine-tuned to your preferences to a financial agent hired to manage and adjust a portfolio according to the user's risk appetite.
Agents and crypto fit well together because crypto provides permissionless and trustless payment infrastructure. Once trained, agents can be given a wallet so that they can transact with smart contracts on their own. Even today, simple agents can scour the internet for information and then trade on prediction markets based on a model's output.
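To make the idea concrete, here is a minimal sketch of an agent that controls its own wallet and conditions an on-chain transaction on a model's output. It assumes web3.py v6; the RPC endpoint, the market contract address, and the model_predicts_yes stub are hypothetical placeholders rather than any specific project's API.

```python
from web3 import Web3

RPC_URL = "https://example-rpc.invalid"                          # hypothetical endpoint
MARKET_ADDRESS = "0x0000000000000000000000000000000000000000"    # hypothetical market

w3 = Web3(Web3.HTTPProvider(RPC_URL))
agent_account = w3.eth.account.create()          # the agent controls its own key

def model_predicts_yes(question: str) -> bool:
    """Stub for the agent's AI model (e.g. an LLM scoring a prediction-market question)."""
    return "rain" in question.lower()

def place_bet(question: str) -> None:
    if not model_predicts_yes(question):
        return
    # Build, sign, and send a transaction from the agent's own wallet.
    # The market contract and its calldata are placeholders for illustration.
    tx = {
        "to": MARKET_ADDRESS,
        "value": w3.to_wei(0.01, "ether"),
        "gas": 100_000,
        "gasPrice": w3.to_wei(20, "gwei"),
        "nonce": w3.eth.get_transaction_count(agent_account.address),
        "chainId": 1,
    }
    signed = agent_account.sign_transaction(tx)
    w3.eth.send_raw_transaction(signed.rawTransaction)

place_bet("Will it rain in New York tomorrow?")
```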
Agent Providers
Morpheus is one of the newest open-source agent projects, launching in 2024 on Ethereum and Arbitrum. Its white paper was published anonymously in September 2023, providing the foundation for a community to form and build around it (including prominent figures such as Erik Voorhees). The white paper includes a downloadable Smart Agent Protocol, an open-source LLM that can be run locally, managed by a user's wallet, and made to interact with smart contracts. It uses a smart contract ranking to help agents determine which contracts are safe to interact with, based on criteria such as the number of transactions processed.
The white paper also provides a framework for building out the Morpheus network, including the incentive structures and infrastructure needed to make the Smart Agent Protocol operational. This includes incentivizing contributors to build front-ends for interacting with agents, APIs for developers to build applications that plug into agents so they can interact with each other, and cloud solutions that give users the compute and storage needed to run an agent on an edge device. Initial funding for the project began in early Q2 2024, with the full protocol expected to launch at that time.
Decentralized Autonomous Infrastructure Network (DAIN) is a new agent infrastructure protocol building an agent-to-agent economy on Solana. DAIN's goal is to let agents from different businesses seamlessly interact with one another through a universal API, greatly expanding the design space for AI agents, with a focus on agents able to interact with both web2 and web3 products. In January, DAIN announced its first partnership, with Asset Shield, allowing users to add agent signers to their multisigs that can interpret transactions and approve or reject them according to rules set by the user.
Fetch.AI is one of the earliest AI agent protocols to deploy and has developed an ecosystem for building, deploying, and using agents on-chain with its FET token and Fetch.AI wallet. The protocol provides a comprehensive set of tools and applications for working with agents, including in-wallet functionality for interacting with agents and issuing them commands.
Autonolas, founded by former members of the Fetch team, is an open marketplace for creating and using decentralized AI agents. Autonolas also provides developers with a toolkit for building off-chain hosted AI agents that can connect to multiple chains, including Polygon, Ethereum, Gnosis Chain, and Solana. It currently has several active agent proof-of-concept products, including for prediction markets and DAO governance.
SingularityNet is building a decentralized marketplace for AI agents where people can deploy narrowly focused AI agents that can be hired by other people or agents to perform complex tasks. Others, such as AlteredStateMachine, are building AI agent integrations with NFTs. Users mint NFTs with randomized attributes that give them strengths and weaknesses for different tasks. These agents can then be trained to enhance certain attributes for use in gaming or DeFi, or as virtual assistants that transact with other users.
Collectively, these projects envision a future ecosystem of agents that work together not only to execute tasks but to help build artificial general intelligence. Truly sophisticated agents will be able to fulfill any user task autonomously. For example, rather than having to make sure an agent has already integrated an external API (such as a travel booking site) before using it, a fully autonomous agent would be able to figure out how to hire another agent to integrate the API and then execute the task. From the user's perspective, there would be no need to check whether an agent can fulfill a task, because the agent can determine that itself.
Bitcoin and AI Agents
In July 2023, Lightning Labs launched a proof-of-concept for using agents on the Lightning Network, called the LangChain Bitcoin Suite. The product is particularly interesting because it aims to solve a growing problem in the Web2 world: the gating (restricted access) of web applications and expensive API services.
LangChain addresses this by providing developers with tools that let agents buy, sell, and hold Bitcoin, as well as query API keys and send micropayments. Whereas on traditional payment rails small micropayments are cost-prohibitive because of fees, on the Lightning Network agents can send unlimited micropayments every day for minimal fees. Paired with the L402 payment-metered API framework, this lets companies adjust the fees for their APIs as usage rises and falls, rather than setting a single, cost-prohibitive standard.
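The 402-then-retry pattern described above can be sketched roughly as follows. This is an illustration only: the endpoint, the header names, and the pay_invoice helper are hypothetical stand-ins rather than Lightning Labs' actual L402 implementation.

```python
import requests

API_URL = "https://api.example.invalid/data"     # hypothetical metered API

def pay_invoice(invoice: str) -> str:
    """Hypothetical helper: pay a Lightning invoice and return the payment preimage.
    In practice this would call out to a Lightning node."""
    return "deadbeef" * 8

def fetch_with_micropayment(url: str) -> requests.Response:
    response = requests.get(url)
    if response.status_code != 402:              # 402 Payment Required
        return response

    # Illustrative challenge format: the server returns a token and an invoice.
    token = response.headers.get("X-Payment-Token", "")
    invoice = response.headers.get("X-Lightning-Invoice", "")

    preimage = pay_invoice(invoice)              # the agent pays a tiny fee off-chain

    # Retry with proof of payment attached; header naming here is illustrative.
    return requests.get(url, headers={"Authorization": f"L402 {token}:{preimage}"})

print(fetch_with_micropayment(API_URL).status_code)
```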
In a future where on-chain activity is dominated by agent-to-agent interactions, the things mentioned above will be necessary to ensure that agents can interact with each other in a way that is not prohibitively expensive. This is an early example of how agents can be used on a permissionless and cost-effective payment rail, opening up possibilities for new markets and economic interactions.
Outlook
The agent space is still in its infancy.
Projects are only just beginning to roll out functional agents that can handle simple tasks using their infrastructure, access that is typically limited to experienced developers and users.
However, one of the biggest impacts that AI agents will have on the crypto space over time is to improve user experience across all verticals. Transactions will begin to shift from click-based to text-based, and users will be able to interact with on-chain agents through large language models (LLMs). There are already teams like Dawn Wallet that have launched chatbot wallets for users to interact on-chain.
Furthermore, it is unclear how agents would work in Web2, since financial rails rely on regulated banking institutions that cannot operate around the clock or enable seamless cross-border transactions. As Lyn Alden has highlighted, crypto rails are especially attractive compared with credit cards because of the lack of chargebacks and the ability to process very small transactions. If agents become more common, however, existing payment providers and applications may move quickly to implement the infrastructure needed for agents to operate on existing financial rails, blunting some of the benefits of using crypto.
Currently, agents may be limited to deterministic cryptocurrency transactions, where a given input guarantees a given output. Models of how to leverage the capabilities of these agents to perform complex tasks, as well as tools to expand the range of tasks they can accomplish, require further development. For crypto proxies to become useful beyond novel on-chain crypto use cases, wider integration and acceptance of crypto as a form of payment as well as regulatory clarity will be needed. However, as these components evolve, agents will become one of the largest consumers of the decentralized computing and zkML solutions discussed above, receiving and solving any task in an autonomous, non-deterministic manner.
Conclusion
Artificial intelligence brings to crypto the same kinds of innovation we have already seen in Web2, enhancing everything from infrastructure development to user experience and accessibility. These projects are still early, however, and in the near term the integration of crypto and AI will be dominated by off-chain integrations.
Products like Copilot claim to increase developer efficiency as much as tenfold, and Layer 1s and DeFi applications are already partnering with major companies such as Microsoft to launch AI-assisted development platforms. Companies like Cub3.ai and Test Machine are developing AI for smart contract auditing and real-time threat monitoring to strengthen on-chain security. And LLM chatbots are being trained on on-chain data, protocol documentation, and applications to give users improved accessibility and UX.
For more advanced integrations that truly leverage crypto's underlying technology, the challenge remains proving that implementing AI solutions on-chain is both technically possible and economically viable at scale. Developments in decentralized compute, zkML, and AI agents all point to promising verticals and are setting the stage for a future in which crypto and AI are deeply intertwined.


