Polyhedra introduces Trusted Execution Environment (TEE) to strengthen cross-chain and verifiable AI security


Author: Weikeng Chen

Original link: https://blog.polyhedra.network/tee-in-polyhedra/

Polyhedra is introducing a new layer of security for its cross-chain bridging protocol, oracle system, and verifiable AI marketplace, relying on Google Confidential Computing to provide a trusted execution environment (TEE). After extensive research into today's mainstream TEE solutions, Polyhedra chose to build its TEE security module on Google Confidential Space and has taken the lead in validating a new proof mechanism that combines zero knowledge with TEE (ZK-TEE): computation results produced on Google Cloud can be verified on EVM chains, opening a new path for trusted computing and blockchain-native interoperability.

This security layer will be rolled out gradually across Polyhedra's core zero-knowledge products, covering cross-chain interoperability between multiple chains. Polyhedra also plans to natively integrate TEE proof verification and TEE-protected AI applications into its self-developed EXPchain in the form of precompiled contracts.

What is a TEE?

TEE, short for Trusted Execution Environment, is a CPU technology that allows computation to run in encrypted, integrity-protected memory whose contents cannot be viewed by the cloud service provider (such as Google Cloud), the operating system, or even other workloads running in the same virtual machine environment.

In other words, TEE can ensure the confidentiality and security of data during use at the hardware level.

This technology is already in wide use. For example, Apple devices enable full-disk encryption (also called Data Protection) by default, implemented on top of the TEE in Apple's chips: sensitive information such as passwords and keys stored on the device can only be accessed after the user unlocks it with a fingerprint or passcode. The same goes for Microsoft's Windows: recent versions support TEE-backed full-disk encryption (BitLocker), so the disk is unlocked only if the operating system and boot process have not been tampered with.

Polyhedra's TEE vision: building secure and trusted next-generation Internet infrastructure

Since last year, Polyhedra has focused on three core dimensions of cross-chain interoperability and AI: security, trustworthiness, and verifiability. We are advancing multiple products, some of which have already been officially released. In general, Polyhedra's core focus covers three key directions:

  • Cross-chain bridging protocol

  • ZKML and Verifiable AI Agents

  • Verifiable AI marketplace, including Model Context Protocol (MCP) servers

Security has always been Polyhedra's primary goal and the founding team's original motivation for building the Polyhedra Network. We have achieved verifiability of underlying consensus mechanisms through deVirgo, including full consensus verification of Ethereum.

At the same time, most of the chains supported by Polyhedra's zkBridge use BFT-style PoS consensus, which is comparatively easier to verify. While ensuring the security of the system, we also recognize that introducing a trusted execution environment (TEE) is crucial to improving the user experience. TEE enables lower costs, faster finality, stronger non-blockchain interoperability, and better data privacy, providing an important complement to our product line. TEE will become a key part of our security architecture and an accelerator for the future development of cross-chain and AI.

Lower costs: Polyhedra’s cost reduction strategy in ZK systems

Polyhedra has been committed to reducing the cost of cross-chain bridging through a variety of technical paths. This cost mainly comes from generating zero-knowledge proofs and verifying them on the destination chain. Verification costs vary greatly across blockchains, and Ethereum's verification fees are usually high. In current network operation, Polyhedra mainly optimizes costs through batching: in zkBridge, the core block-synchronization step is not executed for every block but is performed once every few blocks, and the block interval is adjusted dynamically according to on-chain activity, effectively reducing the overall cost.

However, during quiet periods (such as a chain's off-peak hours), a user may be the only one initiating a cross-chain operation. To keep such users from waiting too long, zkBridge will directly trigger synchronization, generate the proof, and complete verification, which incurs additional cost. These costs are sometimes borne by the users themselves, or shared across the transaction fees of other users. For large cross-chain transactions, proof costs are almost unavoidable if security is to be guaranteed. But for small transactions, we are exploring a new mechanism: Polyhedra prepays liquidity and bears part of the risk within a controllable range, giving users a faster and cheaper cross-chain experience.
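As a concrete illustration, the decision of when to trigger a block sync under such a batching policy might look like the sketch below. The thresholds, names, and the prepaid-liquidity path are illustrative assumptions for this article, not Polyhedra's actual parameters or code.

```python
# Illustrative sketch of a dynamic batching policy for block synchronization.
# All thresholds and names are hypothetical; zkBridge's real logic differs.

from dataclasses import dataclass

@dataclass
class PendingTransfer:
    amount_usd: float      # value of the queued cross-chain transfer
    blocks_waiting: int    # how long this transfer has been waiting

def should_sync(pending: list[PendingTransfer],
                blocks_since_last_sync: int,
                base_interval: int = 32,
                max_wait_blocks: int = 256,
                small_transfer_usd: float = 500.0) -> str:
    """Decide whether to run a (costly) block sync + proof now."""
    if not pending:
        return "skip"                          # nothing to bridge, save the proof cost
    if blocks_since_last_sync >= base_interval and len(pending) > 1:
        return "sync"                          # enough activity to amortize the cost
    oldest = max(t.blocks_waiting for t in pending)
    if oldest >= max_wait_blocks:
        # Quiet period: a lone user has already waited too long.
        if all(t.amount_usd <= small_transfer_usd for t in pending):
            return "advance_liquidity"         # prepaid-liquidity fast path for small transfers
        return "sync"                          # large transfer: pay for the proof anyway
    return "skip"

print(should_sync([PendingTransfer(120.0, 300)], blocks_since_last_sync=300))
```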

Beyond the cross-chain bridge, Polyhedra is also continuously optimizing the generation and verification costs of zkML. Its flagship Expander library has been widely adopted by other teams for ZK machine learning and has made significant progress in vectorization, parallel computing, and GPU acceleration. Its proof system has also gone through multiple rounds of optimization, greatly reducing proof-generation overhead. For on-chain verification, Polyhedra is deploying precompiled modules for zkML verification on its own public chain, EXPchain. This feature is expected to launch in the next testnet phase, enabling efficient verification of zkML proofs, and we plan to bring it to more blockchain ecosystems.

Although Polyhedra has produced proofs for Llama models with up to 8 billion parameters, proof generation is not yet instant. For larger models, especially image or video generation models, proving time is still long. Polyhedra focuses on building AI agent systems that run models under an optimistic execution architecture: if users encounter malicious behavior, they can file a challenge on-chain and the operator is punished via a zkML proof, so proofs do not need to be generated for every inference, only when a challenge occurs. The proof cost is relatively acceptable, but the agent operator needs to lock up a certain amount of capital as insurance, which creates capital-efficiency pressure.

Therefore, for users who run very large models, have higher security requirements, or expect lower fees, introducing another layer of security (such as TEE) becomes critical. TEE can be used not only to ensure the trustworthiness of on-chain AI applications (such as trading bots), but also to improve the system's resistance to attacks, thereby reducing the size of the required insurance funds.

Fast finality: dealing with rollup settlement delays on Ethereum

Polyhedra is also continuing to advance fast finality, especially to address the long settlement cycles of some rollups on Ethereum L1. Because a rollup relies on Ethereum L1 state consensus to inherit its security, finality delays affect user experience and interaction efficiency. The problem is particularly evident in optimistic rollups (such as Optimism and Arbitrum), whose withdrawal periods are usually as long as 7 days, which clearly cannot meet the real-time needs of most users. zkRollups offer stronger security, but many projects still batch their submissions at intervals ranging from every 30 minutes to 10-12 hours, which also introduces delays.

To address cross-chain interoperability, Polyhedra uses a State Committee mechanism combined with zero-knowledge proofs in its integrations with Arbitrum and Optimism. The same technology has also been deployed on opBNB. The solution runs the full-node clients of these rollups across multiple machines whose main task is to obtain the latest block data from the official RPC API; where possible, we introduce RPC diversity to enhance security and availability. Each machine signs the bridge-contract events to be transmitted across chains, and the multiple signatures are finally aggregated into a ZK proof that can be verified on-chain. The signature-aggregation design was chosen to support more verification nodes and improve decentralization.

The state committee system has been running stably for about a year. However, it should be noted that the ZK-aggregated signature produced by the state committee is not as secure as a full ZK proof generated over the entire consensus process. We therefore restrict this scheme within the fast-confirmation mechanism: it applies only to cross-chain transfers of small assets; for large assets, Polyhedra recommends using the official L2-to-L1 bridge for stronger security guarantees.
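The sketch below illustrates the general shape of the state-committee flow described above: each member signs the bridge event it observes, and a relayer checks that a threshold of valid signatures exists before handing them to the ZK aggregation prover (not shown). It assumes the `cryptography` package and Ed25519 keys purely for illustration; the committee's actual signature scheme and message encoding are not specified here.

```python
# Illustrative state-committee flow: members sign an observed bridge event,
# a relayer checks a signature threshold before ZK aggregation (not shown).
# Signature scheme and message encoding are assumptions for this sketch.

import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519

COMMITTEE_SIZE = 5
THRESHOLD = 4  # signatures required before aggregation

# Each committee member holds its own key pair.
members = [ed25519.Ed25519PrivateKey.generate() for _ in range(COMMITTEE_SIZE)]
pubkeys = [m.public_key() for m in members]

# The event every member observed from the rollup's bridge contract (via RPC).
event = b"opbnb:block=123456:bridge_event=transfer(alice,bob,1.5)"
digest = hashlib.sha256(event).digest()

# Members sign the digest of the event they observed (one member is offline here).
signatures = [m.sign(digest) for m in members[:4]]

# The relayer verifies each signature and counts agreement.
valid = 0
for pk, sig in zip(pubkeys, signatures):
    try:
        pk.verify(sig, digest)   # raises InvalidSignature on mismatch
        valid += 1
    except Exception:
        pass

if valid >= THRESHOLD:
    print(f"{valid}/{COMMITTEE_SIZE} signatures collected; pass to ZK aggregation prover")
else:
    print("not enough signatures; wait for more committee members")
```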

In ZKML scenarios, especially those requiring immediate execution (such as AI trading bots), achieving fast finality is particularly critical. To this end, Polyhedra is exploring the introduction of TEE (Trusted Execution Environment) into its verifiable AI stack, so that AI inference runs in a TEE-equipped computing environment, ensuring the trustworthiness of data and the verifiability of execution results.

We plan to use Google Vertex AI's model library to prove that a model's output indeed came from a Vertex AI API call, or to prove through TEE that a result came from the official ChatGPT or DeepSeek API service. Although this requires a degree of trust in the platform (such as Google or OpenAI), we believe this is an acceptable engineering assumption, especially when combined with ZKML, whose proofs are verified purely on-chain.

If users want to run custom models, we can also deploy them in Nvidia GPU instances that support TEE (Google Cloud recently added support for this). This mechanism can run in parallel with ZKML proofs: ZK proofs can be generated when the system is challenged, or generated with a delay as a supplementary insurance mechanism. For example, in an insurance mechanism for AI trading bots or agents, the operator can generate ZKML proofs before the insurance limit is reached in order to release the security deposit, increasing the agent system's transaction throughput and letting it handle more tasks under the same insurance limit.

Non-blockchain interoperability: a trusted channel connecting chains and the real world

Polyhedra has been exploring the application of zero-knowledge proofs (ZKP) to non-blockchain scenarios. Representative cases include the proof-of-reserves system for centralized exchanges (CEX), which achieves auditability through privacy-preserving verification over the exchange's database. We are also actively promoting interoperability between chains and off-chain systems, such as providing trusted price oracles for traditional financial assets like stocks, gold, and silver to AI trading bots and real-world assets (RWA), or enabling on-chain identity authentication through social logins such as Google and Auth0.

Off-chain data can be divided into two categories:

  1. JWT (JSON Web Token) signed data: can be verified directly on the EVM (although the gas cost is high) or after being wrapped in a ZK proof; Polyhedra adopts the latter approach (the sketch after this list shows the underlying signature check).

  2. TLS (Transport Layer Security) data: This can be proven with ZK-TLS, but current technology requires users to trust the MPC nodes used to reconstruct the TLS keys. ZK-TLS performs well for simple web pages or API data, but is more expensive for complex web pages or PDF documents.
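For the JWT category, the statement being proven is essentially a standard RS256 signature check over the token. A minimal off-chain version of that check is sketched below using the PyJWT library; in Polyhedra's approach this verification would be expressed inside a ZK circuit rather than run directly, and the key, issuer, and audience values here are placeholders.

```python
# Minimal sketch of the signature check that a ZK proof over a JWT would attest to.
# Uses PyJWT (pip install pyjwt[crypto]); issuer, audience, and key are placeholders.

import jwt

ID_TOKEN = "<jwt issued by the identity provider>"   # placeholder token
ISSUER_PUBLIC_KEY_PEM = b"-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----"  # placeholder key

def verify_jwt(token: str, public_key_pem: bytes) -> dict:
    """Verify the RS256 signature and standard claims, returning the payload."""
    return jwt.decode(
        token,
        public_key_pem,
        algorithms=["RS256"],
        audience="example-client-id",          # placeholder audience
        issuer="https://accounts.google.com",  # example issuer for a Google sign-in token
    )

# claims = verify_jwt(ID_TOKEN, ISSUER_PUBLIC_KEY_PEM)
# print(claims["sub"], claims["email"])
```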

Against this background, Polyhedra introduced the ZK-TEE approach. We can run a TLS client inside a trusted execution environment (TEE), generate a trusted-computing attestation through Google Confidential Computing, and then convert it into a ZK-TEE proof for on-chain verification, enabling secure reading and verification of off-chain data.

The TLS client is a general-purpose component that runs efficiently and supports almost all TLS connection scenarios (see the sketch after this list), including but not limited to:

  • Visit financial websites such as Nasdaq to get stock prices

  • Operate stock accounts on behalf of users to conduct buy and sell transactions

  • Transfer fiat currency through online banking to achieve cross-domain bridging with traditional bank accounts

  • Search and book flights and hotels

  • Get real-time cryptocurrency prices from multiple centralized exchanges (CEX) and decentralized exchanges (DEX)
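To make the TEE-hosted TLS client concrete, the sketch below fetches a price quote over HTTPS and commits to the exact bytes received by hashing them; in a real deployment that hash would be placed into the attestation's report data so an on-chain verifier can link the attestation to this specific response. The URL and JSON fields are illustrative only, not an actual data provider.

```python
# Sketch of a TLS fetch running inside a TEE, binding the response to the
# attestation by hashing the exact bytes received. URL and fields are illustrative.

import hashlib
import json
import urllib.request

QUOTE_URL = "https://example.com/api/quote?symbol=AAPL"   # placeholder endpoint

def fetch_and_commit(url: str) -> tuple[dict, str]:
    with urllib.request.urlopen(url, timeout=10) as resp:  # TLS handled by the standard library
        raw = resp.read()                                   # exact bytes received from the server
    commitment = hashlib.sha256(raw).hexdigest()            # would go into attestation report data
    return json.loads(raw), commitment

# quote, commitment = fetch_and_commit(QUOTE_URL)
# print(quote["price"], commitment)
```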

In AI scenarios, the trustworthiness of non-blockchain data is particularly important. Today's large language models (LLMs) not only receive user input but also dynamically fetch external data using search engines, LangGraph, the Model Context Protocol (MCP), and so on. Through TEE, we can verify the authenticity of these data sources. For example, when solving mathematical problems, AI agents can call Wolfram Mathematica or the remote Wolfram Alpha API, and use TEE to ensure the integrity of these calls and their results.

Privacy protection: building a trusted AI reasoning environment

Currently, zkBridge mainly uses ZK proofs to improve security and is not integrated with privacy chains. However, with the rise of AI applications (such as on-chain AI agents and trading bots), privacy has become a core requirement. We are exploring several key application scenarios:

In zero-knowledge machine learning (ZKML), one core application is verifying correct inference of private models. Such models usually keep their parameters confidential (users do not need to know the specific parameters), and sometimes even hide commercial secrets such as the model architecture. Private models are very common: OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini all fall into this category. Most state-of-the-art models need to remain closed source for now, since their high training and development costs must be recouped commercially, and this situation is expected to continue for several years.

Although keeping models private has its rationale, in an automated environment where model outputs directly trigger on-chain operations (such as token purchases and sales), especially when large amounts of money are involved, users often require stronger traceability and verifiability guarantees.

ZKML solves this by proving that the same model is used in benchmarking and in live inference. This is particularly important for AI trading bots: after users select a model based on historical backtesting data, they can be sure the model keeps running with the same parameters without learning the parameter values themselves, balancing verification requirements and privacy protection.
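One simple way to picture the "same model in backtesting and in production" guarantee is a commitment to the model parameters: the operator publishes a hash of the weights once, and every later zkML proof shows that inference was run against weights matching that hash. The sketch below shows only the commitment step, with a plain SHA-256 hash standing in for whatever commitment scheme the circuit actually uses.

```python
# Toy parameter commitment: hash the serialized weights once, then reuse the
# commitment when checking later inference proofs. SHA-256 stands in for the
# circuit's real commitment scheme; the weight encoding here is illustrative.

import hashlib
import struct

def commit_to_weights(weights: list[float]) -> str:
    serialized = b"".join(struct.pack("<d", w) for w in weights)  # fixed little-endian encoding
    return hashlib.sha256(serialized).hexdigest()

benchmark_commitment = commit_to_weights([0.12, -3.4, 7.77])   # published after backtesting
live_commitment = commit_to_weights([0.12, -3.4, 7.77])        # recomputed at inference time
assert live_commitment == benchmark_commitment                  # same parameters are in use
print(benchmark_commitment)
```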

We are also exploring Trusted Execution Environment (TEE) technology because it can protect the privacy of user input in ways ZK cannot: ZKML requires the prover to hold all the information, including model parameters and user input. Although it is theoretically possible to combine zero knowledge with multi-party computation (MPC), for large models this combination makes proving dramatically slower, because not only the model inference but the entire proof generation must be carried out inside the MPC. In addition, MPC itself carries the risk of node collusion. TEE can effectively address these problems.

TEE also plays a key role in privacy protection for Model Context Protocol (MCP) servers. The Verifiable MCP Marketplace that Polyhedra is developing will list MCP servers that achieve verifiability, traceability, and security through ZKP or TEE. When a model runs in the TEE-equipped Proof Cloud and only calls MCP services marked as privacy-certified, we can ensure that user input data remains encrypted inside the TEE environment and is never leaked.

How does a TEE work?

In the sections above, we discussed Polyhedra's technical vision and how the Trusted Execution Environment (TEE) and zero-knowledge proofs (ZKP) together form key pillars of our product system. Next, we introduce how a TEE works.

A TEE achieves sealed-off protection of computation and data by creating a secure enclave, but this is only its basic capability. Its revolutionary value lies in achieving public verifiability through the remote attestation mechanism.

The remote attestation workflow consists of three key steps:

  • Enclave initialization phase: The CPU performs integrity measurement on the executable binary code in the secure enclave

  • Attestation generation phase: a publicly verifiable attestation is generated, backed by the AMD Key Distribution Service (KDS) or the Intel Attestation Service (IAS)

  • Certificate chain verification phase: the attestation contains a signature and a certificate chain whose root certificate is issued by AMD or Intel, respectively

When the attestation verifies against the root certificate, two core facts are confirmed (a simplified verification sketch follows this list):

  • Computations are actually performed in enclaves on AMD/Intel chips equipped with TEE technology

  • The data covered by the signature, such as the program measurement and model output, is authentic and trustworthy
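A stripped-down version of the certificate-chain check behind these two facts is sketched below: walk from the leaf certificate up to a pinned AMD or Intel root, verifying each signature along the way. Real SEV-SNP and TDX attestation reports also carry measurements and report data that must be checked separately; this sketch, which assumes RSA-signed PEM certificates and the `cryptography` package, covers only the chain itself.

```python
# Simplified certificate-chain walk for a TEE attestation: verify each
# certificate's signature with its issuer's key, ending at a pinned vendor root.
# Real SEV-SNP/TDX verification also checks measurements, report data, expiry, etc.

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

def verify_chain(pem_chain: list[bytes], pinned_root_pem: bytes) -> bool:
    """pem_chain is ordered leaf -> intermediate(s); the root is pinned locally."""
    certs = [x509.load_pem_x509_certificate(pem) for pem in pem_chain]
    certs.append(x509.load_pem_x509_certificate(pinned_root_pem))
    for child, issuer in zip(certs, certs[1:]):
        issuer.public_key().verify(              # raises an exception on an invalid signature
            child.signature,
            child.tbs_certificate_bytes,
            padding.PKCS1v15(),                  # assumes RSA-signed certificates for this sketch
            child.signature_hash_algorithm,
        )
    return True

# ok = verify_chain([leaf_pem, intermediate_pem], amd_or_intel_root_pem)
```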

Polyhedra's breakthrough is that, through ZK-TEE proof technology, TEE attestations are compressed into succinct proofs that can be verified efficiently on-chain. Taking zkBridge as an example, we will soon demonstrate how this technology provides security for multiple products.

SGX, SEV and TDX: Selection and comparison of TEE technologies

In the process of building a product system supported by trusted execution environments, Polyhedra conducted in-depth research on the three current mainstream TEE implementations, namely:

  • Intel SGX

  • AMD SEV

  • Intel TDX

The following is our comparative analysis of these three technologies, as well as our thoughts on the final selection:

SGX: The earliest TEE technology to be implemented

Intel SGX is one of the longest-standing trusted execution environment (TEE) solutions currently available. However, among the mainstream cloud service providers, only Microsoft Azure supports SGX, while Google Cloud and AWS have turned to supporting alternatives such as SEV and TDX.

The core mechanism of SGX is to carve out an isolated memory region (an enclave) inside the Intel processor, with the CPU directly managing and controlling access to that memory. The CPU measures the code loaded into the enclave and, via the EREPORT instruction, produces a trusted report attesting that the binary running inside is in the deterministic, reproducible state established when the enclave was created.

This model has some notable low-level properties:

  • Developers must ensure that the programs and data within the enclave are in a consistent state when they are created;

  • Ensure that the enclave acts as the root of trust and does not rely on any unverified external input or dynamically loaded code.

This low-level design made SGX unfriendly to developers for almost the entire past decade. Early SGX development required writing enclave programs in C/C++, which lacked support for common operating-system features such as multithreading and often required major changes to the original application and even its dependent libraries.

To simplify development, developers have in recent years deployed SGX applications on library operating systems such as Gramine. Gramine provides an OS-like abstraction that helps developers adapt to the SGX environment without completely rewriting their code. However, Gramine must still be used with extreme caution: if some commonly used Linux libraries are not fully supported, the program may still misbehave, and pursuing performance optimization often requires substantial changes to the underlying implementation.

It is worth noting that the industry has already produced more practical alternatives: AMD SEV and Intel TDX. While maintaining security and reliability, they avoid many of the development barriers SGX faces and provide greater flexibility and practicality for building confidential-computing infrastructure.

SEV and TDX: Trusted Computing Solutions for Virtualization

Unlike SGX, which only protects a small memory region called an enclave, AMD SEV and Intel TDX are designed to protect an entire virtual machine (VM) running on an untrusted host. The logic behind this design comes from the architecture of modern cloud infrastructure: cloud providers such as Google Cloud typically run bare-metal hypervisors on physical servers to schedule virtual machines from multiple customers on the same machine.

These hypervisors widely use hardware-level virtualization technologies, such as Intel VT-x or AMD-V, as an alternative to the less performant software virtualization methods that are gradually being phased out.

In other words, in a cloud computing environment, the CPU itself has the ability to identify and isolate virtual machines and hypervisors. The CPU not only provides a data isolation mechanism across virtual machines to ensure fair resource allocation, but also virtualizes and isolates network and disk access. In fact, the hypervisor is increasingly being simplified to a software front-end interface, and the underlying hardware CPU is the one that actually undertakes the task of virtual machine resource management.

Therefore, it becomes natural and efficient to deploy protected execution environments (enclaves) on top of cloud virtual machines, which is the core design goal of SEV and TDX.

Specifically, these two technologies ensure that virtual machines still have trusted computing capabilities in untrusted environments through the following mechanisms:

  • Memory encryption and integrity protection: SEV and TDX encrypt virtual machine memory at the hardware layer and add an integrity verification mechanism. Even if the underlying hypervisor is maliciously tampered with, it cannot access or modify the data content inside the virtual machine.

  • Remote Attestation Mechanism: They provide remote attestation capabilities for virtual machines by integrating a Trusted Platform Module (TPM). The TPM measures the initial state of the virtual machine at startup and generates a signed attestation to ensure that the virtual machine runs in a verifiable and trusted environment.

Although SEV and TDX provide powerful VM-level protection, a key challenge remains in actual deployment, and it is a common pitfall in many projects: by default, the TPM only measures the VM's operating-system boot sequence and does not cover the specific applications running inside it.

There are two common approaches to ensuring that remote attestation covers application logic running inside a virtual machine:

Method 1: Hardcode the application into the OS

This method requires the virtual machine to boot into a hardened operating system that can only execute the target application, fundamentally eliminating the possibility of running any unexpected program. The recommended practice is to use the Linux dm-verity mechanism: at startup, the system only mounts a read-only disk image whose hash is public and fixed, ensuring that all executable files are verified and cannot be tampered with or replaced. The attestation can then be verified through AMD KDS or Intel IAS.

The complexity of this approach is that the application must be rebuilt as part of a read-only disk image. If temporary writable storage is required, an in-memory filesystem or encrypted, integrity-checked external storage must be used. The whole system, including the application, OS image, and kernel, must also be packaged as a Unified Kernel Image (UKI). Although the implementation cost is high, it provides a highly deterministic trusted execution environment.
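Conceptually, dm-verity's guarantee comes from a hash tree over the read-only image whose single root hash is what gets measured and attested; if any block changes, the root hash changes. The sketch below computes a toy two-level hash tree over an image's blocks to illustrate that idea; it is not the actual dm-verity on-disk format, which is produced by veritysetup and verified by the kernel.

```python
# Toy hash tree over a read-only image, illustrating how a single root hash
# commits to every block. This is NOT the real dm-verity format; the real
# metadata is produced by veritysetup and checked by the kernel at runtime.

import hashlib

BLOCK_SIZE = 4096

def block_hashes(image: bytes) -> list[bytes]:
    blocks = [image[i:i + BLOCK_SIZE] for i in range(0, len(image), BLOCK_SIZE)]
    return [hashlib.sha256(b).digest() for b in blocks]

def root_hash(image: bytes) -> str:
    # Two-level tree: hash every block, then hash the concatenation of those hashes.
    return hashlib.sha256(b"".join(block_hashes(image))).hexdigest()

image = b"\x00" * (3 * BLOCK_SIZE)              # stand-in for a read-only disk image
print(root_hash(image))                          # this value would be pinned and attested
tampered = image[:-1] + b"\x01"                  # flip one byte anywhere...
assert root_hash(tampered) != root_hash(image)   # ...and the root hash no longer matches
```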

Method 2: Use Google Confidential Space (recommended)

Google Confidential Space provides a hosted solution that is essentially an abstraction and packaging of Method 1. The core idea is the same: guarantee the trustworthiness of the entire virtual machine environment, but developers only need to build a standard Docker container image rather than manually configure the kernel and disk image. Google is responsible for the underlying hardened OS image and the remote attestation configuration, which greatly simplifies development.
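In Confidential Space, the attestation a workload obtains is a JWT signed by Google, and a verifier checks both that signature and the claims describing the exact container image that booted. The sketch below shows that shape using PyJWT's JWK client; the issuer URL, JWKS path, and claim names (for example the container image digest) are assumptions in this sketch and should be confirmed against Google's documentation.

```python
# Sketch of verifying a Confidential Space attestation token (a Google-signed JWT)
# and checking which container image booted. Issuer/JWKS URL and claim names are
# assumptions to confirm against Google's documentation.

import jwt

ISSUER = "https://confidentialcomputing.googleapis.com"   # assumed issuer
JWKS_URL = ISSUER + "/.well-known/jwks"                   # assumed JWKS endpoint
EXPECTED_IMAGE_DIGEST = "sha256:<digest of the audited container image>"

def verify_attestation_token(token: str, audience: str) -> dict:
    signing_key = jwt.PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    claims = jwt.decode(token, signing_key.key, algorithms=["RS256"],
                        audience=audience, issuer=ISSUER)
    # The claim path for the image digest is an assumption in this sketch.
    image_digest = claims["submods"]["container"]["image_digest"]
    if image_digest != EXPECTED_IMAGE_DIGEST:
        raise ValueError(f"unexpected workload image: {image_digest}")
    return claims

# claims = verify_attestation_token(token_from_workload, audience="my-verifier")
```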

We will further share the technical solution implementation based on Confidential Space in future blogs, including details such as key management and deployment strategies.

Summary of TEE applications in the Polyhedra product system

1. Bridges

In the bridge protocol, Polyhedra will add additional security checks on top of the existing zero-knowledge proofs or state committee. These checks may include running a light client (where available) or interacting with the corresponding chain through multiple standardized RPC API services to ensure the security and reliability of data transmission.
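One example of the "multiple standardized RPC services" check is sketched below: the same block is requested from several independent providers and only accepted if every provider reports the same block hash. The endpoints are placeholders; `eth_getBlockByNumber` is the standard Ethereum JSON-RPC method.

```python
# Cross-check one block across several independent RPC providers and accept it
# only if all of them agree on the block hash. Endpoints are placeholders.

import json
import urllib.request

RPC_ENDPOINTS = [
    "https://rpc-provider-a.example.com",
    "https://rpc-provider-b.example.com",
    "https://rpc-provider-c.example.com",
]

def get_block_hash(endpoint: str, block_number: int) -> str:
    payload = json.dumps({
        "jsonrpc": "2.0", "id": 1,
        "method": "eth_getBlockByNumber",
        "params": [hex(block_number), False],
    }).encode()
    req = urllib.request.Request(endpoint, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)["result"]["hash"]

def cross_checked_hash(block_number: int) -> str:
    hashes = {get_block_hash(e, block_number) for e in RPC_ENDPOINTS}
    if len(hashes) != 1:
        raise RuntimeError(f"RPC providers disagree on block {block_number}: {hashes}")
    return hashes.pop()

# print(cross_checked_hash(19_000_000))
```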

2. Zero-Knowledge Machine Learning (ZKML)

For ZKML, Polyhedra might run a TEE agent that calls the Google Vertex API or external AI API services for inference and verifies that the model output came from the Vertex API and has not been tampered with; or run the AI model directly under confidential computing on Nvidia GPUs without using the Google model library. Note that in this solution, privacy protection comes as a byproduct: we can easily hide the model's parameters, inputs, and outputs to keep the data private.
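A schematic of such a TEE inference agent: inside the enclave, the agent calls a model API, then hashes the model identifier, prompt, and raw response together so the attestation can vouch for the exact output that was returned. The endpoint, authentication, and report layout are all illustrative; this is not the Vertex AI client library.

```python
# Schematic TEE inference agent: call a model API from inside the enclave and
# bind model id, prompt, and raw response into one digest for the attestation.
# Endpoint, auth, and report layout are illustrative placeholders.

import hashlib
import json
import urllib.request

API_URL = "https://example-model-provider.test/v1/generate"   # placeholder endpoint

def attested_inference(model_id: str, prompt: str, api_token: str) -> dict:
    payload = json.dumps({"model": model_id, "prompt": prompt}).encode()
    req = urllib.request.Request(API_URL, data=payload, headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_token}",
    })
    with urllib.request.urlopen(req, timeout=30) as resp:
        raw_response = resp.read()

    digest = hashlib.sha256(
        model_id.encode() + b"\x00" + prompt.encode() + b"\x00" + raw_response
    ).hexdigest()
    return {
        "output": json.loads(raw_response),
        "binding": digest,   # would be placed into the TEE attestation's report data
    }

# result = attested_inference("example-llm-8b", "What is the ETH price trend?", "<token>")
```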

3. Verifiable AI Marketplace

For the Verifiable AI Marketplace, including MCP servers, Polyhedra adopts a similar strategy: run a TEE proxy, or run the application directly when possible. For example, for an MCP service that requires mathematical solving, we can either build a TEE proxy that connects to Wolfram Alpha or run a local copy of Mathematica directly. In some scenarios a TEE proxy is mandatory, such as when interacting with a flight-booking system, Slack, or a search engine. Notably, the TEE proxy can also turn a non-MCP-compliant service (such as an arbitrary Web2 API) into an MCP-compliant one by translating the architecture and formats between services.
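The proxy pattern mentioned above can be pictured as a thin translation layer running inside the TEE: it receives an MCP-style tool call, forwards it to an ordinary Web2 endpoint, and returns the result in the tool-call format. The sketch below is schematic only; it does not implement the actual MCP wire protocol, and the Web2 endpoint and schemas are placeholders.

```python
# Schematic TEE proxy turning a plain Web2 API into an MCP-style tool.
# This is not the real MCP wire protocol; endpoint and schemas are placeholders.

import json
import urllib.parse
import urllib.request

WEB2_MATH_API = "https://example-math-service.test/solve"   # placeholder Web2 endpoint

def handle_tool_call(tool_call: dict) -> dict:
    """Translate {'tool': 'math.solve', 'arguments': {...}} into a Web2 request."""
    if tool_call.get("tool") != "math.solve":
        return {"is_error": True, "content": "unknown tool"}
    query = urllib.parse.urlencode(tool_call.get("arguments", {}))
    with urllib.request.urlopen(f"{WEB2_MATH_API}?{query}", timeout=15) as resp:
        answer = json.load(resp)
    # Wrap the Web2 response in an MCP-style tool result.
    return {"is_error": False, "content": json.dumps(answer)}

# print(handle_tool_call({"tool": "math.solve", "arguments": {"expr": "integrate x^2 dx"}}))
```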

Outlook: TEE will accelerate product delivery and bring multiple benefits

The introduction of TEE technology is an important addition to the Polyhedra technology stack. We will first deploy it in the cross-chain bridge module and gradually extend it to AI inference and the decentralized service marketplace. TEE will significantly reduce user costs, accelerate transaction finality, enable interoperability with more ecosystems, and provide users with new privacy-protection features.
