Detailed explanation of DFINITY's revolutionary blockchain computer architecture
DfinityFun
Guest columnist
2021-01-18 10:05
This article is about 4,586 words; reading it takes about 7 minutes.
A revolutionary decentralized full-stack architecture that integrates front end and back end, hosted end to end by software containers.

Editor's Note: This article comes from DfinityFun (ID: DfinityFun) and is reprinted by Odaily with authorization.

Compiled by: Blockpunk
Community: Nutshell Universe (ID: DfinityFun)

The Internet Computer is the name of DFINITY's network. DFINITY aims to use an open protocol built on top of the IP protocol to pool the resources of all computers running that protocol, and to apply blockchain, cryptography, and other techniques to virtualize a secure, reliable software operating environment that requires no traditional components. Parallel processing, dynamic scaling, and flexible governance: can these become DFINITY's latecomer advantages?

", here is DFINITY's official introduction to the infrastructure of the Internet computer development platform, and how DFINITY's innovative next-generation smart contract-"software container" allows decentralized Internet services to scale to billions of users .

Network Nervous System (NNS)

The Internet Computer is built on a blockchain computing protocol called the Internet Computer Protocol (ICP). The network itself is designed as a hierarchy. At the bottom are independent data centers hosting dedicated hardware nodes. These node machines are grouped together to create subnets. Subnets, in turn, host software containers (canisters): interoperable basic computing units, uploaded by users, that bundle code and state.

Unique to the ICP is its Network Nervous System (NNS), which controls, configures, and manages the network.

Ultimately, data center owners and neuron owners acquire ICP tokens and trade with container owners and managers. Container owners and managers convert these tokens into cycles, which are then used to recharge their containers (i.e., to pay gas fees). When containers perform computation or store memory, they consume cycles throughout the process; after the cycles are exhausted, more must be replenished for them to continue running. This is the deflationary part.
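
The token-and-cycles flow described above can be pictured as a simple accounting loop. The following is a minimal, hypothetical sketch of that loop in Rust; the conversion rate, costs, and types are illustrative assumptions, not the real ledger or pricing.

```rust
/// Minimal, hypothetical model of the cycles economy described above.
/// Conversion rates and costs are illustrative only.
struct Canister {
    cycles_balance: u128,
}

impl Canister {
    /// Top up the canister by converting ICP tokens into cycles.
    fn top_up(&mut self, icp_tokens: u64, cycles_per_token: u128) {
        self.cycles_balance += icp_tokens as u128 * cycles_per_token;
    }

    /// Charge the canister for computation and storage; consumed cycles leave
    /// its balance, which is the deflationary part of the system.
    fn charge(&mut self, compute_cycles: u128, storage_cycles: u128) -> Result<(), &'static str> {
        let cost = compute_cycles + storage_cycles;
        if self.cycles_balance < cost {
            return Err("out of cycles: canister stops until it is topped up again");
        }
        self.cycles_balance -= cost;
        Ok(())
    }
}

fn main() {
    let mut canister = Canister { cycles_balance: 0 };
    canister.top_up(2, 1_000_000); // assume 1 ICP buys 1,000,000 cycles here
    canister.charge(300_000, 50_000).expect("enough cycles for this call");
    println!("remaining cycles: {}", canister.cycles_balance);
}
```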

Subnets

To understand the Internet Computer, you must first understand the concept of subnets, which are the basic building blocks of the entire network. Subnets are responsible for hosting different subsets of the network's software containers. Under NNS control, node machines drawn from different data centers are gathered together to create subnets. These nodes cooperate via ICP to symmetrically replicate the data and computation of the software containers they host.

NNS incorporates nodes from independent data centers when constructing subnets. By using Byzantine fault tolerance technology and cryptography technology developed by DFINITY, the ICP protocol can ensure that the subnet is tamper-proof and never goes down. Although subnets are a fundamental part of the overall Internet computer network, they are user- and software-agnostic. Users and container software only need to know the identity of the container to call its shared functions.

This transparency is an extension of the basic design principles of the Internet. On the (traditional) Internet, if a user wants to connect to some software, they need to know the IP address of the computer running it and the TCP port the software is listening on. On the Internet Computer, if a user wants to call a function, they only need to know the identity of the container and the function signature. In the same way that the Internet created seamless connections, DFINITY creates a seamless world for software, where any authorized software can directly call any other software, with no knowledge of the underlying workings of the network.

The Internet Computer preserves this subnet transparency in other ways as well. The NNS can split and merge subnets to balance load across the network, and hosted containers are unaffected by these operations.

In this example, we have a virtual subnet ABC that hosts 11 software containers. The NNS decides that the subnet should be split to scale performance, in which case subnet ABC continues to host containers 1–6 while a new subnet XYZ is created to host containers 7–11. During the split, none of the containers involved experiences any interruption of service.
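
The split can be modeled as moving a contiguous range of containers to a freshly created subnet while the originals keep serving traffic. The sketch below is a simplified illustration under that assumption; the IDs are made up, and the real operation involves state transfer and consensus rather than a single function call.

```rust
/// Simplified illustration of an NNS-directed subnet split.
/// IDs are made up; the real protocol moves replicated state, not just labels.
#[derive(Debug)]
struct Subnet {
    name: String,
    canisters: Vec<u32>,
}

/// Split `subnet`, leaving the first `keep` canisters in place and moving the
/// rest to a new subnet named `new_name`. Hosted canisters see no interruption.
fn split_subnet(subnet: &mut Subnet, keep: usize, new_name: &str) -> Subnet {
    let moved = subnet.canisters.split_off(keep);
    Subnet { name: new_name.to_string(), canisters: moved }
}

fn main() {
    let mut abc = Subnet { name: "ABC".into(), canisters: (1..=11).collect() };
    let xyz = split_subnet(&mut abc, 6, "XYZ");
    println!("{:?}", abc); // ABC keeps canisters 1-6
    println!("{:?}", xyz); // XYZ hosts canisters 7-11
}
```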

Each subnet type provides certain properties and capabilities to your container. For example, if your container is hosted on a data subnet, it can process calls but cannot call other containers; to call other containers, you need a system subnet. A trusted subnet is required if you want your container to hold a balance of ICP tokens or to send cycles to other containers. For these reasons, governance containers should only be hosted on trusted subnets.
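
One way to picture these per-type restrictions is as a capability check performed before an operation is allowed. The enum and rules below are a hypothetical sketch based only on the description above; the actual subnet types and their exact permissions are defined by the protocol and the NNS.

```rust
/// Hypothetical capability model for the subnet types described above.
#[derive(Clone, Copy, PartialEq)]
enum SubnetType {
    Data,    // can process calls, but cannot call other canisters
    System,  // can additionally make cross-canister calls
    Trusted, // can additionally hold ICP balances and send cycles
}

#[derive(Clone, Copy)]
enum Operation {
    HandleCall,
    CallOtherCanister,
    HoldIcpOrSendCycles,
}

/// Check whether a canister hosted on `subnet` may perform `op`.
fn allowed(subnet: SubnetType, op: Operation) -> bool {
    match op {
        Operation::HandleCall => true,
        Operation::CallOtherCanister => subnet != SubnetType::Data,
        Operation::HoldIcpOrSendCycles => subnet == SubnetType::Trusted,
    }
}

fn main() {
    assert!(allowed(SubnetType::Data, Operation::HandleCall));
    assert!(!allowed(SubnetType::Data, Operation::CallOtherCanister));
    assert!(allowed(SubnetType::System, Operation::CallOtherCanister));
    assert!(allowed(SubnetType::Trusted, Operation::HoldIcpOrSendCycles));
    println!("capability checks passed");
}
```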

The functionality of a subnet derives in part from the fault tolerance of the underlying layer. This is exciting territory in fundamental science that we hope to share with the public soon, including new cryptographic techniques that allow the NNS to repair broken subnets.

Containers

The purpose of a subnet is to host containers. Containers run in dedicated hypervisors and interact with other containers through publicly specified APIs. A container consists of WebAssembly bytecode that runs on a WebAssembly virtual machine, plus the in-memory data pages that the bytecode operates on. Typically, this bytecode is produced by compiling a programming language such as Rust or Motoko, and it includes a runtime that lets developers interact with the API easily.

On the Internet Computer, there are two ways to call a function shared by a container: an update call or a query call. The essential difference is that when you call a function as an update call, any changes it makes to the data in the container's memory persist, whereas when you call the function as a query call, any changes it makes to memory are discarded after execution.

Changes made by update calls are persistent, and they are tamper-proof because the ICP blockchain computer protocol runs them on every node in the subnet. As you might expect, these calls are applied in a consistent global order within a fully deterministic execution environment, so every node reaches the same state. An update call completes in under two seconds.

In contrast, query calls do not persist changes: any modifications they make to memory are discarded after they run. They are very efficient and cheap, taking only a few milliseconds to complete, because they do not run on every node in the subnet; this also means they provide a lower level of security.
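
The distinction can be modeled as "mutate the real state" versus "run against a throwaway copy". The plain-Rust sketch below illustrates only that semantic; it is not the canister SDK's API, where exported functions are instead marked as update or query methods.

```rust
use std::collections::HashMap;

/// Illustrative model of a canister's in-memory state.
#[derive(Clone, Default)]
struct CanisterState {
    pages: HashMap<String, String>,
}

struct Canister {
    state: CanisterState,
}

impl Canister {
    /// Update call: runs on every node in the subnet in a globally agreed
    /// order; changes to memory persist (and take on the order of seconds).
    fn update_call(&mut self, key: &str, value: &str) {
        self.state.pages.insert(key.to_string(), value.to_string());
    }

    /// Query call: runs on a single nearby replica against a scratch copy;
    /// any changes it makes are discarded, so it is fast but less secure.
    fn query_call(&self, key: &str) -> Option<String> {
        let scratch = self.state.clone(); // work on a throwaway copy
        scratch.pages.get(key).cloned()
    }
}

fn main() {
    let mut canister = Canister { state: CanisterState::default() };
    canister.update_call("greeting", "hello"); // persisted
    assert_eq!(canister.query_call("greeting").as_deref(), Some("hello"));
    println!("update persisted, query served from a copy");
}
```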

In this example, the user is requesting a customized news page, and the newly generated content is immediately available.

Automatic Storage, Orthogonal Persistence

The way developers preserve data is one of the coolest things about the Internet Computer. Developers don't have to think about persistence: they just write code, and persistence happens automatically. This is called orthogonal persistence, and it works because the Internet Computer keeps the containers' memory data pages for them.

You might be wondering how this all works. A container processes the update calls that change its memory data pages one at a time, which means that at any given moment there can be only one thread of execution inside the container.

Finally, containers can create new containers, and containers can fork themselves. To create a new container, you simply specify its WebAssembly bytecode, and its in-memory data pages start out empty. When a container forks, the newly spawned copy is identical, including its memory pages. Forks are extremely powerful when it comes to creating scalable Internet services.
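
Creation and forking can be pictured as two constructors: one that takes the bytecode and starts with empty memory, and one that clones both the bytecode and the current memory pages. The following is a toy model of that behavior, not the actual management interface.

```rust
/// Toy model of canister creation and forking.
/// Real canisters hold WebAssembly bytecode and orthogonally persisted pages.
#[derive(Clone)]
struct Canister {
    wasm_bytecode: Vec<u8>,
    memory_pages: Vec<Vec<u8>>, // persisted automatically between calls
}

impl Canister {
    /// Create a new canister: supply bytecode, memory starts empty.
    fn create(wasm_bytecode: Vec<u8>) -> Self {
        Canister { wasm_bytecode, memory_pages: Vec::new() }
    }

    /// Fork: the newly spawned copy is identical, including its memory pages.
    fn fork(&self) -> Self {
        self.clone()
    }
}

fn main() {
    let mut parent = Canister::create(vec![0x00, 0x61, 0x73, 0x6d]); // "\0asm" header
    parent.memory_pages.push(vec![42; 64]); // state accumulated by update calls
    let child = parent.fork();
    assert_eq!(child.memory_pages, parent.memory_pages);
    println!("fork carries over {} memory page(s)", child.memory_pages.len());
}
```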

Scalability

Now let's talk about the scalability of Internet Computer services. Each type of container has its own capacity limits. For example, a container can only store 4 GB of memory pages due to limitations of the WebAssembly implementation. Therefore, when we want to create Internet services that can scale to billions of users, we must use a multi-container architecture.

We might be tempted to create a special container that spawns many copies of a container and then shard user content across those copies to create a scalable Internet service. However, this architecture is oversimplified, for several reasons.

It is true that each additional container increases overall memory capacity, and that adding containers increases overall update and query call throughput. But we cannot scale query calls for a specific piece of user content. Whenever we increase system capacity by adding more container shards, we need to rebalance user content, which is not a truly scalable architecture; nor is there a good way to serve data from the replica closest to the end user at query time. What we need is a two-tier architecture of front-end containers and back-end containers.
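
To make the rebalancing weakness concrete, the sketch below shows the naive single-tier approach: user content is hashed across back-end bucket containers, and adding buckets changes where existing keys map. This is a hypothetical illustration of the problem, not a recommended design.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hypothetical sketch of sharding user content across back-end bucket
/// canisters once a single canister's ~4 GB memory limit is not enough.
struct Bucket {
    id: u32,
}

/// Route a user key to one of the back-end buckets by hashing it.
/// Note the weakness discussed above: adding buckets changes the mapping,
/// so existing content has to be rebalanced.
fn route<'a>(buckets: &'a [Bucket], user_key: &str) -> &'a Bucket {
    let mut hasher = DefaultHasher::new();
    user_key.hash(&mut hasher);
    let index = (hasher.finish() % buckets.len() as u64) as usize;
    &buckets[index]
}

fn main() {
    let buckets: Vec<Bucket> = (0..4).map(|id| Bucket { id }).collect();
    let chosen = route(&buckets, "alice");
    println!("content for 'alice' lives in bucket {}", chosen.id);
}
```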

The Internet Computer provides some interesting features to anchor end users to front-end containers, such as allowing a domain name to be mapped to multiple front-end containers via the NNS (somewhat like DNS). When an end user resolves such a domain name, the Internet Computer looks across all the replica nodes in every subnet hosting those front-end containers and returns the IP address of the replica node closest to the user. As a result, end users execute their query calls on the closest replica, which reduces network latency and improves the user experience, providing the benefits of edge computing without a content delivery network (CDN).
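
Conceptually, this resolution step just picks the replica nearest to the user. The sketch below models that selection with made-up latency numbers; the real mechanism is performed by the network itself, not by application code.

```rust
/// Illustrative model of resolving a name to the closest replica node.
/// Latencies and addresses are made up; the network does this in practice.
struct ReplicaNode {
    ip: &'static str,
    round_trip_ms: u32, // measured distance to the end user
}

/// Return the IP of the replica node closest to the user, across all subnets
/// hosting the front-end canisters mapped to the requested domain name.
fn resolve_closest(replicas: &[ReplicaNode]) -> Option<&'static str> {
    replicas.iter().min_by_key(|r| r.round_trip_ms).map(|r| r.ip)
}

fn main() {
    let replicas = [
        ReplicaNode { ip: "198.51.100.7", round_trip_ms: 120 },
        ReplicaNode { ip: "203.0.113.9", round_trip_ms: 15 },
    ];
    println!("queries go to {:?}", resolve_closest(&replicas));
}
```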

To take full advantage of this feature, we need a classic two-tier structure involving a front-end container and a back-end bucket container. In this example, the web browser wants to load a profile picture.

Step 1: The web browser is mapped to a front-end container running on a subnet with nearby nodes. The browser then submits query calls to a nearby node to retrieve the photo.

Step 2: The front-end container sends a cross-container query call to the data container that stores the photo.

Step 3: If the query response returned by the data storage container involves static content (such as a photo), the data can be cached. In that case, the replica node running the front-end container's query call can store the query response (i.e., the data) in its query cache.

Step 4: The query-call caching mechanism is completely transparent to the front-end container code. Once the front-end container invoked by the user has gathered all the necessary information, it returns the content via query call responses or HTTP requests.
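
Steps 1–4 amount to a read-through cache kept on the replica that serves the query. The sketch below is a toy model of that idea; the cache key, the notion of "static" content, and the back-end fetch are simplified assumptions.

```rust
use std::collections::HashMap;

/// Toy model of the replica-side query cache described in steps 1-4.
struct ReplicaQueryCache {
    cache: HashMap<String, Vec<u8>>,
}

impl ReplicaQueryCache {
    fn new() -> Self {
        ReplicaQueryCache { cache: HashMap::new() }
    }

    /// Serve a query call: return cached static content if present, otherwise
    /// fetch it from the back-end data canister (simulated here) and cache it.
    fn serve(&mut self, request: &str, fetch_from_backend: impl Fn() -> Vec<u8>) -> Vec<u8> {
        if let Some(bytes) = self.cache.get(request) {
            return bytes.clone(); // cache hit: no cross-canister call needed
        }
        let bytes = fetch_from_backend();
        self.cache.insert(request.to_string(), bytes.clone());
        bytes
    }
}

fn main() {
    let mut cache = ReplicaQueryCache::new();
    let photo = || -> Vec<u8> { vec![0xFF, 0xD8, 0xFF] }; // pretend JPEG bytes
    cache.serve("profile_photo(alice)", photo); // miss: goes to the back end
    cache.serve("profile_photo(alice)", photo); // hit: served by the nearby replica
    println!("cached entries: {}", cache.cache.len());
}
```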

Over time, a node's query cache accumulates static content and generated data of interest to nearby users, giving them a faster and better user experience. In this way, the native edge architecture of the Internet Computer provides the advantages of a content delivery network without requiring developers to do anything special or to enlist a separate proprietary service.

Once the UX/UI running on a web browser or smartphone has determined which front-end container is responsible for coordinating changes to some content or data, it can submit update calls through standard interfaces to modify the content or data.

This front-end container then typically makes more cross-container update calls to implement the desired changes.

Open Internet Services

To wrap up, let's discuss the design of open Internet services using a two-tier architecture of front-end containers and back-end data containers. First, when you write your front-end container, you can simplify your work by using an existing library called BigMap.
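
BigMap's actual interface is not shown in this article, so the sketch below uses a hypothetical stand-in: a key-value map whose entries are transparently spread across many back-end bucket containers, which is the role the text assigns to BigMap. The names and methods here are illustrative, not BigMap's real API.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

/// Hypothetical stand-in for a BigMap-style library: a key-value map whose
/// entries are spread across many back-end bucket canisters, so front-end
/// code can ignore the per-canister memory limit. Not BigMap's real API.
struct ShardedMap {
    buckets: Vec<HashMap<String, Vec<u8>>>, // each HashMap stands in for a data canister
}

impl ShardedMap {
    fn new(bucket_count: usize) -> Self {
        ShardedMap { buckets: vec![HashMap::new(); bucket_count] }
    }

    /// Pick the bucket canister responsible for a key.
    fn bucket_for(&self, key: &str) -> usize {
        let mut hasher = DefaultHasher::new();
        key.hash(&mut hasher);
        (hasher.finish() % self.buckets.len() as u64) as usize
    }

    fn put(&mut self, key: &str, value: Vec<u8>) {
        let index = self.bucket_for(key);
        self.buckets[index].insert(key.to_string(), value);
    }

    fn get(&self, key: &str) -> Option<&Vec<u8>> {
        self.buckets[self.bucket_for(key)].get(key)
    }
}

fn main() {
    let mut profiles = ShardedMap::new(8);
    profiles.put("alice/photo", vec![1, 2, 3]);
    assert!(profiles.get("alice/photo").is_some());
    println!("value stored in bucket {}", profiles.bucket_for("alice/photo"));
}
```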
