Research report: 30,000 words to understand the Rust language
星球君的朋友们 · Odaily senior contributor
2021-06-23 06:18
This article is about 43,880 words; reading it in full takes about 63 minutes.
Rust has been voted the most loved language in the StackOverflow developer survey for five years in a row.

This article comes from Jue Xue Society, by Zhang Handong, reproduced with authorization.

  • Content directory:

  • Foreword

  1. Reliability

  2. Productivity

  3. Rust and open source

  4. Weaknesses of the Rust language

  • Rust ecosystem base libraries and toolchain

  1. Rust industry application inventory

  2. Cloud native

  3. Operating systems

  4. Machine learning

  5. Games

  6. Client development

  7. Blockchain / digital currency

  • Companies using Rust in production

  1. Domestic

  2. npm

  • About the author


Foreword

The Rust language is a general-purpose, system-level programming language known for having no GC while guaranteeing memory safety, concurrency safety, and high performance. It was developed privately by Graydon Hoare starting in 2008 and sponsored by Mozilla from 2009. Version 0.1.0 was first released in 2010 for the development of the Servo engine, and version 1.0 was released on May 15, 2015.

In the six years since its release, as of 2021, Rust has been steadily rising and has gradually matured and stabilized. From 2016 to 2021, Rust has been voted the most loved language in the StackOverflow developer survey [1].

On February 9, 2021, the Rust Foundation was announced. Huawei, AWS, Google, Microsoft, Mozilla, Facebook and other leading giants in the technology industry have joined the Rust Foundation as platinum members to promote and develop the Rust language on a global scale.

So what is the charm of the Rust language that can make developers and giant companies so interested?

Note: The data listed in this article are all from public content on the Internet.


Get to know the Rust language


  • Programming language design has long involved a conflict between two seemingly irreconcilable aspirations:

  • Safety. We want a strong type system that statically rules out large classes of errors. We want automatic memory management. We want data encapsulation, so that we can enforce invariants on the private representation of objects and ensure they cannot be corrupted by untrusted code.

  • Control. At least for systems programs such as web browsers, operating systems, or game engines, where performance or resource constraints matter, we want to understand the byte-level representation of data. We want to optimize the time and space usage of our programs with low-level programming techniques. We want to work on bare metal when needed.

However, according to the conventional wisdom, you can't have your cake and eat it too. Languages like Java give us strong safety guarantees, but at the cost of sacrificing control over the underlying machine. As a result, for many systems programming applications the only realistic option is a language like C or C++ that provides fine-grained control over resource management. But this control comes at a high cost. For example, Microsoft recently reported that 70% of the security bugs they patch are memory safety violations [2], problems that a strong type system could have ruled out. Likewise, Mozilla reported that the vast majority of critical bugs they found in Firefox were memory-related [3].

Wouldn't it be nice to have the best of both worlds: safe systems programming while retaining control over the underlying machine? Thus the Rust language was born.

The official website introduces Rust like this: a language that empowers everyone to build reliable and efficient software.


  • There are three major advantages of the Rust language that deserve everyone's attention:

  • High performance. Rust is blazingly fast and extremely memory efficient. With no runtime and no garbage collector, it can power performance-critical services, run on embedded devices, and integrate easily with other languages.

  • Reliability. Rust's rich type system and ownership model guarantee memory safety and thread safety, allowing you to eliminate many classes of bugs at compile time.

  • Productivity. Rust has excellent documentation, a friendly compiler with clear error messages, and first-class tooling: an integrated package manager and build tool, intelligent auto-completion and type checking in many editors, automatic code formatting, and more.

Rust is low-level enough that it can be optimized like C for maximum performance if necessary.

At a higher abstraction level, memory management is more convenient, the available libraries are richer, and each line of Rust code does more, but left unchecked this can lead to program bloat.

However, Rust programs also optimize very well, sometimes better than C. C is well suited to writing minimal code byte by byte and pointer by pointer, while Rust has the power to efficiently combine multiple functions, or even whole libraries, together.

The Rust language also supports zero-cost asynchronous programming for high concurrency, and is arguably the first system-level language to support asynchronous programming natively.


Rust vs C

Rust vs Cpp

Rust vs Go

High performance comparable to C/Cpp

Programs written in Rust should have similar runtime speed and memory usage as programs written in C, but the two languages ​​have different overall programming styles and it is difficult to generalize about their performance.


  • In general:

  • Abstraction is a double-edged sword. Rust is more abstract than C, and abstraction hides some code that is not optimally tuned, which means the default performance of Rust code is not always the best achievable. So Rust code may need to be tuned to reach C-like performance, and Unsafe Rust is the high-performance escape hatch.

  • Rust is thread-safe by default, eliminating data races and making multi-threaded concurrent programming more practical.


Rust is indeed faster than C in some respects. In theory, the C language can do anything, but in practice C's abstraction ability is lower, it is less modern, and development efficiency is lower. C can be made faster than Rust in these areas only if developers have unlimited time and effort.

Because C is a sufficient benchmark for high performance, let's look at the similarities and differences between C and Rust. If you are familiar with C/Cpp, you can extend this comparison to Cpp and Rust as well.

Both Rust and C are direct abstractions over hardware

Both Rust and C are direct abstractions over the hardware, and both can be thought of as a kind of "portable assembler". Both control the memory layout of data structures, integer sizes, stack versus heap allocation, and pointer indirection, and both generally translate into understandable machine code; the compiler rarely inserts "magic".

Even though Rust has higher-level constructs than C, such as iterators, traits, and smart pointers, they are designed to optimize predictably down to simple machine code (so-called "zero-cost abstractions").

The memory layout of Rust's types is simple; for example, a growable String or Vec is exactly {byte*, capacity, length}. Rust has no concept like C++'s move or copy constructors, so passing an object is guaranteed to be no more complicated than passing a pointer or doing a memcpy.

Rust's borrow checking is simply static analysis of references performed by the compiler. Lifetime information is erased during compilation and never appears in the generated machine code.

Instead of traditional exception handling, Rust uses return-value-based error handling. You can also use panics to handle abnormal situations, much like exceptions in Cpp; panicking can be disabled at compile time (panic = abort), but even then Rust panics do not mix with Cpp exceptions or longjmp.

Same LLVM backend

Rust has great integration with LLVM, so it supports link-time optimizations, including ThinLTO, and even inlining across C/C++/Rust language boundaries. There is also support for Profile-guided Optimization (PGO). Although rustc generates LLVM IR more verbose than clang, the optimizer still handles it well.

C compiles faster with GCC than with LLVM/Clang, and people in the Rust community are now developing a GCC front end for Rust.

In theory, because Rust has stricter immutability and aliasing rules than C, it should optimize better than C, but in practice it does not yet have that effect. Optimizations that would go beyond C are still a work in progress in LLVM, so Rust has not reached its full potential.

Both allow manual optimization, with some minor exceptions

Rust code is low-level and predictable enough that you can hand-tune what assembly it will be optimized into.

Rust supports SIMD, with fine control over inlining and calling conventions.

Rust and C are similar enough that some analysis tools for C can often be used for Rust.

In general, if performance is absolutely critical, and hand-optimization is needed to squeeze every last ounce of performance, then optimizing Rust is no different than optimizing C.


  • But for a few low-level features, Rust has no particularly good alternative:

  • goto. Rust does not provide goto, but labeled loop breaks can be used instead. In C, goto is typically used for cleanup, which Rust does not need thanks to deterministic destructors. There is, however, a non-standard goto extension that is useful for performance optimization.

  • Stack allocation with alloca and C99 variable-length arrays can save memory and reduce the number of allocations. But these are controversial even in C, so Rust stays away from them.


Some overhead of Rust compared to C


  • If not hand-optimized, Rust also has some overhead because of its abstractions:

  • Rust lacks implicit type conversions and indexes only with usize, which pushes developers toward that type even where a smaller one would do. On 64-bit platforms, indexing with usize is easier to optimize without worrying about undefined behavior, but the extra bits may put more pressure on registers and memory, whereas in C you can choose a 32-bit index type.

  • Strings in Rust always carry a pointer and a length, while many functions in C code accept only a pointer and no size.

  • With index-based iteration like for i in 0..len { arr[i] }, performance depends on the LLVM optimizer being able to prove that the index stays within the length. Sometimes it can't, and the bounds checks also inhibit auto-vectorization (see the sketch after this list).

  • C is relatively unconstrained and has many "clever" memory tricks that are not as freely available in Rust. But Rust still gives a lot of control over memory allocation and can do things like memory pools, merging multiple allocations into one, preallocating space, and more.

  • Developers unfamiliar with Rust's borrow checker may reach for Clone to escape having to use a reference.

  • I/O in Rust's standard library is unbuffered, so it needs to be wrapped in a BufWriter. This is why some people find their Rust code no faster than Python: 99% of the time is spent on I/O.
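To make the last two points above concrete, here is a minimal sketch (the function names and output file are made up for illustration): an indexed loop that may keep per-element bounds checks, the iterator form that avoids them, and unbuffered file I/O wrapped in a BufWriter.

```rust
use std::fs::File;
use std::io::{BufWriter, Write};

fn sum_indexed(arr: &[u64]) -> u64 {
    let mut total = 0;
    // Each arr[i] may carry a bounds check unless LLVM can prove i < arr.len().
    for i in 0..arr.len() {
        total += arr[i];
    }
    total
}

fn sum_iter(arr: &[u64]) -> u64 {
    // The iterator already knows the slice length, so no per-element check is needed.
    arr.iter().sum()
}

fn write_lines(lines: &[String]) -> std::io::Result<()> {
    // File I/O in std is unbuffered; BufWriter batches the many small writes.
    let mut out = BufWriter::new(File::create("out.txt")?);
    for line in lines {
        writeln!(out, "{}", line)?;
    }
    out.flush()
}

fn main() -> std::io::Result<()> {
    let nums = vec![1u64, 2, 3, 4];
    assert_eq!(sum_indexed(&nums), sum_iter(&nums));
    write_lines(&["hello".to_string(), "world".to_string()])
}
```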


executable file size

Every operating system ships some built-in C standard library, roughly 30MB of code, which C executables get to use "for free". A small "Hello World" C executable can't actually print anything by itself; it just calls the printf provided by the operating system.

Not so with Rust: Rust executables bundle their own standard library (300KB or more). Fortunately, this is only a one-time overhead, and it can be reduced. For embedded development the standard library can be turned off with "no-std", in which case Rust generates "bare metal" code.

On a per-function basis, Rust code is about the same size as C, but there is the problem of "generic bloat": generic functions get an optimized copy for every type they are used with, so it is possible to end up with eight versions of the same function. The cargo-bloat[4] tool helps find these problems.

Working with dependencies in Rust is very easy. As with JS/npm, small single-purpose packages are the norm, and the dependency tree does grow. The cargo-tree command is very useful for pruning it.

  • Where Rust slightly beats C

  • To hide implementation details, C libraries often return opaque pointers to their data structures and ensure there is exactly one copy of each instance, which costs a heap allocation and a pointer indirection. Rust's built-in privacy, single-ownership rules, and coding conventions allow libraries to expose their objects without indirection, so the caller decides whether to place them on the heap or on the stack; objects on the stack can be optimized aggressively, even optimized away entirely.

  • By default, Rust can inline functions from the standard library, dependencies, and other compilation units.

  • Rust rearranges structure fields to optimize memory layout.

  • Strings carry size information, making length checks fast, and allow taking substrings in place.

  • Similar to C++ templates, generic functions in Rust are monomorphized, producing copies of different types, so functions like sort and containers like HashMap are always optimized for the corresponding type. With C, you have to choose between modifying macros or less efficient functions that handle void* and runtime variable sizes.

  • Rust's iterators can be combined into chains that are optimized together as a single unit, so instead of making a series of separate passes that each rewrite the same buffer, the adapters can be chained into one call (see the sketch after this list).

  • Likewise, through the Read and Write interfaces, you can take some unbuffered stream data, run a CRC check on it, then transcode, compress, and write it to the network, all in a single pass. It should be possible to do this in C, but without generics and traits it would be very hard.

  • The Rust standard library has built-in high-quality containers and optimized data structures, which are more convenient to use than C.

  • Rust's serde ecosystem includes one of the fastest JSON parsers in the world, and it is very pleasant to use.
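As an illustration of such a chain (the numbers and closures are invented for the example), the adapters below fuse into a single loop with no intermediate collections:

```rust
fn sum_of_even_squares(nums: &[i64]) -> i64 {
    // filter, map and sum are compiled together as one unit;
    // nothing is allocated for the intermediate steps.
    nums.iter()
        .filter(|&&n| n % 2 == 0)
        .map(|&n| n * n)
        .sum()
}

fn main() {
    let data = [1, 2, 3, 4, 5, 6];
    println!("{}", sum_of_even_squares(&data)); // 4 + 16 + 36 = 56
}
```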


Rust's clear advantages over C


  • Mainly the following points:

  • Rust eliminates data races, is thread-safe by construction, and liberates multi-threaded productivity. This is where Rust is clearly superior to languages such as C/Cpp.

  • The Rust language supports asynchronous high-concurrency programming.

  • Rust supports safe compile-time evaluation.

Thread safety

Rust enforces thread safety for all code and data, even in third-party libraries, and even if their authors paid no attention to thread safety: everything either upholds specific thread-safety guarantees or cannot be used across threads. When you write code that is not thread-safe, the compiler points out exactly where it is unsafe.

There are already many libraries in the Rust ecosystem, such as data parallelism, thread pools, queues, tasks, lock-free data structures, etc. With the help of such components, and the strong safety net of the type system, concurrency/parallelization of Rust programs is quite easy. In some cases it is fine to use par_iter instead of iter, and as long as it compiles, it should work! This isn't always a linear speedup (Amdahl's law is brutal), but it's often a 2-3x speedup with relatively little work.
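A hedged sketch of the iter → par_iter switch mentioned above, using the third-party rayon crate; the workload is a made-up calculation, and the point is only that the parallel version reuses the same adapter chain while the compiler checks that the closure is safe to share across threads.

```rust
use rayon::prelude::*;

fn main() {
    let inputs: Vec<u64> = (0..1_000_000).collect();

    // Sequential version.
    let serial: u64 = inputs.iter().map(|&n| n.wrapping_mul(n) % 7).sum();

    // Parallel version: same adapters, but rayon splits the work across a thread pool.
    // The closure and data must be Send/Sync, which the compiler verifies for us.
    let parallel: u64 = inputs.par_iter().map(|&n| n.wrapping_mul(n) % 7).sum();

    assert_eq!(serial, parallel);
}
```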

Aside: Amdahl's Law, a rule of thumb in computer science named after Gene Amdahl, gives the theoretical limit on how much a program can be sped up by parallelization when part of the work remains serial.

There is an interesting difference between Rust and C when it comes to documenting thread safety.

Rust has a glossary of terms used to describe specific aspects of thread safety, such as Send and Sync, guards and cells.

A C library has no standard vocabulary for expressing guarantees such as "it can be allocated on one thread and freed on another, but cannot be used from both threads at the same time".

Rust describes thread safety in terms of data types, which generalizes to all functions that use them.

For the C language, thread safety involves only individual functions and configuration flags.

Rust's guarantees are generally provided at compile time, or at least are unconditional.

For the C language, it is common to say "this is thread-safe only if the turboblub option is set to 7".

Asynchronous concurrency

The Rust language supports the async/await asynchronous programming model.

This programming model is built around a concept called a Future, known as a Promise in JavaScript. A Future represents a value that has not been resolved yet; you can set up various operations to perform on it before it resolves to that value. Many languages do not do much more with Futures than this, but Rust's implementation supports features such as combinators, and on top of them the more ergonomic async/await syntax.

Futures can represent all kinds of things and are especially useful for representing asynchronous I/O: when you make a network request you immediately get a Future, and once the request completes it resolves to whatever value the response contains. A timeout is also just a Future that resolves after a certain amount of time. Even CPU-intensive work that runs on a thread pool can be represented as a Future, which resolves when the thread pool finishes the work.

The problem with Futures is that in most languages they are callback-based: you specify the callback to run once the Future resolves, and the Future is responsible for working out when it has resolved and running your callback, whatever it is. All the inconvenience of that model is baked in, and it is very hard to use efficiently: many developers have found they end up writing a lot of allocation code and relying on dynamic dispatch, since every callback you schedule needs its own storage, e.g. trait objects and heap allocations. Allocations and dynamic dispatch end up everywhere. That approach does not satisfy the zero-cost abstraction principle; if you used it, the result would be much slower than writing the code by hand, so why would you use it at all?

Rust takes a different approach. Instead of the Future scheduling callbacks, a component called an executor polls the Future; the Future either answers Pending ("not ready yet") or, once resolved, Ready. This model has many advantages. One of them is that cancelling a Future is very easy: you simply stop holding it. With a callback-based approach, it is much harder to cancel work that has already been scheduled.
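For reference, this is roughly the shape of the poll model described above. The trait below mirrors the standard library's std::future::Future; ReadyNow is an invented future that resolves immediately, and in real code an executor supplies the Context and keeps calling poll.

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

// An executor repeatedly calls poll; the future answers Pending ("not ready yet")
// or Ready(value) once it has resolved. This mirrors std::future::Future.
trait SimpleFuture {
    type Output;
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
}

// A made-up future that is ready as soon as it is polled.
struct ReadyNow(i32);

impl SimpleFuture for ReadyNow {
    type Output = i32;

    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<i32> {
        Poll::Ready(self.0)
    }
}
```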

At the same time, it lets us establish really clear abstraction boundaries between different parts of the program. Most other Future libraries come with an event loop, which is also how your Futures are scheduled to perform I/O, but you have no real control over it.

In Rust, however, the boundaries between components are very clean: the executor schedules your Futures, the reactor handles all the I/O, and then there is your actual code. End users can decide for themselves which executor and which reactor to use, and thus keep more control, which really matters in a systems programming language.

And the most important advantage of this model is that it allows these state-machine Futures to be implemented with true zero cost. When the Future code you write is compiled to native code, it becomes a state machine in which each I/O suspension point is one variant, and each variant stores exactly the state needed to resume execution.

What is really useful about the Future abstraction is that other APIs can be built on top of it. State machines can be built by applying combinator methods to Futures, which work much like the adapters on Iterators (e.g. filter, map). But that style has drawbacks, above all poor readability, such as deeply nested callbacks, which is why the async/await syntax was introduced.

In the current Rust ecosystem, there is already a mature tokio[5] runtime that supports asynchronous I/O such as epoll. If you want io_uring, you can use Glommio[6], or wait for tokio's io_uring support. You can even build your own runtime out of the async_executor[7] and async-io[8] crates provided by the smol runtime.
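A minimal async/await sketch on the tokio runtime mentioned above; it assumes a tokio dependency with its macros and timer features enabled, and the task bodies and delays are placeholders for real asynchronous I/O.

```rust
use std::time::Duration;
use tokio::time::sleep;

async fn fetch(id: u32) -> u32 {
    // Stand-in for real asynchronous I/O such as a network request.
    sleep(Duration::from_millis(50)).await;
    id * 2
}

#[tokio::main]
async fn main() {
    // Spawn two tasks; they run concurrently on the runtime's executor.
    let a = tokio::spawn(fetch(1));
    let b = tokio::spawn(fetch(2));

    // Awaiting a JoinHandle yields a Result; unwrap panics only if the task panicked.
    let (a, b) = (a.await.unwrap(), b.await.unwrap());
    println!("{} {}", a, b);
}
```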

Compile-time evaluation

Rust supports compile-time constant evaluation (const fn), similar to Cpp's constexpr. This is a clear advantage over C.
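A small illustration of compile-time evaluation with const fn (the function and constant are made up): the value is computed by the compiler and can be used wherever a constant is required, such as an array length.

```rust
// Evaluated at compile time when used in a const context.
const fn kib(n: usize) -> usize {
    n * 1024
}

const BUF_SIZE: usize = kib(64);

fn main() {
    // The array length must be a compile-time constant; BUF_SIZE qualifies.
    let buf = [0u8; BUF_SIZE];
    println!("{}", buf.len());
}
```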


Reliability

In June 2020, five scholars from three universities published a study at the ACM SIGPLAN conference PLDI'20 that comprehensively surveys safety defects in open source Rust projects in recent years. The study examines five software systems developed in Rust, five widely used Rust libraries, and two vulnerability databases, covering a total of 850 uses of unsafe code, 70 memory safety bugs, and 100 thread safety bugs.

In their investigation, the researchers looked not only at bugs reported in vulnerability databases and publicly reported bugs in the software, but also at the commit histories of all the open source repositories. Through manual analysis, they classified the type of bug each commit fixes into the corresponding memory safety or thread safety category. All of the investigated issues are collected in a public Git repository: https://github.com/system-pclub/rust-study[9]



  • Key findings of the survey:

  • Safe Rust code is very effective at preventing both spatial and temporal memory safety issues. All memory safety issues found in stable releases involve unsafe code.

  • Although memory safety issues all involve unsafe code, a large number of them also involve safe code at the same time; some even stem from coding errors in the safe code rather than in the unsafe code.

  • Thread safety issues, both blocking and non-blocking, can occur in safe code, even when that code fully complies with the rules of the Rust language.

  • A large number of problems are caused by developers not correctly understanding Rust's lifetime rules.

  • New defect detection tools need to be built for the typical problem patterns of the Rust language.

So, behind this survey, how is Rust's safety actually guaranteed? What is Unsafe Rust, and why does it exist?

Ownership: Rust Language Memory Safety Mechanisms

Rust's design draws heavily on academic research into safe systems programming. In particular, compared with other mainstream languages, Rust's design is distinguished by its adoption of an ownership type system (often referred to in the academic literature as an affine or substructural type system [10]).

The ownership mechanism is the safe programming semantics and model through which the Rust language, with the help of its type system, expresses its idea of memory safety.



  • The memory safety problems addressed by the ownership mechanism include:

  • Null pointer dereference.

  • Use of uninitialized memory.

  • Use-after-free, i.e. use of a dangling pointer.

  • Buffer overflow, such as an array index out of bounds.

  • Double free: freeing a pointer that has already been freed or was never allocated.

Note that memory leaks are not a memory safety issue, so Rust doesn't solve memory leaks either.


  • In order to ensure memory safety, the Rust language establishes a strict safe memory management model:

  • The ownership system. Every allocation has a pointer that holds exclusive ownership of it; only when that owner is destroyed can the corresponding memory be freed.

  • Borrowing and lifetimes. Every variable has a lifetime, and once the lifetime ends the variable is automatically released. For borrows, annotating lifetime parameters that the compiler checks prevents dangling pointers, i.e. use-after-free.

The ownership system also includes the RAII mechanism borrowed from modern C++, which is the cornerstone of Rust's GC-free but safe memory management.


  • Once the safe memory management model is established, it can be expressed through the type system. Rust borrows the following features from Haskell's type system:

  • No null pointers

  • Immutability by default

  • Expression-based language

  • Higher-order functions

  • Algebraic data types

  • Pattern matching

  • Generics

  • Traits and associated types

  • Local type inference


  • To achieve memory safety, Rust also has the following distinctive features:

  • Affine types, which Rust's ownership uses to express move semantics.

  • Borrowing and lifetimes.

With the power of the type system, the Rust compiler checks at compile time whether code satisfies the safe memory model, detecting memory safety problems before the program ever runs and effectively preventing undefined behavior.

Memory safety bugs and concurrency safety bugs have the same underlying cause: improper access to memory. Rust therefore also addresses concurrency safety with the same ownership-equipped strong type system: the compiler statically analyses multi-threaded concurrent code at compile time and rejects all data races, as the sketch below illustrates.
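Two tiny illustrations of these compile-time checks (the functions are invented for the example); each commented-out line is rejected by the compiler instead of failing at run time.

```rust
use std::thread;

fn use_after_move() {
    let s = String::from("hello");
    let t = s;              // ownership of the heap buffer moves to `t`
    // println!("{}", s);   // error[E0382]: borrow of moved value: `s`
    println!("{}", t);
}

fn no_data_race() {
    let mut data = vec![1, 2, 3];
    // The spawned thread takes ownership of `data`; using it from the main
    // thread at the same time is rejected.
    let handle = thread::spawn(move || data.push(4));
    // data.push(5);        // error[E0382]: borrow of moved value: `data`
    handle.join().unwrap();
}

fn main() {
    use_after_move();
    no_data_race();
}
```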

Unsafe Rust: dividing security boundaries

In order to integrate well with the existing ecosystem, Rust supports a very convenient and zero-cost FFI mechanism, is compatible with C-ABI, and divides the Rust language into Safe Rust and Unsafe Rust from the language architecture level.

Among them, Unsafe Rust specializes in dealing with external systems, such as operating system kernels. The reason for this division is that the Rust compiler's checking and tracking has limits: it cannot verify the safety of foreign language interfaces, so that safety can only be guaranteed by the developers themselves.
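A minimal FFI sketch: declaring and calling a function from the C standard library through the C ABI. The call must sit in an unsafe block precisely because the compiler cannot check the foreign implementation.

```rust
// Declare a foreign function with the C calling convention.
// `abs` comes from the C standard library, which is already linked in.
extern "C" {
    fn abs(input: i32) -> i32;
}

fn main() {
    // The compiler cannot verify the foreign code, so the call is unsafe.
    let x = unsafe { abs(-3) };
    println!("{}", x); // 3
}
```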



  • Rust's ultimate goal is not to completely eliminate every danger point, because at some level we need to be able to access memory and other resources directly. Rather, Rust's goal is to abstract all the unsafe elements away. When thinking about safety, you need to think about the "attack surface", i.e. which parts of a program an attacker can interact with. Something like a parser is a big attack surface because:

  • parsers are usually reachable by attackers;

  • the data an attacker provides directly drives the complex logic that parsing usually requires.

You can break this down further by splitting the traditional attack surface into the "attack surface" proper (the parts an attacker can affect directly) and a "safety layer": the code the attack surface depends on, which the attacker cannot reach but which may still contain bugs. In C the two are the same: arrays in C are not abstract at all, so if you read a variable number of items you must maintain every invariant yourself, because there is no safe layer to fall back on, and errors can occur anywhere.

Therefore, Rust provides the unsafe keyword and unsafe block, which explicitly distinguishes safe code from unsafe code accessing external interfaces, and also provides convenience for developers to debug errors. Safe Rust means that developers will trust the compiler to ensure safety at compile time, while Unsafe Rust means that the compiler will trust the developer's ability to ensure safety.

Where there are people, there are bugs. Through the exquisite design of the Rust language, the parts that the machine can check and control are handed over to the compiler for execution, while the parts that the machine cannot control are handed over to the developers themselves.

What Safe Rust guarantees is that the compiler maximizes memory safety at compile time and prevents undefined behavior from happening.

Unsafe Rust is used to remind developers that the code developed at this time may cause undefined behavior, please be careful! Humans and compilers share the same "security model", trust each other, and harmonize with each other, so as to maximize the elimination of the possibility of human bugs.

Unsafe Rust is Rust's safety boundary. The world is unsafe by nature; you cannot avoid that. Some people argue that because Unsafe Rust exists, Rust is not necessarily safer than C/C++. Unsafe Rust, like C/C++, does depend on people to guarantee its safety, but it places higher demands on them.

It also gives developers an explicit Unsafe boundary, which is itself a safety boundary: it marks the minefields in your code, so that during team code review problems can be found faster. That in itself is a kind of safety. In C++, by contrast, effectively every line you write is unsafe, because there is no obvious boundary like Rust's unsafe blocks.


  • The following are five simple rules of thumb for using Unsafe that I have summarized, so everyone can make their own trade-offs (a sketch follows this list):

  • Use Safe Rust whenever Safe Rust is sufficient;

  • Unsafe Rust may be used for performance;

  • When using Unsafe Rust, make sure it cannot produce UB, try to identify its safety boundary, and abstract it into a safe method;

  • If it cannot be abstracted into a safe method, mark it as unsafe and document the conditions under which UB would occur;

  • Focus code review effort on the Unsafe code.
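As a sketch of the third rule, here is an unsafe operation (unchecked slice indexing) hidden behind a safe function whose precondition is checked before crossing the boundary; the function is invented for illustration.

```rust
/// Returns the middle element of a slice, or None if the slice is empty.
/// The bounds check is done once up front, so the unchecked access cannot
/// go out of range and the function as a whole stays safe to call.
fn middle(values: &[u32]) -> Option<u32> {
    if values.is_empty() {
        return None;
    }
    let idx = values.len() / 2;
    // SAFETY: the slice is non-empty, and len / 2 < len holds for len >= 1,
    // so idx is always in bounds.
    Some(unsafe { *values.get_unchecked(idx) })
}

fn main() {
    assert_eq!(middle(&[1, 2, 3]), Some(2));
    assert_eq!(middle(&[]), None);
}
```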


Productivity


  • The productivity of a programming language can be evaluated along three dimensions:

  • Learning curve.

  • Engineering capability of the language.

  • Domain ecosystem.


Learning curve

  • How steep the learning curve is depends on one's background. Below are the areas learners with different backgrounds should pay attention to when studying Rust.


  • Completely zero-based developers: master the knowledge structure of the basic computer system, understand the abstraction of Rust language and hardware/OS layer, understand the core concepts of Rust language and its abstract mode, and choose a certain applicable field of Rust language for practical training. Improve the proficiency and depth of understanding of the Rust language through practice, while mastering domain knowledge.


  • C language foundation: since C developers are less used to the abstractions of high-level languages, they should focus on understanding and mastering Rust's ownership mechanism, including ownership semantics, lifetimes, and borrow checking; Rust's abstraction patterns, mainly types and traits; and Rust's own OOP and functional language features.


  • C++ foundation: C++ developers have a good understanding of the ownership of the Rust language, and focus on Rust's abstract patterns and functional language features.


  • Have a Java/Python/Ruby foundation: focus on understanding and overcoming Rust's ownership mechanism, abstract patterns, and functional programming language features.


  • Go foundation: It is easier for Go language developers to understand Rust's type and trait abstraction patterns, but Go is also a GC language, so the ownership mechanism and functional language features are the focus of their learning.


  • Haskell foundation: Haskell developers understand Rust's functional features well and mainly need to conquer the ownership mechanism and the OOP-style language features.

Therefore, for developers with a certain foundation, several key concepts to master in learning the Rust language are:

1. The Rust ownership mechanism, including ownership semantics, lifetimes, and borrow checking


  • The ownership mechanism is the core feature of the Rust language. It guarantees memory safety without a garbage collector, so for developers used to GC, understanding ownership is the most critical part. Remember these three rules:

  • Every value in Rust has a variable called its owner.

  • A value has one and only one owner at a time.

  • When the owner goes out of scope, the value is dropped. This also brings in concepts such as lifetimes and borrow checking, which are a relatively hard nut to crack.

2. Rust's abstraction patterns, mainly types and traits. Traits borrow from Haskell's typeclasses; a trait is an abstraction over the behavior of types and can loosely be compared to interfaces in other programming languages. It tells the compiler what functionality a type must provide. Coherence must be respected when implementing traits: conflicting implementations cannot be defined.

3. OOP language features. Familiar with the four common features of object-oriented programming (OOP): object, encapsulation, inheritance, and polymorphism, you can better understand some features of Rust, such as impl, pub, trait, etc.

4. Functional language features. The design of the Rust language is deeply influenced by functional programming. People who are not fond of mathematics may be put off when they see "functional", because the hallmark of functional languages is to express computation as nested function calls as much as possible. In Rust, mastering closures and iterators is the key to writing high-performance Rust code in a functional style (see the sketch below).
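A small functional-style sketch of these closures and iterators; the data and names are made up.

```rust
fn main() {
    let words = vec!["rust", "is", "fast", "and", "safe"];

    // The closure captures `min_len` from its environment.
    let min_len = 3;
    let long_words: Vec<&str> = words
        .iter()
        .copied()
        .filter(|w| w.len() >= min_len)
        .collect();

    // Adapters compose lazily; nothing runs until collect/sum consumes the chain.
    let total_chars: usize = long_words.iter().map(|w| w.len()).sum();

    println!("{:?} ({} chars)", long_words, total_chars);
}
```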

Language Engineering Capabilities

Rust is ready for developing industrial-strength products.

To ensure safety, Rust introduces a strong type system and ownership system, which guarantees not only memory safety, but also concurrency safety without sacrificing performance.

In order to support hard real-time systems, Rust borrows deterministic destructors, RAII, and smart pointers from C++ for automatic and deterministic memory management, thereby avoiding a GC and the "stop the world" pauses that come with one. Although these items are borrowed from C++, they are more concise to use in Rust.

In order to ensure the robustness of programs, Rust re-examined the error handling mechanism. There are generally three kinds of abnormal situations in daily development: failures, errors, and exceptions. In a procedural language like C, developers can only handle errors through return values, goto and similar statements, and there is no unified error handling mechanism. Higher-level languages such as C++ and Java introduce exception handling, but they provide no syntax that effectively distinguishes normal logic from error handling; everything is handled globally, which pushes developers to treat every abnormal situation as an exception. That is not conducive to building robust systems, and exception handling also carries a relatively large performance overhead.


  • The Rust language provides a dedicated mechanism for each of these three kinds of abnormal situations, letting developers choose according to the circumstances:

  • For failures, assertion macros are available.

  • For errors, Rust provides layered, return-value-based error handling: Option is used where a value may be absent, while Result is used for errors that can reasonably be handled and need to be propagated.

  • For exceptions, Rust treats them as problems that cannot reasonably be recovered from and provides a thread panic mechanism, so that when one occurs the thread can exit safely.

Through such an exquisite design, developers can reasonably handle abnormal situations at a finer granularity, and finally write a more robust system.
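A brief sketch of the three mechanisms just described: assertions for failures, Option/Result for recoverable errors, and panic for unrecoverable ones; the parsing task and function names are invented for the example.

```rust
use std::num::ParseIntError;

// A recoverable error: the caller decides what to do with the Result.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.trim().parse::<u16>()
}

// Possible absence of a value is expressed with Option instead of a null pointer.
fn first_even(nums: &[i32]) -> Option<i32> {
    nums.iter().copied().find(|n| n % 2 == 0)
}

fn main() {
    // Failures caught during development: assertions.
    assert!(first_even(&[1, 2, 3]).is_some());

    // Errors: handled explicitly through the returned Result.
    match parse_port("8080") {
        Ok(p) => println!("port {}", p),
        Err(e) => eprintln!("invalid port: {}", e),
    }

    // Exceptions: panic! marks a state the program cannot reasonably recover from.
    let p = parse_port("8080").unwrap_or_else(|_| panic!("port must be numeric"));
    println!("using port {}", p);
}
```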

To provide flexible architectural capabilities, Rust uses traits as the basis of zero-cost abstraction. Traits favor composition over inheritance, giving developers the flexibility to architect both tightly coupled and loosely coupled systems. Rust also provides generics to express type abstraction; combined with traits, this gives Rust static polymorphism and code reuse. Generics and traits let you apply various design patterns flexibly to reshape a system's architecture.
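A short, made-up illustration of composition with traits and generics: the generic function is monomorphized for each concrete type, so both calls below are statically dispatched.

```rust
trait Area {
    fn area(&self) -> f64;
}

struct Circle { r: f64 }
struct Rect { w: f64, h: f64 }

impl Area for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
}

impl Area for Rect {
    fn area(&self) -> f64 { self.w * self.h }
}

// Generic over any type implementing Area; a separate, fully optimized copy
// is generated for Circle and for Rect (static polymorphism).
fn describe<T: Area>(shape: &T) -> String {
    format!("area = {:.2}", shape.area())
}

fn main() {
    println!("{}", describe(&Circle { r: 1.0 }));
    println!("{}", describe(&Rect { w: 2.0, h: 3.0 }));
}
```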

In order to provide powerful language extension capability and development efficiency, Rust introduces a macro-based metaprogramming mechanism with two kinds of macros: declarative macros and procedural macros. Declarative macros look similar in form to C's macro substitution; the difference is that Rust type-checks the code after macro expansion, which is safer. Procedural macros give Rust powerful capabilities for code reuse and code generation.
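A minimal declarative macro as a sketch (the macro itself is invented): the patterns on the left are matched at compile time, and the expansion on the right is parsed and type-checked like ordinary Rust code, unlike a C text substitution.

```rust
// A declarative macro computing the maximum of any number of expressions.
macro_rules! max_of {
    ($x:expr) => { $x };
    ($x:expr, $($rest:expr),+) => {
        {
            let a = $x;
            let b = max_of!($($rest),+);
            if a > b { a } else { b }
        }
    };
}

fn main() {
    let m = max_of!(3, 7, 5);
    println!("{}", m); // 7
}
```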

To integrate with the existing ecosystem, Rust provides the zero-cost, C-ABI-compatible FFI mechanism and the Safe Rust / Unsafe Rust division already described in the section on dividing the safety boundary: the parts the machine can check are left to the compiler, and the parts it cannot check are left to the developer.

In order to make it easier for developers to collaborate with each other, Rust provides a very useful package manager Cargo[13]. Rust code is compiled and distributed based on the package (crate). Cargo provides many commands to facilitate developers to create, build, distribute, and manage their own packages. Cargo also provides a plug-in mechanism, which is convenient for developers to write custom plug-ins to meet more needs. For example, the official rustfmt and clippy tools can be used to automatically format the code and find "bad smells" in the code respectively. For another example, the rustfix tool can even help developers automatically fix erroneous code based on compiler suggestions. Cargo also naturally embraces the open source community and Git, and supports one-click publishing of written packages to the crates.io website for others to use.


  • In order to make it easier for developers to learn Rust, the official Rust team has made the following efforts:

  • Set up a dedicated community working group, write the official Rust Book, and maintain documents of varying depth, such as the compiler documentation and the Rustonomicon. They even organize a free community teaching event, Rust Bridge, strongly encourage community blogging, and more.

  • The documentation of the Rust language supports the Markdown format, so the Rust standard library documentation is rich in expression. The expressiveness of documentation for many third-party packages in the ecosystem has also been improved.

  • Provides a very useful online Playground tool for developers to learn, use and share code.

  • The Rust compiler was bootstrapped (written in Rust) very early on, which makes it easy for learners to understand its internal mechanisms by reading the source code, and even to contribute.

  • The Rust core team has been continuously improving Rust, committed to improving the friendliness of Rust, trying to reduce the mental burden of beginners, and slow down the learning curve. For example, introducing the NLL feature to improve the borrow checking system allows developers to write more intuitive code.

  • Although Rust borrows much of its type system from Haskell, the Rust team deliberately avoids overly academic terminology when designing and promoting language features, to make Rust's concepts more approachable.

  • Based on the type system, it provides support for mixed programming paradigms, provides powerful and concise abstract expression capabilities, and greatly improves the development efficiency of developers.


In order to facilitate Rust developers to improve development efficiency, the Rust community also provides powerful IDE support. VSCode/Vim/Emacs + Rust Analyzer has become the standard for Rust development. Of course, IDEA/Clion of the JetBrains family also has strong support for Rust.


Rust and open source

As an open source project itself, the Rust language is also a shining pearl in modern open source software.

Unlike many earlier languages whose evolution was driven primarily by the companies behind them, the Rust open source community is itself a part of the language, and the language in turn belongs to the community.

The Rust team is made up of Mozilla and non-Mozilla members, and to date more than 1,900 people have contributed to the Rust project. The team is divided into the core team and domain working groups. For the Rust 2018 goals, the team was organized into embedded, CLI, networking, and WebAssembly working groups, along with ecosystem and community working groups, among others.

Designs in these areas first go through an RFC process; changes that do not need an RFC only require a pull request against the Rust repositories. The whole process is transparent to the community, and contributors can take part in review. The final decision, of course, rests with the core team and the relevant domain working groups. Later, to streamline the FCP process, MCPs were also introduced.

After the establishment of the Rust Foundation, the Rust team is also constantly exploring new open source governance solutions.


Weaknesses of the Rust language


  • While Rust has many advantages, it certainly also has some disadvantages:

  • Rust compiles slowly. Although the Rust team keeps improving compilation speed, with incremental compilation, a new compilation backend (cranelift), parallel compilation and other measures, it is still slow, and incremental compilation currently has bugs.

  • The learning curve is steep.

  • Detection tools for the memory-unsafety issues specific to Rust are still lacking.



Rust ecological base library and tool chain

The Rust ecosystem is becoming more and more abundant. Many basic libraries and frameworks will be released to crates.io[14] in the form of crates. Up to now, there are 62,981 crates on crates.io, and the total downloads have reached 7,654,973,261 times.


  • Classified by usage scenario, the most popular categories on crates.io are as follows:

  • Command Line Tools (3133 crates)

  • no-std library (2778 crates)

  • Development tools (testing/debug/linting/performance detection, etc., 2652 crates)

  • Web Programming (1776 crates)

  • API binding (specific api packaging for Rust, such as http api, ffi related api, etc., 1738 crates)

  • Network Programming (1615 crates)

  • Data Structures (1572 crates)

  • Embedded Development (1508 crates)

  • Encryption technology (1498 crates)

  • Asynchronous development (1487 crates)

  • Algorithms (1200 crates)

  • Scientific computing (including physics, biology, chemistry, geography, machine learning, etc., 1100 crates)

In addition, there are other categories such as WebAssembly, encoding, text processing, concurrency, GUI, game engine, visualization, template engine, parser, operating system binding, and many libraries.

Commonly used well-known basic libraries and tool chains



  • Among them, many excellent basic libraries have emerged, which can be seen on the homepage of crates.io. Here are a few:


  • Serialization/Deserialization: Serde[15]


  • Command-line development: clap[16]/structopt[17]


  • Async/Web/Network development: tokio[18]/ tracing[19]/ async-trait[20]/ tower[21]/ async-std[22]/ tonic[23]/ actix-web[24]/ smol[25]/ surf[26]/ async-graphql[27]/ warp[28]/ tungstenite[29]/ encoding_rs[30]/ loom[31]/ Rocket[32]


  • FFI development: libc[33]/ winapi[34]/ bindgen[35]/ pyo3[36]/ num_enum[37]/ jni[38]/ rustler_sys[39]/ cxx[40]/ cbindgen[41]/ autocxx-bindgen[42]


  • API development: jsonwebtoken [43]/ validator [44]/ tarpc [45]/ nats [46]/ tonic[47]/ protobuf [48]/ hyper [49]/ httparse [50]/ reqwest [51] / url [ 52]


  • Parsers: nom[53]/pest[54]/csv[55]/combine[56]/wasmparser[57]/ron[58]/lalrpop[59]


  • WebAssembly:   wasm-bindgen[67]/ wasmer [68]/ wasmtime [69]/ yew [70]


  • Cryptography: openssl [60] / ring [61] / hmac [62] / rustls [63] / orion [64] / themis [65] / RustCrypto [66]


  • Database development: diesel [71]/ sqlx [72]/ rocksdb [73]/ mysql [74]/ elasticsearch [75]/ rbatis [76]


  • Concurrency: crossbeam [77]/ parking_lot [78]/ crossbeam-channel [79]/ rayon [80]/ concurrent-queue[81]/ threadpool [82] / flume [83]


  • Embedded development: embedded-hal [84]/ cortex-m [85]/ bitvec [86]/ cortex-m-rtic [87]/ embedded-dma [88]/ cross [89]/ Knurling Tools[90]


  • Test: static_assertions [91] / difference [92] / quickcheck [93] / arbitrary [94] / mockall [95] / criterion [96] / proptest [97] / tarpaulin [98] / fake-rs [99]


  • Multimedia development: rust-av[100]/ image[101]/ svg[102]/ rusty_ffmpeg[103]/ Symphonia[104]

  • Game development: rapier/ Rustcraft[115]/ Nestadia[116]/ naga[117]/ Bevy Retro[118]/ Texture Generator[119]/ building_blocks[120]/ rpg-cli[121]/ macroquad[122]


  • TUI/GUI development: winit [123]/ gtk [124]/ egui [125]/ imgui [126]/ yew [127]/ cursive [128]/ iced [129]/ fontdue [130]/ tauri [131]/ druid [132]



Rust Industry Application Inventory

Rust is a general-purpose high-level system-level programming language, and its application fields can basically cover the application fields of C/Cpp/Java/Go/Python at the same time.

Let's take an inventory of Rust projects at home and abroad in different fields. By providing data related to the amount of code, team size, and project cycle, I hope that everyone can have a more intuitive understanding of the application and development efficiency in the Rust field.


data service

The field of data services includes databases, data warehousing, data streams, big data, etc.

Keywords: database / distributed system / CNCF

Introduction

TiKV [133] is an open source distributed transactional Key-Value database, focusing on providing reliable, high-quality, and practical storage architecture for next-generation databases. Initially developed by the PingCAP team, TiKV has been launched and applied in Zhihu, Yidian, Shopee, Meituan, JD Cloud, Zhuanzhuan and other leading enterprises in many industries.

TiKV uses the Raft consensus algorithm to keep multiple replicas of the data consistent, and uses RocksDB as the local storage engine. TiKV supports automatic data splitting and migration. TiKV's cross-row transactions were originally based on Google's Percolator transaction model with some optimizations; it provides snapshot isolation and snapshot isolation with locks, and supports distributed transactions.

In August 2018, CNCF accepted TiKV as a sandbox cloud-native project; in May 2019 it was promoted from sandbox to incubating project.

Code and team size

The TiKV project contains about 300,000 lines of Rust code (including test code).


  • TiKV is a global open source project, and the team size can be viewed from the list of contributors [134]. The TiKV organization also includes some Go/Cpp projects, this is not included, only the number of manpower involved in the Rust project is counted.

  • Main development: about 20 people.


Community contributions: more than 300 people.

Project Cycle

TiKV is the underlying storage of TiDB and follows TiDB's evolution. TiDB is developed in Go and TiKV in Rust.

In January 2016, it was designed and developed as the underlying storage engine of TiDB.

The first version was released as open source in April 2016.

On October 16, 2017, TiDB released the GA version (TiDB 1.0), and TiKV released 1.0.

On April 27, 2018, TiDB released version 2.0 GA, and TiKV released version 2.0.

On June 28, 2019, TiDB released version 3.0 GA, and TiKV released version 3.0.

On May 28, 2020, TiDB released 4.0 GA version, and TiKV released 4.0.

On April 07, 2021, TiDB released version 5.0 GA, and TiKV released version 5.0.

comment

Some friends may be more concerned about the development efficiency of Rust and want to quantify it, especially to compare the development efficiency of other languages ​​​​such as C/Cpp/Go.

I personally think that it is very difficult to quantify development efficiency, especially compared with other languages. We might as well look at this matter from another angle, for example, from the perspective of agile project iteration management. If a language can meet the daily needs of agile development iterations and help complete product evolution, that is enough to explain the development efficiency of this language.

It is understood that the number of Go developers in PingCAP is four to five times that of Rust developers, and of course the workload is almost the same. From the above data, we can see that the Rust project (TiKV) can still keep pace with the iterative rhythm of the Go project (TiDB), which shows that the development efficiency of Rust is still sufficient to meet the needs of modern development.

Keywords: real-time data warehouse / entrepreneurship / angel round

Introduction

TensorBase[135] is a startup project launched by Dr. Jin Mingjian in August 2020. Taking a fresh, modern perspective and using open source culture and methods, it rebuilds a real-time data warehouse in Rust to serve data storage and analysis in the era of massive data. The TensorBase project has received angel-round investment from a world-renowned venture capital accelerator.

Code and team size

Because TensorBase is built on top of Apache Arrow[136] and Arrow DataFusion[137], the code statistics exclude the dependencies of these two projects.

TensorBase core code lines are more than 54000 lines.


  • Team size:

  • Main development: 1 person.


Community contributions: 13 people.

Because it is a new project, the open source community is still under construction.

Project Cycle

TensorBase releases on a time-based schedule rather than by semantic version. The expected iteration cycle is one major version per year and one minor version per month.

It has kept this rhythm from the official release on April 20, 2021 through the latest release on June 16.

Keywords: Dataflow/ Distributed System/ Entrepreneurship

Introduction

Timely Dataflow[138] is a modern Rust implementation of the timely dataflow model from the Microsoft paper "Naiad: A Timely Dataflow System" [139]. It is an open source product of the company clockworks.io[140].

It is very difficult to perform complex processing on streaming data in a distributed system, such as multiple rounds of iteration or incremental computation. Storm, Spark Streaming, and MillWheel do not adapt well to the complex needs of such applications. By introducing the concept of timestamps, Naiad provides a very low-level model that can describe arbitrarily complex streaming computations.

Dataflow systems are a broad family, with MapReduce and Spark as representative examples. Timely dataflow provides a fully time-based abstraction that unifies streaming computation and iterative computation. Timely Dataflow can be used whenever you need parallel processing of streaming data together with iteration control.

Code and team size

The Rust code size is about 13000 lines.


  • Team size:

  • Main developers: 4 people.


Community contributions: more than 30 people.

Project Cycle

September 7, 2017, version 0.3.0.

June 28, 2018, version 0.6.0.

September 16, 2018, version 0.7.0.

December 3, 2018, version 0.8.0.

March 31, 2019, version 0.9.0.

July 10, 2019, version 0.10.0.

March 10, 2021, version 0.12.0.

Basically, a small version is released every three months. In addition to Timely Dataflow, the team also maintains a Differential Dataflow[141] built on top of Timely Dataflow, which iterates synchronously with Timely Dataflow.

Keywords: database/ academic paper project

Introduction

Noria[142] is a new streaming dataflow system designed as a fast storage backend for read-heavy web applications, based on the PhD dissertation [144] of MIT's Jon Gjengset [143] and the OSDI'18 paper [145]. It is similar to a database, but it precomputes and caches relational query results in order to speed up queries. Noria automatically keeps the cached results up to date as the underlying data, stored in persistent base tables, changes. It uses partially stateful dataflow to reduce memory overhead, and supports dynamic, runtime changes to the dataflow and the queries.

Code and team size

The number of lines of Rust code is about 59000 lines.


  • Team size:

  • Main contributors: 2 people


Community Contributors: 21

Project Cycle

Because it is a personal academic research project, the release cycle is not so obvious.

The project cycle is from July 30, 2016 to April 30, 2020, with a total of more than 5,000 commits.

Vector (foreign/open source/data pipeline)

Keywords: Data Pipeline / Distributed Systems / Entrepreneurship

Vector[146] is a high-performance, end-to-end (agent and aggregator) observability data pipeline built by Timber. It is open source and claims to be up to 10x faster than the alternatives in the space (Logstash, Fluentd, etc.). Companies such as Douban, checkbox.ai, fundamentei, BlockFi, and Fly.io currently use Vector. Click here [147] for the official performance report, and here [148] for companies using Vector in production.

Code and team size

The amount of code is about 180,000 lines of Rust code.


  • Team size:

  • Main development: 9 people


Community contributions: 140 people

Project Cycle

On March 22, 2019, the initial version was released.

On June 10, 2019, version 0.2.0 was released

July 2, 2019, version 0.3.0 released

September 25, 2019, version 0.4.0 released

On October 11, 2019, version 0.5.0 was released

On December 13, 2019, version 0.6.0 was released

On January 12, 2020, version 0.7.0 was released

On February 26, 2020, version 0.8.0 was released

April 21, 2020, version 0.9.0 released

On July 23, 2020, version 0.10.0 was released

On March 12, 2021, version 0.11.0 ~ 0.12 will be released

April 22, 2021, version 0.13.0 released

June 3, 2021, version 0.14.0 released

Arrow-rs (foreign/open source/big data standard)

Keywords: big data / data format standard / Apache

arrow-rs[149] is a Rust implementation of Apache Arrow. Apache Arrow is an in-memory column storage data format standard suitable for heterogeneous big data systems. It has a very big vision: to provide a development platform for in-memory analytics, allowing data to move and process faster between heterogeneous big data systems.

Arrow has included a Rust implementation[150] since version 2.0, and since version 4.0 the Rust implementation has been moved to the separate arrow-rs repository.


  • Arrow's Rust implementation actually consists of several different projects, including the following individual crates and libraries:

  • arrow[151], the arrow-rs core library, included in arrow-rs.

  • arrow-flight [152], one of the arrow-rs components, included in arrow-rs.

  • parquet[153], one of the arrow-rs components, is included in arrow-rs. In the big data ecosystem, Parquet is the most popular file storage format.

  • DataFusion [154], a scalable in-memory query execution engine, uses Arrow as its format.


Ballista [155], a distributed computing platform powered by Apache Arrow and DataFusion, is included in DataFusion.

Code and team size

Adding up the related components of arrow-rs, the amount of Rust code is about 180,000 lines.



  • Team size:


  • Main developers: about 10 people


Community contributions: more than 550 people

Project Cycle

The project DataFusion started to build in 2016 and later entered the Apache Arrow project.

Starting with arrow-rs 4.0:

On April 18, 2021, version 4.0 was released.

On May 18, 2021, version 4.1 was released.

On May 30, 2021, version 4.2 was released.

On June 11, 2021, version 4.3 was released.

InfluxDB IOx (foreign / open source / time series database)

Keywords: time series database / distributed

InfluxDB IOx[156], InfluxDB's next-generation time series engine, is being rewritten in Rust + Arrow.


  • The existing design has the following fatal problems:

  • It cannot solve the problem of time series (timeline) cardinality explosion.

  • In cloud-native environments, memory management requirements are stricter, which means mmap is no longer suitable, and InfluxDB needs to support running without a local disk.

  • Since indexes and data are stored separately, efficient data import and export are hard to achieve.

The above three issues are the core of the existing design, so rewriting is a better choice to support the current needs.

Code and team size

InfluxDB IOx code size is about 160,000 lines of Rust code.


  • Team size:

  • Main development: 5 people


Community contributions: 24 people

Project Cycle

The project started in November 2019, but as of today the project is very early, it is not ready for testing, nor does it have any builds or documentation.

But judging from the status of GitHub activity, the development status is still very active. Major development work is expected to begin in 2021.

Keyword: time series database

Introduction

CeresDB is a hybrid TP/AP time series database developed by Ant Group. It addresses the needs of storing massive time series data, multi-dimensional query drill-down, and real-time analysis in financial time series, monitoring, IoT, and other scenarios. There is a plan to open source it, but it is not yet open source.

team size

At present, there are about 8-10 people in database development.

Other information is unknown.

Tantivy (foreign/open source/full-text search)

Keyword: full text search / lucene

tantivy[157] is a full-text search engine library inspired by Apache Lucene, implemented in Rust.

Tantivy is great. Here is an application built on Rust + Tantivy + AWS that serves search over a billion web pages and generates word clouds of common phrases [158].

Code and team size

The code size is about 50000 lines of Rust code.


  • Team size:

  • Main development: 1 person


Community contribution: 85 people

Project Cycle

The project was established in 2016, and the iterative cycle is an average of one minor version release per month. It is currently released to version 0.15.2.

Keyword: Zhihu / lucene

Introduction

Rucene[159] is a Rust-based search engine open sourced by the Zhihu team. Rucene is not a complete application, but a code library and API that can be easily used to add full text search capabilities to applications. It is a Rust port of the Apache Lucene 6.2.1 project.

Code and team size

The code size is about 100,000 lines of Rust code.


  • Team size:

  • Main development: 4 people


Community contribution: 0 people

Perhaps because it is an open-sourced internal company project, no semantic versions have been released so far. It is used in production at Zhihu.


cloud native

Cloud native fields include: confidential computing, Serverless, distributed computing platforms, containers, WebAssembly, operation and maintenance tools, etc.

StratoVirt (domestic/open source/container)

Keywords: container / virtualization / Serverless

StratoVirt[160] is a next-generation Rust-based virtualization platform developed by the Huawei OpenEuler team.

Strato is taken from "stratosphere". The atmosphere protects the earth from the external environment, and the stratosphere is its most stable layer; similarly, virtualization is the isolation layer on top of the operating system platform, protecting the platform from malicious upper-layer applications while providing a stable and reliable runtime environment for normal applications. The name Strato thus stands for a thin, light protective layer that guards the smooth operation of services on the openEuler platform. Strato also carries the project's vision and future: lightweight, flexible, secure, and complete protection capability.

StratoVirt is an enterprise-grade virtualization platform for cloud data centers in the computing industry, with a unified architecture supporting virtual machines, containers, and serverless scenarios. It has key technical advantages in light weight, low noise, software-hardware collaboration, and security. In its architecture and interfaces, StratoVirt reserves the ability to assemble components, so advanced features can be flexibly composed on demand, up to evolving into standard virtualization, striking the best balance between feature requirements, application scenarios, and lightness.

Code and team size

The code size is about 27000 lines of Rust code.


  • Team size:

  • Main developers: 4 people.


Community contributions: 15 people.

Project Cycle

2020-09-23, release version 0.1.0.

2021-03-25, release version 0.2.0.

2021-05-28, release version 0.3.0.

Firecracker (foreign/product)
