Foresight Ventures: Avatar will become the most valuable asset in the Metaverse and Web3
Foresight
Invited columnist
2023-01-05 12:00
Avatar can provide a stronger sense of substitution for the user's metaverse experience and satisfy people's emotional needs to connect.


TL;DR

We believe that Avatar will be the most valuable asset in the Metaverse, which can provide a stronger sense of substitution for the user experience of the Metaverse and satisfy people's emotional needs to connect.

However, most games and virtual worlds are still closed economies at present, and Avatar's identity and value system still cannot communicate with each other.

To build a successful, mass-adopted Avatar, we can consider the following three aspects:

  • Technology: determines the Avatar's static shape and dynamic rendering

  • Identity: increases the Avatar's interoperability and social value

  • Community & Market: strong consensus is the key factor in achieving mass adoption


Intro: Exploring the Metaverse


(SCENE FROM WESTWORLD BY HBO)

The Rise of the Avatar Economy

The part of these fantasy stories that attracts me most is the Avatar: everyone can design and obtain an avatar of themselves in the virtual world, one that accumulates your personal behavior, on-chain activity, asset information, reputation, and achievements, making the Metaverse feel real rather than abstract and making interactions between players richer. As a crypto VC, we believe the virtual Avatar will become one of the most valuable assets in the Metaverse.


To ensure the completeness and depth of the research, we sorted out four application scenarios across the avatar industry: 1. Substitute for a real person 2. Experience upgrade 3. Virtual IP 4. Second identity

a. Substitute for a real person


(VIRTUAL HOST BY HOUR-ONE)

b. Experience upgrade


(SCENE FROM BLADE RUNNER)

c. Virtual IP


(LVMH Creates Virtual Ambassador for the 2022 Innovation Award)

d. Second identity


(ARIANA GRANDE'S SKINS IN THE GAME FORTNITE)

Recently, more and more articles discuss the "avatar self", a concept from the philosophy of mind that studies the relationship between people and their avatars. The avatar-self is understood as a conscious, self-aware, and internally coherent existence. Some interesting findings are as follows:

  • Compared with traditional game characters, players tend to be emotionally attached to Personalized Avatars, and this emotional attachment will increase the time spent playing games and increase their willingness to pay. (Harvard Business Review)

  • Avatars shape human connection and compassion. If people have used avatars of different ethnicities, they will be more able to empathize with that ethnicity. (Health Games Magazine, Belinda Gutierrez)

  • Using Avatar can increase productivity by 10-15% (World Bank) and reduce communication costs by 20-30% (McKinsey & Company)

Problems facing Avatars

Interoperability has long been a key tenet of the Metaverse. Today, however, many games and virtual worlds are closed economies that cannot interoperate with each other. From Fortnite to Minecraft to League of Legends, most games do not allow players to trade outside of their economic system, or transfer game assets across chains.


How to build a successful Avatar in Web3?

Assuming we build an Avatar from scratch, three basic things should be considered:

  • Technical layer

  • Identity layer

  • Community & Market

a. Technical layer

The production of a 3D avatar involves four parts: image generation, motion capture, rendering, and interaction.

  • image generation

The static shape is created mainly through modeling techniques. Current approaches include 3D software modeling, instrument-acquisition modeling, and automatic modeling.

Modeling method 1: 3D software modeling

This refers to shaping a 3D model by hand in 3D modeling software. The manual production cycle is long, but the result is controllable, making it currently the most widely used modeling method.

There are several types of commonly used 3D modeling software. Traditional 3D modeling: 3ds Max, Maya, Blender, etc. Sculpting: ZBrush, Blender, etc. Procedural modeling: Houdini, etc. Traditional 3D software is mainly used to produce low-poly models, while sculpting software assists in producing high-poly models; due to space constraints, I will not expand on each here. A low-poly model has fewer faces and runs faster; a high-poly model is the opposite: more faces and better visuals, but it consumes more resources and stutters easily. The modern pipeline therefore generally uses "baking": the underlying structure is low-poly, but high-poly detail is baked into textures pasted onto the low-poly surface (a "wolf in sheep's clothing") to achieve something that looks great and runs fast.
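The low-poly/high-poly tradeoff behind baking can be illustrated with a rough back-of-the-envelope sketch; all numbers below are hypothetical, for intuition only:

```python
# Rough, illustrative cost model (hypothetical numbers): why baking high-poly
# detail into a texture on a low-poly mesh wins for real-time avatars.

def mesh_cost(triangles: int, bytes_per_triangle: int = 3 * 32) -> int:
    """Approximate vertex-data memory: 3 vertices per triangle, ~32 bytes each."""
    return triangles * bytes_per_triangle

def baked_cost(low_poly_triangles: int, normal_map_px: int = 2048) -> int:
    """Low-poly mesh plus a baked normal map (4 bytes per pixel, uncompressed)."""
    return mesh_cost(low_poly_triangles) + normal_map_px * normal_map_px * 4

high = mesh_cost(2_000_000)  # sculpted high-poly avatar
low = baked_cost(20_000)     # game-ready low-poly avatar with baked detail

print(f"high-poly ~{high / 1e6:.0f} MB, baked low-poly ~{low / 1e6:.0f} MB")
```

Exact byte counts vary by engine and vertex format; the point is the order-of-magnitude gap that makes baking the default choice.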

Modeling method 2: Instrument acquisition modeling

Compared with manual modeling, instrument acquisition modeling is carried out by means of instrument scanning. The cost of this method is relatively high, and it is generally used in fields such as film and television special effects production.

The instrument acquisition modeling technology is divided into static scanning modeling and dynamic light field reconstruction:

  • Static scanning model technology is currently the mainstream, which can be subdivided into structured light scanning reconstruction and camera array scanning reconstruction.

  • Dynamic light-field reconstruction is the current focus of development. It can not only reconstruct the character's geometric model but also capture dynamic character model data in one pass, reproducing with high quality the light and shadow of the human body viewed from different angles, with high visual fidelity.

Modeling method 3: Automatic modeling

Automated modeling mainly includes the following methods:

  • Image acquisition modeling: restoring the 3D structure of faces from captured photos

  • AI modeling: using AI algorithms to directly generate modeling methods for faces, bodies, etc.

Automated modeling technology is not yet particularly mature, and the modeling results are still a long way from direct commercial use. However, this type of technology will greatly reduce the labor and time costs of modeling. At present, some tool platforms that support virtual human creation have emerged, such as Nvidia's Omniverse Avatar, Epic Unreal's MetaHuman Creator, etc.

  • motion capture


(3D MOTION CAPTURE)

Driving technology is divided into: 1. real-person driving and 2. intelligent driving.

Real-person driving refers to collecting the movements and facial expressions of real actors via 3D motion capture, then transferring and synthesizing this data onto virtual digital humans. In recent years, capture technology based on computer vision (CV) has developed rapidly: facial expression capture can collect 3D point-cloud images of a real person's face through depth cameras and transfer facial movements and expressions to avatars in real time. Intelligent driving uses AI deep learning and collected data to generate models that simulate human motion patterns. Motion-driving techniques include manual keyframe adjustment, prefabricated motion, motion capture, and intelligent synthesis (text/voice-driven).
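The step that transfers captured expression data onto the avatar can be thought of as blendshape retargeting. Here is a minimal sketch; the blendshape names and mapping are hypothetical illustrations, not any tracker's actual output:

```python
# A minimal sketch of real-person driving: retargeting one frame of captured
# facial blendshape weights onto an avatar's rig. Names are hypothetical.

CAPTURE_TO_AVATAR = {
    "jawOpen": "mouth_open",
    "browInnerUp": "brow_raise",
    "eyeBlinkLeft": "blink_L",
}

def retarget(frame: dict[str, float], gain: float = 1.0) -> dict[str, float]:
    """Transfer captured weights (nominally 0..1) to avatar blendshapes,
    clamping so exaggerated capture never breaks the rig."""
    out = {}
    for src, dst in CAPTURE_TO_AVATAR.items():
        w = frame.get(src, 0.0) * gain
        out[dst] = max(0.0, min(1.0, w))
    return out

frame = {"jawOpen": 0.7, "eyeBlinkLeft": 1.3}  # raw tracker output, one frame
print(retarget(frame))  # the over-range blink weight is clamped to 1.0
```

A real pipeline runs this per frame at capture rate and adds smoothing; the mapping table is what lets one capture rig drive many different avatars.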

  • rendering


There are two main types of rendering: offline and real-time. Offline rendering is mainly used in film and television; representative software includes Maya, 3ds Max, etc.


  • interaction

So far, the Avatar's form has been built. To give it deeper interaction with humans, you can apply TTS, NLP (natural language processing), intelligent voice interaction, speech synthesis, and AI techniques plus continuous training, so that the avatar gains general interaction capabilities, and can even acquire self-learning and AIGC capabilities through knowledge graphs, business Q&A libraries, and conversational engines. The recently popular ChatGPT is a good demonstration of AI language-interaction capability.

In terms of presentation, most virtual images are currently delivered as pictures, videos, live streams, etc. In the future, VR devices and holographic projection are expected to provide richer props and software/hardware foundations for projecting digital humans into the real world. Because latency requirements differ by scenario (real-time scenarios such as live streaming demand low latency, while content-generation scenarios do not) and driving methods differ (compute-driven approaches place extremely high demands on the model's deep-learning capability), the technical and operational requirements vary widely. In addition, the expansion of the metaverse, large-scale games, and online scenes (Zoom meetings, Google Meet, VRChat) has greatly enriched avatar usage scenarios and value.
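As a toy illustration of this interaction layer: text comes in, an intent is matched, and a reply goes out. A production avatar would replace the keyword matching below with a real NLP model or an LLM such as ChatGPT, and feed the reply into a TTS engine; the intents and phrases here are simplified assumptions:

```python
# Toy sketch of an avatar's dialogue loop: match an intent, produce a reply.
# In production, match_intent would be an NLP model or LLM call, and reply
# would be piped to text-to-speech; this rule table is purely illustrative.

INTENTS = {
    "greet": (["hi", "hello", "hey"], "Hello! I'm your avatar."),
    "balance": (["balance", "assets"], "Let me look up your on-chain assets."),
}

def match_intent(text: str) -> str:
    words = text.lower().split()
    for intent, (keywords, _) in INTENTS.items():
        if any(k in words for k in keywords):
            return intent
    return "fallback"

def reply(text: str) -> str:
    intent = match_intent(text)
    if intent == "fallback":
        return "Sorry, I don't understand yet."
    return INTENTS[intent][1]

print(reply("hello there"))
```

The value of the rule-table shape is that the NLP backend can be swapped out (keywords today, an LLM tomorrow) without changing the avatar's rendering or TTS layers.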

b. Identity layer

Interoperable decentralized identities add social value to Avatar and broaden application scenarios.

  • Interoperable 3D Identity

An Avatar identity can serve as a universal passport for users to display personality, achievements, and social reputation. Players can easily link on-chain SBTs with their Avatar, aggregating them into a 3D on-chain reputation system. We believe interoperability will be another key factor driving avatars toward mass adoption.

  • From 2D domain name to 3D virtual identity

With the development of social networks and infrastructure, identity display has continuously evolved: in Web2, from website domain names to QQ Show to game skins. In Web3 we likewise expect new forms of presentation. Decentralized text domain names (ENS, .bit, Space ID, etc.) and soulbound tokens (SBTs, such as Binance BAB) are becoming the main carriers of DID, but text is a single-dimensional feature, and we do not think it will be the final form of DID: text has low added value (especially in cultural attributes), few derivative scenarios, little personalized display, and limited editability. With the introduction of a 3D Avatar ID, users upgrade from personalizing text (combinations of numbers and letters) to personalizing visual elements. At the same time, the 3D Avatar further introduces cultural attributes into DID: by binding social graphs, prestige, reputation, assets, and other elements, the Avatar identity can fully exploit composability to authorize and log in to various Web3 (and even Web2) applications.
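A minimal sketch of what aggregating SBTs into a single Avatar reputation could look like; the token names, weights, and scoring rule are hypothetical illustrations, not any project's actual scheme:

```python
# Illustrative identity-layer sketch: soulbound tokens (SBTs) bound to one
# wallet are aggregated into an Avatar profile with a reputation score.
# Token names and weights are hypothetical.

from dataclasses import dataclass, field

@dataclass
class AvatarProfile:
    wallet: str
    sbts: list[str] = field(default_factory=list)

    # Hypothetical weights: what each credential adds to on-chain reputation.
    WEIGHTS = {"BAB": 10, "ENS": 5, "POAP": 1}

    def bind(self, sbt: str) -> None:
        if sbt not in self.sbts:  # soulbound: each credential counts once
            self.sbts.append(sbt)

    def reputation(self) -> int:
        return sum(self.WEIGHTS.get(s, 0) for s in self.sbts)

avatar = AvatarProfile(wallet="0xabc...")
for token in ["BAB", "ENS", "POAP", "POAP"]:
    avatar.bind(token)
print(avatar.sbts, avatar.reputation())  # duplicate POAP is ignored
```

In a real system the `sbts` list would be read from chain state rather than bound locally, and applications would consume the aggregated profile at login instead of each raw token.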

  • Web3 Super Entry

A universal avatar in the metaverse may become the first stop for ordinary users entering the Web3 world, with gamified mechanics similar to The Sims guiding novices to use DeFi and NFT products instead of obscure on-chain interactions, helping Web3 go mainstream.

c. Community & Marketing

Looking at avatars from an operational point of view, much of the success of virtual-human projects comes down to community and operations. Through brand and community, a project can find its initial seed users, unite the community, enrich creative content, and enhance the avatar's commercial value.

  • Hatsune Miku

As the world's first virtual singer-songwriter, Hatsune Miku is a typical community-created idol and a classic case of UGC culture, with a current market value of more than 10 billion yen. The character herself does not produce any content; all her personas, songs, and actions are works jointly created by the community.

  • HALO nft

HALO NFT is presented as high-definition 3D Avatars. Despite recent market turmoil, its initial total sales reached 1,930 ETH, with a floor price of 1.55 ETH and a listing rate of 3%.

The distributed artist community behind the project is an eclectic, multicultural mix of 3D artists from India, Russia, Jordan, and France... The HALO team has partnership experience spanning Disney, newspapers, video game developers, sculptors, filmmakers, and digital arts.

  • Liu Yexi

A new virtual idol named Liu Yexi made her debut on Douyin: a cyberpunk-toned, national-style demon catcher who does makeup in the metaverse. She gained 2.3 million followers in 3 days; her debut video received more than 2 million likes, and beauty bloggers launched the same-makeup challenge one after another, forming organic secondary spread.


Industry Mapping

Based on the above analysis, we believe that the Avatar industry can be subdivided into three levels:

  • Infrastructure (3D modeling software, game engines)

  • Platform (motion capture device, AI solution provider)

  • Application (Avatar Corporation)


a. Ready-Player-Me

RPM makes it quick and easy to create attractive 3D Avatars. Players can create a personalized avatar simply by uploading a selfie (try it on the official website) and instantly export it to partner experiences. Avatars are fully customizable, and players can share their creations on TikTok, Twitter, and Discord.

The RPM system can run across PC, Web and Mobile, and can be used by developers through a powerful SDK and API. Long term, RPM is building an interoperable identity protocol for the open Metaverse — enabling players and developers to bring identities and assets into 3D world experiences.

  • team introduction


  • Token Economics

There is currently no plan to issue tokens, and the main business model is SaaS.

  • Financing:

Financing: Ready Player Me has raised a total of $72.6 million across seven rounds. Its latest funding was a Series B round led by a16z on August 23, 2022.

Collaboration and integration: More than 3,000 applications across Web2 and Web3 have integrated RPM, including VRChat, Spatial, Somnium Space, IGG, Pixelynx, RTFKT, and many more.

b. Lifeform

Lifeform's core technologies include:

  • Ultra-realistic 3D Avatar creation tool built with UE 5. Users can create over 10 billion Avatar combinations.

  • SDK: Lifeform also provides a corresponding SDK for developers, providing various functions for DApp.

  • Avatar ID. At the heart of Lifeform Avatar is an NFT that can be connected to a wallet and act as a 3D virtual identity for users to enter different metaverses.

Avatar ID flow: generate an avatar → live-to-earn in the Lifeform ecosystem → generate on-chain behavioral data → claim POAPs and SBTs

In addition to Web3 use cases, users can also log in with their accounts or email to use 3D avatars in applications such as Google Meet, Zoom, Discord, and TikTok.

  • Team Intro


  • Token Economics

There are clear plans for token economics and ecosystem development. The token hard cap is 1 billion. The system is divided into two parts: points and tokens. Tokens can be used to generate Avatars, provide liquidity, and more; points are mainly used in sub-projects within the ecosystem and can be obtained through token staking.
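The two-part token/points design can be sketched as follows. The accrual formula and rate are hypothetical illustrations, not Lifeform's actual parameters; only the 1 billion hard cap comes from the article:

```python
# Illustrative sketch (hypothetical formula) of a two-part token/points
# system: points accrue linearly from staking tokens.

HARD_CAP = 1_000_000_000  # 1 billion token hard cap, per the article

def points_earned(staked_tokens: float, days: int, rate_per_day: float = 0.5) -> float:
    """Hypothetical linear accrual: points = stake * rate * days."""
    return staked_tokens * rate_per_day * days

stake = 5_000  # tokens staked by one user (example value)
print(f"{points_earned(stake, days=30):,.0f} points after 30 days")
```

Real designs usually add vesting, caps, or decay on top of a base accrual like this; the sketch only shows the staking-to-points direction described above.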

  • Financing:

Financing: Lifeform received seed-round investment from Binance Labs and is one of the star projects of Korea Blockchain Week. It recently launched a joint airdrop event with Binance, and the number of avatar NFT owners has reached 130,000.

NFT operations: The Lifeform incubation team successfully launched HALO NFT in May; HALO NFT holders can use their avatars across the Lifeform ecosystem.

Interoperability: Lifeform has partnered with Particle Network. By integrating Particle, the Lifeform ecosystem gains a complete MPC-TSS multi-chain wallet (no need to store a private key) that lets users log in to applications with Web2 accounts quickly and at low cost. This component allows users to use decentralized applications without downloading a separate wallet app, lowering the threshold for Web2 users to enter Web3.

Ecosystem: Lifeform will release its first live-to-earn game and will attract partners to develop more games, creating an interoperable game metaverse. For example, the Lifeform SDK has been integrated with Burger Cities, where users can bind their Avatar and wallet address to log in.

c. Genies


(Genies 3D celebrity avatars -Cardi B, Rhianna, Justin Bieber)


(Genies platform showcasing a fashion collection designed by creator Ian Charms.)

  • Token Economics

Genies is a Web2.5 project that allows people to buy digital assets with fiat currency and credit cards. There are no clear plans to issue a token in the near future; the team's main source of income is the 5% transaction fee in its NFT marketplace.

  • Financing:

Financing:On April 12, 2022, Genies raised a $150 million Series C round led by Silver Lake.

Cooperation: In partnership with Universal Music Group and Warner Music Group, Genies has launched its own NFT marketplace, The Warehouse, producing Avatars and digital-wearable NFT products for their artists.


AIGC & Metaverse


a. AI is used to create images, 3D Avatars & NFTs


b. AI is used to create non-player characters (NPCs) to enrich the storyline


c. AI is used to generate and manage the environment (physics/lighting/weather)

All 197 million square miles of Microsoft Flight Simulator's environment are generated primarily through artificial intelligence: Microsoft partnered with blackshark.ai to generate a highly realistic 3D world from 2D satellite images via AI.

In general, AI-generated content has the potential to be very valuable because it saves a lot of time and money while delivering high-quality content. However, there are also many who believe that AIGC can never replace human works of art because it lacks the creativity and emotional connection that only humans have.


Future Outlook

  • living in the metaverse

With the upgrading of VR/AR infrastructure and the improvement of the game ecosystem, the Metaverse experience will become more complete and rich. We believe that in the future, humans will spend more time genuinely living in an interoperable metaverse, where the avatar is one of the most important assets.

  • From direct-to-consumer (D2C) to direct-to-Avatar (D2A) marketing

Avatars can help brands better connect with Gen Z, the digitally native generation. Shifting focus from traditional direct-to-consumer marketing to avatar-based marketing can help brands better engage potential customers and provide more immersive experiences that increase conversion rates. Examples: Gucci, Nike, LVMH.

  • A metaverse where AI and humans coexist

The development of AIGC will have a profound impact on humanity, and as people grow more comfortable interacting with AI, the line between human and machine will become increasingly blurred. It may take time for us to get used to interacting and coexisting with AI in the Metaverse. Adventure games such as "Zelda", "Shenhai", and "Castlevania" already use randomly generated content (mainly props, events, NPCs, and other explorable elements) to serve players' desire to explore a boundless world.

  • Acquaintance Relationships and the Rise of the Safe Space Network

In the future, high-frequency and simple interaction needs will be outsourced to AIGC, and the number of times people interact with artificial intelligence will rise sharply. In order to balance this, we will pay more attention to the establishment of a real-person relationship network, or set up a safe space in the Metaverse to ensure the authenticity of user communication.

  • 3D Interoperable ID Poised to Be the Next Wave of Innovation


References

https://metaverseinsider.tech/2022/07/16/metaverse-avatars/#What_Is_A_Metaverse_Avatar

https://hbr.org/2006/06/avatar-based-marketing

https://mastersofmedia.hum.uva.nl/blog/2021/10/29/real-money-on-virtual-items-a-visual-analysis-of-fortnite-skins/

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4559151/

https://a16z.com/2022/11/17/the-generative-ai-revolution-in-games/

https://medium.com/nico-s-ideas/an-avatar-and-a-digital-identity-may-be-the-first-step-towards-being-immortal-d2bd8a22cd2

https://virtualworlds.substack.com/p/the-future-is-now-the-ai-gaming-revolution?sd=pf
