a16z: An Overview of the Market Ecology Combining Generative AI and Gaming
星球君的朋友们
Odaily Senior Author
2022-12-23 08:43
This article introduces a16z's investment footprint in this sector.

Original title: "The Generative AI Revolution in Games"

Author: James Gwertzman, Jack Soslow

Original Compilation: Alpha Rabbit

a16z recently published an interesting article on what it sees as the opportunity in combining generative AI and games. The translator has translated and annotated part of the content. The first part has already been published; see "a16z: Observations and Predictions of Generative AI in the Game Field". This article is the second part, covering a16z's view of the market ecology in the games + generative AI field. (Please note: some of the companies mentioned are part of a16z's portfolio; please read objectively and rationally.)

Market Ecology Overview

The chart below shows the overall market ecology and the startups a16z has identified in each category. Through these specific projects we can see the impact of generative AI (AIGC) on games. This article discusses the most distinctive companies and opportunities in each category.

Generating 2D images from text

Generating 2D images from text prompts is one of the most widely used applications of generative AI. Tools such as Midjourney, Stable Diffusion, and DALL-E 2 can generate high-quality 2D images directly from text descriptions, and they are used at multiple stages of the game development and production lifecycle.

(Translator's note: Midjourney is an easy-to-use AI image generator; generation is fast, and it can produce four images in under a minute.)
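To make the text-to-image step concrete, here is a minimal sketch using the open-source diffusers library with a Stable Diffusion checkpoint; the model ID, prompt, and file names are illustrative assumptions, not the setup of any studio mentioned in the article.

```python
from diffusers import StableDiffusionPipeline
import torch

# Load a Stable Diffusion checkpoint (model ID is illustrative).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Generate a batch of candidate images from one text prompt.
images = pipe(
    prompt="concept art of a medieval castle courtyard, moody lighting",
    num_images_per_prompt=4,
    num_inference_steps=30,
).images

for i, img in enumerate(images):
    img.save(f"concept_{i}.png")
```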

Concept Art

(Translator's note: Concept art can also be called preliminary design. This design discipline exists mainly in the film, television, and game industries. Broadly speaking, it sets the tone for a product's visual style and is one of the core pieces of early work on a game or film: through new design ideas and directions (covering form, spirit, concept, and so on), it innovates on, or even overturns, previous approaches and creates new characters or concepts. How does it differ from illustration? Illustrators mostly help games with posters, packaging covers, and the like; the 2D art role most closely tied to film/TV and game development is what we call concept art. How does it differ from manga? Concept art is very different from manga: manga is an independent system in Japan that mostly serves to visualize popular light novels, whereas concept art serves games and animation, places no restrictions on style or form of expression, and is crucial for creating advanced or complete sets of settings. How does it differ from the "original art" role? Concept art covers almost all character and scene creation; the difference is that concept designers must take the initiative to produce more interesting designs.

Unlike the original-art role, concept art occupies an art-direction decision-making position in game and animation projects second only to the lead designer, and it determines a project's style and audience. An artist who aims to be a concept artist but is locked into a single style is therefore considered unqualified.)

Reference: official account "Yisha Jun", "The most complete in history! Concept Art professional analysis", shared by Yufei Zhang, a Concept Art major at Said University, 2020-02-04 18:00.

Generative AI tools are especially useful for helping roles such as game designers explore concepts and spark inspiration. They are also becoming a critical part of the production process. For example, one game studio is using the tools above to radically speed up its concept art pipeline, producing an image in a single day where the process previously took up to three weeks. How exactly?

First, game designers use Midjourney to explore different directions and generate concept images they like. Those images are then handed off to professional concept artists, who composite them into a coherent image on the intended theme; that image can then be fed into Stable Diffusion to produce a series of variations.
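The "variations" step can be approximated with Stable Diffusion's image-to-image mode. A minimal sketch, assuming the composited concept image has been saved locally (file names and parameters are illustrative):

```python
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image
import torch

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

base = Image.open("concept_composite.png").convert("RGB")

# Generate several variations of the composited concept image.
variations = pipe(
    prompt="painterly concept art, medieval castle courtyard, moody lighting",
    image=base,
    strength=0.45,          # how far the result may depart from the source image
    num_images_per_prompt=4,
).images

for i, img in enumerate(variations):
    img.save(f"variation_{i}.png")
```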

Everyone then discusses these different image styles together, picks one, edits it by hand with a brush, and repeats the process until the team is satisfied with the result. At that point the image is finally upscaled with Stable Diffusion to create the final artwork.

2D Production Art

Other game studios are trying to use similar artificial intelligence tools to create in-game art. For example, the image below is from Albert Bozesan's tutorial on how to use Stable Diffusion to create 2D assets for games.

Source: https://www.youtube.com/watch?v=blXnuyVgA_Y

3D Artwork

3D assets are an important building block of all modern games and the coming metaverse. Virtual worlds and game levels are essentially collections of 3D assets that fill the game environment through different combinations, placements, and parameter adjustments. However, creating 3D assets is more complex than creating 2D images and involves multiple steps: building the 3D model, then adding textures and effects. For animated characters, it also means creating an internal "skeleton" (rig) and then animating on top of it.

We see startups pursuing opportunities at various stages of the 3D asset creation process, including model creation, character animation, and level production. However, this area is still being explored, both commercially and technically.

3D assets

Startups trying to create 3D models include Kaedim, Mirage, and Hypothetic. Big companies are also paying attention, including Nvidia with Get3D and Autodesk with ClipForge. Kaedim and Get3D focus on image-to-3D conversion; ClipForge and Mirage focus on text-to-3D; and Hypothetic is interested in both text-to-3D search and image-to-3D.

Mirage: https://www.mirageml.com/

Kaedim Company: Headquartered in London, mainly generates 3D models from 2D images.

3D Textures

BariumAI:https://barium.ai/

Ponzu:https://www.ponzu.gg/

3D models look more realistic when textures or materials are applied to their meshes. For example, applying different weathered, moss-covered stone materials to a medieval castle model can completely change the look of a scene. The textures discussed here also contain metadata about how light interacts with the material (roughness, glossiness, and so on). Letting artists generate textures from text or image prompts is invaluable for speeding up iteration in the creative process, and companies such as BariumAI, Ponzu, and ArmorLab are working in this space.
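As a rough illustration of prompt-driven texture work, the sketch below generates a candidate tileable base-color map with Stable Diffusion; the prompt and model ID are assumptions, and dedicated tools such as the ones named above typically also produce the accompanying roughness and normal maps rather than just a color image.

```python
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Generate a candidate albedo (base color) map from a text prompt.
albedo = pipe(
    prompt="seamless tileable texture, weathered castle stone with moss, top-down, photorealistic",
    height=512,
    width=512,
).images[0]
albedo.save("castle_stone_albedo.png")

# Roughness/normal maps would be authored or derived separately, e.g. in a
# material-authoring tool, before assembling the full PBR material.
```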

Animation

Producing excellent animation is one of the most time-consuming, expensive, and skill-intensive parts of game creation. One way to reduce costs and create more realistic animation is motion capture: actors or dancers put on motion-capture suits and special equipment records their movements.

We are now seeing generative AI that can capture animation directly from video. This is far more efficient, because it removes the need for expensive motion-capture equipment and means animation can be extracted from existing footage.


Another exciting possibility is using AI models to apply filters to existing animations or add new effects, for example making an animated character look drunk, old, or happy with one click. Companies in this space include Kinetix, DeepMotion, RADiCAL, Move Ai, and Plask.

Level Design & World Building

One of the most time-consuming aspects of game creation is building the game world, and generative AI can help with this task. Games like Minecraft, No Man's Sky, and Diablo are known for procedurally generated levels: levels are randomly generated and different every time, yet follow rules set by the level designer. A big selling point of the new Unreal Engine 5 is its collection of procedural tools for open-world design, such as foliage placement.
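Procedural generation of this kind is easy to sketch: random noise shaped by designer-set rules. The toy cellular-automata generator below (plain Python, not any company's tool) produces a different cave layout each run while obeying the same rules:

```python
import random

def generate_cave(width, height, fill_prob=0.45, steps=4, seed=None):
    """Toy cellular-automata level generator: random noise smoothed by
    designer-set rules, so every run yields a different but plausible cave."""
    rng = random.Random(seed)
    grid = [[rng.random() < fill_prob for _ in range(width)] for _ in range(height)]
    for _ in range(steps):
        nxt = [[False] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                walls = sum(
                    grid[ny][nx]
                    for ny in range(y - 1, y + 2)
                    for nx in range(x - 1, x + 2)
                    if 0 <= ny < height and 0 <= nx < width and (ny, nx) != (y, x)
                )
                # Designer rule: a cell becomes (or stays) a wall if enough
                # of its neighbours are walls.
                nxt[y][x] = walls >= 4 if grid[y][x] else walls >= 5
        grid = nxt
    return grid

level = generate_cave(60, 30, seed=42)
for row in level:
    print("".join("#" if cell else "." for cell in row))
```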

Companies such as Promethean, MLXAR or Meta's Builder Bot all see an opportunity in generative AI technologies. There has been academic research on this for some time, including generative techniques for Minecraft or level design for Doom.

Why do generative AI tools have potential for game level design? Because AI can create levels and game worlds in different styles. Imagine tooling that could rapidly generate a game world set in the glamorous New York of the 1920s, a mysterious dystopian Blade-Runner-esque future, or a Tolkien-esque fantasy world.

The following concepts show different styles of game levels generated by Midjourney from text prompts:

Audio

Sound and soundtracks are an important part of the gaming experience. Companies are already using generative artificial intelligence to generate audio to complement graphics efforts.

Sound Effects

Sound effects are another attractive area for AI. There have been academic papers exploring the use of AI to generate foley sound effects (such as footsteps) in film, but there are still few commercial products that can be applied directly in games.

The author believes this is only a matter of time. The interactive nature of games makes them an obvious application for generative AI: it could produce static sound effects as part of production (laser gun sounds in a game, for example) and also create interactive sound effects in real time at runtime.

Imagine generating footstep sounds for player characters (translator's note: think of footsteps in CS or PUBG). Most games solve this with a small set of pre-recorded footstep sounds: walking on grass, walking on gravel, running on grass, running on gravel, and so on. These sounds are cumbersome to produce and manage, and they sound repetitive and unrealistic at runtime.

A better approach is to have generative AI synthesize suitable, more realistic sound effects in real time, driven by in-game parameters such as the ground surface, the character, their weight, gait, and footwear.
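As a crude stand-in for such a system, the sketch below synthesizes a footstep-like sound procedurally, with in-game parameters (surface, weight, gait) driving its length, loudness, and brightness; a real product would replace this hand-written synthesis with a generative audio model conditioned on the same parameters. All constants are illustrative.

```python
import numpy as np

SURFACE_BRIGHTNESS = {"grass": 0.2, "gravel": 0.7, "stone": 0.5}  # illustrative values

def footstep(surface="gravel", weight_kg=80.0, running=False, sr=44100):
    """Toy procedural footstep: shaped noise whose length, loudness and
    brightness are driven by in-game parameters (surface, weight, gait)."""
    dur = 0.12 if running else 0.18              # running steps are shorter, sharper
    t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)
    noise = np.random.randn(t.size)
    # Crude low-pass filter: soft surfaces keep less high-frequency content.
    brightness = SURFACE_BRIGHTNESS.get(surface, 0.5)
    kernel = np.ones(int(1 + (1.0 - brightness) * 40))
    kernel /= kernel.size
    noise = np.convolve(noise, kernel, mode="same")
    envelope = np.exp(-t * (60 if running else 35))  # fast decay of the impact
    loudness = min(1.0, weight_kg / 120.0)           # heavier characters land louder
    return (loudness * envelope * noise).astype(np.float32)

samples = footstep(surface="grass", weight_kg=95.0, running=True)
```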

Music (Game Soundtracks)

A soundtrack matters because, as in film and television, it sets the emotional tone of the story. But games can last hundreds or even thousands of hours, so looping music quickly becomes repetitive or tiresome to players. In addition, because games are interactive, it is hard for a soundtrack to precisely match the scenes and actions happening on screen at any given moment.

Adaptive music has been a topic of interest in game soundtracks for more than two decades, going all the way back to Microsoft's "DirectMusic" system for creating interactive music. DirectMusic was never widely adopted, mainly because composing in the format is difficult, and only a few games, such as Monolith's No One Lives Forever, created truly interactive soundtracks.

Now many startups are trying to create AI-generated music, including Soundful, Musico, Harmonai, Infinite Album, and Aiva. Many current tools, such as OpenAI's Jukebox, are highly computationally intensive and cannot run in real time, but once the initial models are built, real-time operation should become possible.
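Even before fully generative soundtracks arrive, the adaptive idea can be sketched as game state driving the mix of music layers. The toy mapping below is purely illustrative and not any vendor's approach; a generative system would replace the pre-rendered stems with music synthesized on the fly, conditioned on the same state.

```python
def stem_levels(tension: float, in_combat: bool, player_health: float) -> dict:
    """Map game state (values in 0..1, plus a combat flag) to stem volumes."""
    levels = {
        "ambient_pad": 1.0 - 0.5 * tension,
        "percussion":  tension if in_combat else 0.2 * tension,
        "strings":     min(1.0, tension + (1.0 - player_health) * 0.5),
        "brass":       1.0 if (in_combat and tension > 0.7) else 0.0,
    }
    # Clamp and round for the audio engine.
    return {name: round(max(0.0, min(1.0, v)), 2) for name, v in levels.items()}

print(stem_levels(tension=0.8, in_combat=True, player_health=0.3))
```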

Dialogue & Voice (Speech and Dialog)

Many companies are trying to create realistic voices for game characters, building on the long history of computer speech synthesis; they include Sonantic, Coqui, Replica Studios, Resemble.ai, Readspeaker.ai, and many more. Using generative AI for speech has many advantages, and competition in this space is fierce.

Real-time dialogue generation. Typically, in-game voice lines are pre-recorded by voice actors and limited to pre-scripted dialogue. With generative AI dialogue, characters can say anything, which means they can react fully to what players do.

Role play. Many gamers want to play as characters who bear little resemblance to their real-world identities. But the illusion is shattered as soon as players speak with their own voices; using a generated voice that matches the player's avatar preserves it.

Voice control. When speech is generated by AI, we can control the nuances of the voice, such as intonation, inflection, emotional resonance, phoneme length, and accent.

Localization (easier translation and international release). Dialogue can be translated into any language and spoken in the same voice; companies like Deepdub specialize in this niche.
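To show what "controlling the nuances" might look like from a developer's side, here is a hypothetical request shape for a neural text-to-speech call; the fields and the render_line function are illustrative placeholders, not any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class VoiceStyle:
    """Controllable nuances for a generated voice line (illustrative fields)."""
    emotion: str = "neutral"      # e.g. "angry", "weary", "joyful"
    intonation: float = 0.5       # 0 = flat, 1 = highly expressive
    speaking_rate: float = 1.0    # relative phoneme length / tempo
    accent: str = "neutral"
    language: str = "en"          # localization: same voice, different language

def render_line(text: str, voice_id: str, style: VoiceStyle) -> bytes:
    """Placeholder for a call to a neural TTS service like those named above;
    the request shape is hypothetical, not a real vendor API."""
    raise NotImplementedError("wire up your TTS provider here")

style = VoiceStyle(emotion="weary", intonation=0.7, speaking_rate=0.9, language="fr")
# audio = render_line("The gates close at dusk.", voice_id="guard_03", style=style)
```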

NPCs & Player Characters

Many startups are researching the use of generative artificial intelligence to create interactive characters. In addition to the market opportunities for NPCs in games, virtual assistants or receptionists also have a lot of room for growth. This effort dates back to the early days of artificial intelligence research.

Many companies are building general-purpose chatbots, many of them powered by language models similar to GPT-3. A handful of companies are specifically building chatbots for entertainment, such as Replika and Anima, which try to create virtual companions. The era of virtual companions shown in the movie "Her" (a sci-fi romance written and directed by Spike Jonze, starring Joaquin Phoenix and Scarlett Johansson) may arrive soon.

We are now seeing the next iteration of these chatbot platforms, such as Charisma.ai, Convai.com, and Inworld.ai, which can render emotion-driven 3D characters and give creators tools to assign those characters goals, so they can play a narrative role in the game and help drive the plot forward rather than being purely decorative.
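A minimal sketch of a goal-driven NPC built on a general-purpose language model (using the OpenAI Python SDK here; the model name, persona, and goal are illustrative assumptions, not how Charisma.ai, Convai, or Inworld actually work):

```python
from openai import OpenAI  # any chat-capable LLM could be substituted

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = (
    "You are Mira, a blacksmith NPC in a medieval fantasy town. "
    "Goal: persuade the player to recover your stolen hammer from the bandit camp. "
    "Stay in character, keep replies under two sentences, never mention being an AI."
)

def npc_reply(player_line: str, history: list[dict]) -> str:
    """Generate an in-character NPC line that reacts to what the player just said."""
    messages = [{"role": "system", "content": PERSONA}] + history + [
        {"role": "user", "content": player_line}
    ]
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return resp.choices[0].message.content

history: list[dict] = []
print(npc_reply("Why is your forge cold today?", history))
```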

Unified Platform

The most successful generative AI tools, such as Runwayml.com, bring together a wide range of creator tools. There is currently no such company in the gaming space, and a16z would love to invest in a generative AI gaming solution offering:

  • A full suite of generative AI tools covering the entire production process (code, asset generation, textures, audio, etc.)

  • Generic scene-creation tooling designed to fit typical game production

Summary

Original link
