A Whole New Game: NVIDIA Research Brings AI to Computer Graphics


The same GPUs that put games on your screen could soon harness the power of AI to help game and film makers move faster, spend less and create richer experiences.

At SIGGRAPH 2017 this week, NVIDIA is showcasing research that makes it far easier to animate realistic human faces, simulate how light interacts with surfaces in a scene and render realistic images more quickly.

NVIDIA is combining its expertise in AI with its long history in computer graphics to advance 3D graphics for games, virtual reality, movies and product design.

Forward Facing

Game studios create animated faces by recording video of actors performing every line of dialogue for every character in a game. They use software to turn that video into a digital double of the actor, which later becomes the animated face.

Existing software requires artists to spend hundreds of hours revising these digital faces to more closely match the real actors. It’s tedious work for artists and costly for studios, and it’s hard to change once it’s done.

Reducing the amount of labor involved in creating facial animation would let game artists add more character dialogue and additional supporting characters, as well as give them the flexibility to quickly iterate on script changes.

Remedy Entertainment — best known for games like Quantum Break, Max Payne and Alan Wake — approached NVIDIA Research with an idea for producing realistic facial animation for digital doubles with less effort and at lower cost.

Using AI, researchers automated the task of converting live actor performances (left) to computer game virtual characters (right).

Artificially Intelligent Game Faces

Using Remedy’s vast store of animation data, NVIDIA GPUs, and deep learning, NVIDIA researchers Samuli Laine, Tero Karras, Timo Aila, and Jaakko Lehtinen trained a neural network to produce facial animations directly from actor videos.

Instead of having to perform labor-intensive data conversion and touch-up for hours of actor videos, NVIDIA's solution requires only five minutes of training data. The trained network automatically generates all the facial animation needed for an entire game from a simple video stream. NVIDIA's AI solution produces animation that is more consistent than, and retains the same fidelity as, existing methods.

The research team then pushed further, training a system to generate realistic facial animation using only audio. With this tool, game studios will be able to add more supporting game characters, create live animated avatars, and more easily produce games in multiple languages.