All The Secrets of Ryse’s Amazing Character Technology Revealed with Data, Artwork and Renders
At the Game Developers Conference, Crytek Senior Character Artist Abdenour Bachir gave a presentation titled “Play the Cut Scene: The Characters of Ryse: Son of Rome,” in which he showcased the rather amazing character technology used for the game.
Crytek faced quite a few challenges while working on Ryse: it was a new IP, made by a new team, on a new console, as a launch title. It needed to match films in visual quality, and the team had just 18 months to complete it.
The art style wasn’t always as realistic as the final version we saw on our consoles. Here are three different styles that were discarded:
New technologies were implemented in the game in the form of physically based rendering, real-time cloth simulation and a new hair technology.
Physically based rendering has several advantages: it’s more realistic, one material works under all lighting conditions, and it allows a unified rendering pipeline.
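The “one material fits all lighting conditions” point comes down to energy conservation: a physically based material never reflects more light than it receives, so the same parameters hold up under any light. Here is a minimal illustrative sketch of that idea in Python (the function and the simple diffuse-plus-specular split are assumptions for illustration, not Crytek’s actual shader code):

```python
def shade(albedo, specular, n_dot_l, light_intensity):
    """Evaluate a minimal energy-conserving material.

    albedo and specular are in [0, 1]. Energy conservation means the
    diffuse term shrinks as specular reflectance rises, so the surface
    never reflects more light than arrives.
    """
    if not 0.0 <= albedo <= 1.0 or not 0.0 <= specular <= 1.0:
        raise ValueError("albedo and specular must be in [0, 1]")
    n_dot_l = max(0.0, n_dot_l)          # light behind the surface contributes nothing
    diffuse = albedo * (1.0 - specular)  # energy spent on specular can't also be diffuse
    return (diffuse + specular) * n_dot_l * light_intensity

# The same material parameters stay plausible under any light:
assert shade(0.8, 0.3, 1.0, 1.0) <= 1.0  # never brighter than the light itself
```

With a non-conserving model, artists would have to retune materials per scene; conservation is what makes one authored material reusable everywhere.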
The hair technology is based on the preference of the artists working on it, and each character had its own settings. It relies heavily on Ornatrix and 3D Studio Max.
The cloth tech is a new system, powerful and easy to use, that deforms each mesh based on a physical simulation.
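Real-time cloth systems are commonly built on particles advanced by Verlet integration and then pulled back together by distance constraints. The sketch below shows that general technique in one dimension for brevity; it is illustrative only and not Crytek’s actual implementation:

```python
def verlet_step(pos, prev_pos, accel, dt):
    """Advance a particle one step; velocity is implicit in (pos - prev_pos)."""
    new_pos = 2.0 * pos - prev_pos + accel * dt * dt
    return new_pos, pos  # new position, and the old one becomes prev_pos

def satisfy_distance(p_a, p_b, rest_length):
    """Pull two connected cloth particles back toward their rest distance."""
    delta = p_b - p_a
    dist = abs(delta)
    if dist == 0.0:
        return p_a, p_b  # coincident particles: nothing sensible to do
    correction = (dist - rest_length) / dist * 0.5
    return p_a + delta * correction, p_b - delta * correction

# One gravity step from rest, then one constraint pass:
new_pos, prev_pos = verlet_step(0.0, 0.0, -9.8, 0.1)  # falls by 0.098
a, b = satisfy_distance(0.0, 2.0, 1.0)                # stretched link relaxes to (0.5, 1.5)
```

In practice the constraint pass is iterated a few times per frame over every edge of the cloth mesh, which is what makes the simulation stable and cheap.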
Models were initially created in high resolution, matching CG movie quality. They were split into parts, with multiple subtools per part. There were up to 4 million polygons per subtool and easily over 130 million polygons per character. Armor was modeled from the base sculpt, modified with specific damage, general damage and surface noise. Textures were authored in 4K resolution and had 4 channels (diffuse map, gloss map, specular map and normal map). Each character had up to 144K of total texture resolution (you can find a visual comparison between the high resolution models and the in-game models here).
Physically based rendering-ready textures were prepared together with ZBrush masks. Each material was split into layers, and the masks were applied to each layer.
After that, models were optimized down to a polycount that could fit into a console game, with level of detail budgets for each character ranging between 40,000 and 160,000 triangles. The process took up to a week per character.
In-game textures are still authored in 4K. Each part has three different textures: Diffuse, Specular and DDNA, with Gloss and Normal maps coupled.
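“Gloss and Normal maps coupled” refers to packing both into one texture so a single fetch yields both: here assumed to be normal in the color channels and gloss in the alpha. A minimal sketch of that packing (the layout and function names are illustrative assumptions):

```python
def pack_ddna(normal_rgb, gloss):
    """Pack a tangent-space normal (RGB) and a gloss value into one RGBA texel.

    Assumed layout: normal in RGB, gloss in alpha; all channels 8-bit [0, 255].
    """
    r, g, b = normal_rgb
    for v in (r, g, b, gloss):
        if not 0 <= v <= 255:
            raise ValueError("texel channels must be 8-bit values")
    return (r, g, b, gloss)

def unpack_ddna(texel):
    """Split an RGBA texel back into a normal and a gloss value."""
    r, g, b, a = texel
    return (r, g, b), a

# Round trip: one texture sample returns both the normal and the gloss.
texel = pack_ddna((128, 128, 255), 200)
assert unpack_ddna(texel) == ((128, 128, 255), 200)
```

Packing two maps into one texture halves both memory and the number of samples the shader has to take per pixel.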
Three different shaders were used for characters: the ILLUM shader was implemented for general use. It’s very versatile and has an option for subsurface scattering (a feature that simulates light scattering through translucent materials). The HUMANSKIN shader was used for high fidelity skin. It features very realistic subsurface scattering and can have a translucency map. Eyes had their own shader, including lens distortion, iris self-shadowing and a subsurface scattering approximation.
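A common starting point for cheap subsurface scattering in real-time shaders is “wrap” lighting, which lets light bleed past the light/shadow terminator the way skin does. A small illustrative sketch of that idea (this is a generic technique, not the actual HUMANSKIN shader, which is far more sophisticated):

```python
def wrap_diffuse(n_dot_l, wrap):
    """Wrap lighting: a cheap subsurface-scattering approximation.

    wrap = 0.0 gives standard Lambert diffuse; values toward 1.0 let
    light "wrap" around the terminator, softening the shadow edge the
    way translucent materials such as skin appear to.
    """
    return max(0.0, (n_dot_l + wrap) / (1.0 + wrap))

# At the terminator (n_dot_l == 0), Lambert is black but wrapped skin is not:
assert wrap_diffuse(0.0, 0.0) == 0.0
assert wrap_diffuse(0.0, 0.5) > 0.0
```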
Faces were first 3D scanned…
Then they underwent a process of cleaning up and characterization.
A sample of skin tones was prepared, adding a degree of color variation, specific details, skin weathering and finally pores and wrinkles.
An atlas map (basically a bigger texture including multiple smaller ones for different parts) was created for textures, with six areas per channel and up to 24 areas. Each texture could have unlimited wrinkle maps based on multiple scanned poses.
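Sampling from an atlas means remapping each part’s local UV coordinates into that part’s rectangle of the shared texture, which is simple arithmetic. A sketch (the region layout below is hypothetical, chosen only to echo the six-areas-per-channel scheme from the talk):

```python
def atlas_uv(local_uv, region):
    """Map a part's local (u, v) in [0, 1] into its rectangle of the atlas.

    region is (u0, v0, width, height) in atlas space -- e.g. one of six
    areas per channel, as described in the presentation.
    """
    u, v = local_uv
    u0, v0, w, h = region
    return (u0 + u * w, v0 + v * h)

# A hypothetical part occupying the top-left sixth of a 3x2 atlas grid:
region = (0.0, 0.0, 1.0 / 3.0, 0.5)
assert atlas_uv((0.5, 0.5), region) == (1.0 / 6.0, 0.25)
```

Because every part shares one physical texture, the whole character can be drawn with fewer texture switches, which is the point of atlasing.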
Creating the characters rested on a few relevant pillars: there were no cutscene-specific rigs; the scope was huge (14 story characters and 120 minutes of cutscenes); everything needed to feel hand-crafted; and actors were cast for voice acting and performance capture of both face and body.
This also created challenges: the rigs had to be universal between gameplay and cutscenes, it was the first project the team made in Maya, and there was a high number of hero characters.
On Xbox One, Marius ended up with 85,000 triangles, 815 joints (of which 585 are deforming, with 260 for the face alone), and 250 corrective facial blendshapes (basically different expressions that can be switched sequentially or used in conjunction with each other to create animations).
Facial modeling relied on the scans, but artists deviated in some cases by up to 20%. The 260 facial joints are reduced to 70 and then to 10 with distance, depending on the LOD, and 85 of the 250 blendshapes can be active at any given time. The engine computes up to 200,000 vertex deltas (the offset of each vertex from its neutral position) per frame just for faces.
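Those per-frame vertex deltas follow the standard blendshape formula: each shape stores a per-vertex offset from the neutral mesh, and the final position is the neutral position plus the weighted sum of active offsets. A minimal illustrative sketch (not the engine’s code):

```python
def apply_blendshapes(neutral, shapes, weights):
    """Blend a neutral mesh with weighted per-vertex deltas.

    neutral: list of (x, y, z) vertex positions
    shapes:  list of blendshapes, each a list of (dx, dy, dz) deltas
    weights: one weight per shape; only nonzero weights cost work, which
             is why capping active shapes (85 of 250) keeps frames cheap.
    """
    result = [list(v) for v in neutral]
    for deltas, w in zip(shapes, weights):
        if w == 0.0:
            continue  # inactive shapes are skipped entirely
        for i, (dx, dy, dz) in enumerate(deltas):
            result[i][0] += w * dx
            result[i][1] += w * dy
            result[i][2] += w * dz
    return [tuple(v) for v in result]

# One vertex, two hypothetical shapes: a half-strength smile plus a full blink.
neutral = [(0.0, 0.0, 0.0)]
smile = [(1.0, 0.0, 0.0)]
blink = [(0.0, -1.0, 0.0)]
assert apply_blendshapes(neutral, [smile, blink], [0.5, 1.0]) == [(0.5, -1.0, 0.0)]
```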
If you want to check out the full slides of the presentation, you can download them here. You can also see a full gallery with all the work-in-progress pictures below (and yes, the last two pictures are just funny glitches; developers like to show those off).