At the Game Developers Conference in San Francisco, Ubisoft hosted a panel on an innovative animation technique named "Motion Matching." The panel, which DualShockers attended, described the experimental process, which according to Ubisoft Toronto Animation Director Kristjan Zadziuk could bring about a significant shift in the field, comparable to the introduction of motion capture.

The panel was aptly titled "Motion Matching and The Road to Next-Gen Animation."

Unfortunately, while the panel included a lot of videos, filming wasn't allowed, so we took plenty of pictures to give you at least a limited idea of what was showcased. Below you can also find a full recap of the talk and all the slides that were shown.

  • After Splinter Cell: Blacklist, Zadziuk worked on "a couple of unannounced titles" at Ubisoft Toronto, including some "cool tech."
  • This presentation featured prototype tech, and due to its potential, Ubisoft is "keen" on investing in it going forward. The tech is being considered for future games, but the assets you'll see below do not represent any game currently in development at the company.
  • The concept of Motion Matching has been around for a while in theory, but only this generation of consoles has made practical use possible.
  • The current process involves capturing a lot of individual motion cycles, making sure that every needed action is covered. Animators then painstakingly cut the data into clips, polish them, create the loops and more. Only then do the animations get implemented in the game.
  • The goal of motion matching is "going from A to B with as little fuss as possible," allowing animators to focus on what the actions really are without having to worry about losing fidelity.
  • With the current animation process, a sudden change of direction would often prompt animators to use a transitional animation to replicate the change in balance, losing responsiveness and likely fidelity as well.
  • The team wanted to improve realism, control, simplicity and variety.
  • An oversimplified explanation of motion matching: a small set of criteria describes what the character has to do over a set time (for instance, one second). Factors like position and velocity, past and present trajectory, the positions and velocities of joints like the feet and hands, and any tags that might be present are then taken into account. The system uses these to find an appropriate matching section in an unstructured library of poses, matching the required pose at the time of input (a rough sketch of this matching step follows the list below).
  • For the first test, Zadziuk got into a motion capture suit and ran around the studio doing all sorts of actions, producing 10-15 minutes' worth of untouched data in a single file. This allowed the system to fully recreate weight shifting and quick changes of direction. While moving in a circle, if the player took his finger off the button, the character would decelerate naturally. The team felt that the test was really successful.
  • The test brought forth some fears and a lot of assumptions. How would animators work with it? Was it possible to use it only with motion capture? How much data was needed?
  • The initial test seemed to indicate that all that was needed was to plug in the data and leave it at that.
  • The first step was to create a routine dubbed "Dance Cards" to capture data more efficiently. The Dance Cards indicated all the actions that were needed for the capture, and would be repeated for walk, jog and run.
  • It could be really tiring to do everything in one shot: a take could last up to four minutes, and the actor would need to remember a lot of moves and be extremely fit. That's why the Dance Cards were split into smaller, more manageable phases, broken into starts and stops, turns, accelerations, decelerations, transitions between speeds, and snaking, which was of key importance to Zadziuk.
  • The goal of the Dance Cards was to capture the minimum number of moves possible while creating the maximum coverage of a basic locomotion set.
  • After the data was captured, it was imported directly into the engine without any touch-up just to see what happened.
  • While the absence of obstacles made things easier, in four hours the team created a better-looking and better-feeling locomotion system than what would take the best team at Ubisoft at least three months to make. That was "awesome and terrifying."
  • In order to move forward, it was crucial to learn how to manipulate the data. This brought forth some frequently asked questions by animators about motion matching.
    • How can animators control responsiveness versus quality?
    • Can this be integrated into existing systems?
    • How can animators work with the system?
  • The team tried different kinds of locomotion, still based on the human model, like the "Ape" Nav. They thought of motion capturing it, but it was so complicated that they actually decided to keyframe it (creating the animations manually). This provided valuable insight.
  • Motion Shaders were created as a node graph system to give animators and designers more control over how the moves look. They can set the relative importance of elements like position, velocity, hands and feet, changing the look and feel of the locomotion as the available data is used differently (a second sketch after the list below illustrates the idea).
  • This hybrid system also made it possible to determine whether respecting the animation's position or the user's input had priority.
  • The system used motion matching as an animation node. It was great for transitions and could be used as a replacement for the entire movement system. The team only scratched the surface of what can be done with this.
  • To test further, the team created off-balance and stumbling movements that still retained full player control.
  • Locomotion involving impacts from all directions and from all different types of force was also tested. The result was a seamless transition from normal locomotion to impacted locomotion, actually driven by physical impulses.
  • Using this system will require a shift in mentality, as it's probably the biggest transition since the introduction of motion capture.
  • Keyframing can actually be used within the structure.
  • You can simply plug in the data and receive a finished result, but the more data you use, the more precise the result will be.
  • The system performs better if you feed it the right data, instead of all the data.
  • The team achieved the following successes:
    • Minimum setup is required.
    • The motion is high quality and biomechanically correct, retaining full player control.
    • Dance cards were successful in capturing all the moves needed in the smallest amount of time and space.
    • It was super-simple to add a wide variety of locomotion.
  • There were also problems and areas that can be improved upon:
    • Editing data can be tough. The system brings you to 70% of the work really quickly, but the problem is the last 30%.
    • The system is very heavy on data, with a lot of it going to waste.
    • For now the system is restricted only to human-like rigs, and all animation is full-body.
    • It was difficult to make designers and animators work together on the data, and altering it could have undesirable consequences, like breaking the animation altogether.
    • Mirroring did not work.
    • For the moment the focus is only on locomotion.
  • In the future, the team has the following goals:
    • Exploring uses beyond locomotion. They touched on traversal, cover, various types of AI and jumping, but there's a lot more work to do in these fields.
    • The hybrid system needs to be improved.
    • The Dance Cards need to be improved, splitting the files to be more readable and clean. The team has also looked at improving the capture routine itself to get better data to begin with.
    • In order to improve the results, the system needs to be integrated with other tech currently in development at Ubisoft.
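
For developers curious about what the matching step described above might look like in practice, here is a minimal sketch in Python. To be clear, this is our own illustration, not Ubisoft's implementation: the `Pose` layout, the specific features and the weights are assumptions based on the criteria Zadziuk listed (position, velocity, trajectory, joint velocities).

```python
# A minimal sketch of the pose-matching step. NOT Ubisoft's implementation:
# the Pose layout, features and weights are illustrative assumptions only.
import math
from dataclasses import dataclass


@dataclass
class Pose:
    """One frame sampled from the unstructured capture library."""
    position: tuple        # root position on the ground plane
    velocity: tuple        # root velocity
    trajectory: list       # sampled root positions over the set time window
    foot_velocity: tuple   # stand-in for per-joint features (feet, hands, ...)


def best_match(query, library, w_pos=1.0, w_vel=1.0, w_traj=1.0, w_feet=1.0):
    """Return the index of the library pose with the lowest weighted cost.

    Each update the character can jump to the returned frame and keep
    playing from there, with no hand-cut clips or authored transitions.
    """
    best_i, best_cost = 0, float("inf")
    for i, c in enumerate(library):
        cost = (w_pos * math.dist(c.position, query.position)
                + w_vel * math.dist(c.velocity, query.velocity)
                + w_feet * math.dist(c.foot_velocity, query.foot_velocity)
                + w_traj * sum(math.dist(p, q)
                               for p, q in zip(c.trajectory, query.trajectory)))
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i
```

A brute-force scan like this also hints at why the technique is "very heavy on the data" and only became practical on this generation of hardware; a real implementation would presumably prune or index the library.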
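
Building on the same sketch, here is how the Motion Shader-style weighting and the responsiveness-versus-quality trade-off could be expressed. Again, the presets and the `tick` function are hypothetical; they only illustrate the idea of reading the same data differently by changing the importance of each criterion.

```python
# Hypothetical Motion Shader presets, reusing Pose and best_match from the
# sketch above; the exact numbers are made up for illustration.
RESPONSIVE = dict(w_pos=0.5, w_vel=1.0, w_traj=2.0, w_feet=0.5)    # trust player input
HIGH_QUALITY = dict(w_pos=2.0, w_vel=1.0, w_traj=0.5, w_feet=2.0)  # trust pose continuity


def tick(current, desired_trajectory, library, weights=RESPONSIVE):
    """One update of a motion-matching 'animation node': build a query from
    the current pose plus the trajectory the player is asking for, then jump
    to the best-matching frame in the unstructured library."""
    query = Pose(position=current.position,
                 velocity=current.velocity,
                 trajectory=desired_trajectory,
                 foot_velocity=current.foot_velocity)
    return library[best_match(query, library, **weights)]
```

Weighting the trajectory heavily makes the character obey input even if the pose pops, while weighting position and joints heavily keeps the motion cleaner at the cost of some latency, which maps onto the responsiveness-versus-quality question animators raised.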

If you're a developer or animator who didn't have a chance to attend GDC, or this specific presentation, and you're interested in the full audio recording, we're happy to share. Just contact giuseppe@dualshockers.com, and we'll send it your way.

[On-location reporting: Steven Santana]