The Quarry Used the Facial Animation Tool Made for Thanos

Performance-capture acting for the game felt like filming a stage play.

Whatever you think about The Quarry, you can't deny that its motion capture is really impressive. To make it look so good, Supermassive Games partnered with Digital Domain, a VFX studio that produced visual effects for Titanic, The Curious Case of Benjamin Button, and several Marvel films. Digital Domain's creative director and visual effects supervisor, Aruna Inversin, revealed the details behind the game's mocap.

With The Quarry, the process of performance-capture acting felt more like filming a stage play, Inversin told The Washington Post. To create the photorealistic characters, Digital Domain used the AI facial capture system Masquerade, originally developed to replicate Josh Brolin’s likeness for his character Thanos in Avengers: Infinity War. For The Quarry, the studio needed something that could track the actors' movements and facial expressions and create digital characters that could be edited in real time, so it built Masquerade 2.0.

Actors performed their scenes; Digital Domain uploaded the performances onto a computer with their body performance, head performance, and audio time-coded and synced, then sent it all to Supermassive to review in the game engine.
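Digital Domain hasn't published its pipeline code, but the sync step can be pictured with the short Python sketch below, which lines up body, head, and audio streams by their timecodes. Every class and function name here is hypothetical, and a 30 fps capture rate is assumed purely for the example.

```python
# Minimal sketch of timecode-based syncing, not Digital Domain's actual pipeline.
# All names (Stream, sync_streams) and the 30 fps rate are assumptions.
from dataclasses import dataclass

FPS = 30  # assumed capture frame rate

@dataclass
class Stream:
    name: str      # "body", "head", or "audio"
    start_tc: str  # SMPTE timecode, e.g. "01:02:03:15"
    frames: list   # per-frame samples

def tc_to_frames(tc: str, fps: int = FPS) -> int:
    """Convert an HH:MM:SS:FF timecode string into an absolute frame count."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def sync_streams(streams: list[Stream]) -> dict[str, list]:
    """Trim every stream so they all start on the latest common timecode."""
    latest_start = max(tc_to_frames(s.start_tc) for s in streams)
    return {
        s.name: s.frames[latest_start - tc_to_frames(s.start_tc):]
        for s in streams
    }
```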

According to Digital Domain, it rendered 250 million frames for The Quarry, which is more than it would have for a typical film.

The VFX team transposed the actors’ performances onto their in-game character models via a multistep process Inversin called the "DD pipeline." First, the team performed facial scans of each cast member, then they filmed their parts at Digital Domain’s performance capture stage in Los Angeles. The actors wore full motion-capture suits and facial rigs to record their expressions; their faces were covered in dots, which the cameras tracked along with the markers on their suits to triangulate and map their movements to a virtual skeleton using Masquerade 2.0.
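The triangulation itself is standard multi-view geometry: a marker seen by two or more calibrated cameras can be located in 3D. The sketch below shows the classic linear (DLT) two-view version as an illustration; it is not Masquerade's actual solver, and the 3x4 projection matrices are assumed to come from a prior camera calibration.

```python
# Illustrative two-view triangulation of a single marker (not Masquerade's code).
# P1 and P2 are assumed 3x4 camera projection matrices from calibration.
import numpy as np

def triangulate(pt1, pt2, P1, P2):
    """Recover a 3D marker position from its pixel coordinates in two views
    using linear (DLT) triangulation."""
    x1, y1 = pt1
    x2, y2 = pt2
    A = np.array([
        x1 * P1[2] - P1[0],
        y1 * P1[2] - P1[1],
        x2 * P2[2] - P2[0],
        y2 * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```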

The team constructed molds of each actor’s face, placed the dots on the mold, and drilled holes through it to create a physical template they then used to ensure the dots were placed in consistent locations throughout shooting. If some of the markers went missing, Masquerade’s AI could automatically fill in the blanks.
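The article doesn't say how Masquerade's AI reconstructs lost markers, but the simplest way to picture the idea is to bridge the frames where a marker drops out, as in the toy sketch below, which just interpolates between the last and next good frames.

```python
# Toy example of filling in frames where a facial marker was occluded.
# Masquerade reportedly uses machine learning for this; this sketch only
# interpolates linearly across the gaps.
import numpy as np

def fill_missing(track: np.ndarray) -> np.ndarray:
    """track: (num_frames, 3) marker positions, with NaN rows for lost frames."""
    filled = track.copy()
    valid = ~np.isnan(track).any(axis=1)
    frames = np.arange(len(track))
    for axis in range(3):
        filled[~valid, axis] = np.interp(
            frames[~valid], frames[valid], track[valid, axis]
        )
    return filled
```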

Artists then analyzed the footage and identified the actors’ expressions to note how the dots moved in those instances. The data recorded throughout these shoots, facial scans, and range-of-motion tests was used to build a library of each actor’s idiosyncrasies, which could then be used to train the AI so the computer could read their movements and expressions correctly.
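As a rough illustration of what such a per-actor library enables, the sketch below fits a simple regression from marker positions to facial-rig weights. The file names and the choice of ridge regression are assumptions made for the example; the production models behind Masquerade 2.0 are far more sophisticated.

```python
# Hedged sketch: learning an actor-specific mapping from marker positions to
# facial rig (blendshape) weights. Data files and model choice are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical training library built from shoots, scans, and range-of-motion tests:
# X: (num_frames, num_markers * 3) flattened marker positions
# y: (num_frames, num_blendshapes) artist-labeled expression weights
X = np.load("actor_markers.npy")
y = np.load("actor_blendshapes.npy")

model = Ridge(alpha=1.0).fit(X, y)

def solve_expression(frame_markers: np.ndarray) -> np.ndarray:
    """Predict blendshape weights for one captured frame of markers."""
    return model.predict(frame_markers.reshape(1, -1))[0]
```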

This approach made the game look more realistic and 'alive.'

"That’s the performance that Ted Raimi did and you see it in the game and that’s his lip quivering and that’s his look around, you know, that’s him," Inversin said. "An animator didn’t go in and fix that. You know, that’s what he did onstage."

To make this possible, Digital Domain used helmet stabilization and eye tracking. The team modified the eye-tracking software Gaze ML to improve the accuracy and appearance of the digital eyes, while new machine learning algorithms added to Masquerade 2.0 allowed it to analyze the captured footage and compensate for any jostling as the actors moved around.
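Conceptually, stabilization means rigidly aligning each frame's markers to a neutral reference pose so that helmet or head movement isn't mistaken for facial motion. The sketch below does that with the standard Kabsch algorithm; it's an illustration of the idea, not Digital Domain's implementation.

```python
# Conceptual helmet-stabilization step (not Digital Domain's code): rigidly
# align each frame's markers to a neutral reference using the Kabsch algorithm.
import numpy as np

def stabilize(frame: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """frame, reference: (num_markers, 3). Returns frame rigidly aligned to reference."""
    f_c, r_c = frame.mean(axis=0), reference.mean(axis=0)
    H = (frame - f_c).T @ (reference - r_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return (frame - f_c) @ R.T + r_c
```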

Video games need to render those animations in real time in response to the player’s actions, so Digital Domain used Chatterbox, a tool developed by the company’s Digital Human Group to streamline the process of analyzing and rendering live-action facial expressions. The facial expressions and keyframes were fed into Chatterbox, which used machine learning algorithms to automatically track the dots on the actors’ faces and calculate the best options for adjusting facial expressions.
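On the game side, real-time facial animation usually comes down to evaluating blendshapes every frame from the solved weights. The snippet below shows that last step in its simplest form; it illustrates the general technique rather than Chatterbox or Supermassive's engine code.

```python
# Simple per-frame blendshape evaluation, as a game engine might do it.
# An illustration of real-time facial animation, not Chatterbox itself.
import numpy as np

def evaluate_face(neutral: np.ndarray, deltas: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """neutral: (V, 3) vertices, deltas: (num_shapes, V, 3) blendshape offsets,
    weights: (num_shapes,) weights solved for the current frame."""
    return neutral + np.tensordot(weights, deltas, axes=1)

# Each frame, the engine looks up the solved weights for whichever branch the
# player's choices put the scene on, then deforms the face mesh with them.
```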

After 42 days of shooting on the motion capture stage, Digital Domain had 32 hours of captured footage to put into the game. Of the 4,500 shots deemed game-quality, only about 27 had to be tweaked by animators in post-production, a huge advancement for everyone involved.

"To have the ability for actors and directors to capture stuff on the mocap stage and know that their performances are being translated effectively in a video game with the nuance of their face and their performance, I think it’s a big selling point for everyone that wants to see those experiences as well as direct and consume that media," shared Inversin.

Learn more about the production here.
