Creating Special Effects for Free Guy

David Cunningham, a CG Supervisor at Digital Domain, talked about the studio's work on the comedy Free Guy, told us about the tools top filmmakers use, and discussed the way VFX studios collaborate with directors when working on films.

First Steps

How and when did you start working on the world of Free Guy? What were your first steps? How did you define the initial vision?

David Cunningham, CG Supervisor at Digital Domain: Digital Domain began work on Free Guy in July 2019, but we were connected to the project before that. Our VFX Supervisor Nikos Kalaitzidis and DFX Supervisor Scott Edelstein were both on set at certain points, so we were involved to some degree early on in the filming process. I had just started at Digital Domain, so I was still familiarizing myself with the studio's pipeline; after reading the script, I began studying the early edits and previz.

As the scope of Digital Domain's work started to become clear, I began assessing crewing needs with the facility, along with any software and unique workflow requirements the project might have. Our work broke down into two sections: the photorealistic work and the gameplay shots. Our approach to the photorealistic work was fairly obvious – our goal was to make the CG assets and the environments look real, which is something DD does exceptionally well. For the gameplay shots, we researched different looks and aesthetic techniques to help the filmmakers nail down the visual style that matched their vision for the film. This meant compiling references of various video games and their rendered worlds, and working out what would be recognizable as a video game yet original and different enough to allow Free City to feel unique.

The City

Could you discuss the production of the film's city? How did you work on it? What were your main sources of inspiration? How did you plan the city?

David Cunningham: The filmmakers chose the city of Boston as the foundation for the fictional Free City, so that became both our inspiration and our 1:1 reference. We used reference photos, tiled images, and all the usual data gathered on set by our incredible integration team. We also had several LiDAR scans that we used to base our models on. Our modeling and layout teams then worked together to bring these disparate buildings and roads together into an environment that would cover the area required by our shots. We also had a drone and aerial footage at our disposal, which helped us map out things like the path of the character “BadAss” in the opening shot. 

One of the more interesting challenges that we had to account for was the effect of the weather in Boston, which can be quite harsh in the winter. Our texture and lookdev teams had their hands full, ensuring that we hit the exact look of the buildings that we wanted, matching the particular aging and grime that had accumulated on walls, sidewalks, and roads. It was vital that we got these right, as so much of our work required transitioning from plate to CG seamlessly.

Throughout the entire project, we remained as true to Boston as possible in our Free City build. Some liberties were taken when necessary to meet the filmmakers’ creative brief, however, which allowed us to break reality to make for more interesting shots.

Tools

What tools did you use during the development? Could you tell our readers about today's top solutions that help filmmakers?

David Cunningham: We have several tools available in our pipeline, and we choose whatever works best for each specific project. For Free Guy, Maya was our standard for modeling, rigging, and animation, and we used it along with V-Ray for most of our lighting. We also used Redshift and Mantra – Redshift is a very fast GPU renderer that allowed for quicker turnaround on many shots, while Mantra was good for rendering volumes that were handled by our FX department and handed straight to comp. We have also begun to integrate a Solaris/USD lighting pipeline, and we're excited to see what new options it creates. Our simulation teams used Houdini for FX and character FX, along with Vellum and the Carbon plugin in Houdini.

Our Groom department used Digital Domain’s proprietary “Samson” software, which allows us to render hair as a render-time procedural, making character scenes lighter and more manageable. For some of the Free City gameplay shots, we were able to take advantage of SideFX Labs' Open Street Map importer to help our postviz team get an accurate base layout of downtown Boston. 

Nuke, which was originally developed by Digital Domain, is our standard package for all compositing and 2D-related work, including deep compositing, which was utilized heavily on Free Guy.

Creating Digital Doubles

You had to create a number of digital doubles for the film, right? What was the process here? How did you create a digital version of Guy himself?

David Cunningham: We created hero digidoubles for all the main characters, including Guy, Millie, BadAss, Dude, Keys, Mouser (in rabbit costume), and Buddy. We also created a few extra digidoubles on top of that for additional background actors, and a whopping 46 unique characters for the gameplay side of things.

For Guy, Ryan Reynolds spent some time at Digital Domain's facilities, where we began with an ICT scan of his face. This gave us polarized and cross-polarized reference images, captured with each light firing individually from several angles. That also supplied us with unlit diffuse, specular, displacement, and normal maps, which we use as a starting point. Our texture team, led by Nick Cosmi, processed those maps internally and extracted other high-resolution details from different LODs of the scanned model of Ryan.

Our texturing and modeling teams frequently work together to ensure details like those we recorded from Ryan’s facial scan are applied to Digital Domain’s “Genman” human topology. The supplied displacement is used as a finer bump pass, and the height difference between the highest scan mesh and our internal hi-res sculpt is extracted for use as the main displacement map. All of these maps allowed us to dial in exactly how our model matches Ryan’s skin, down to the fine skin pore and surface details that are unique to him. When building our shaders, the light stage reference images also allow us to fine-tune the surface properties of the skin to match the high fidelity references. Those include values such as the depth of subsurface scattering, and the specular values of the sheen on human skin – very difficult things to get right, but handled expertly by our Lookdev Lead Brent Elliott and his team.
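The displacement-extraction step described above – measuring the height difference between the high-resolution scan and the internal sculpt – can be sketched in simplified form. This is not Digital Domain's actual pipeline code; it is a minimal NumPy illustration that assumes the two meshes already share corresponding vertices, and measures the signed offset of each scan vertex along the base sculpt's surface normal:

```python
import numpy as np

def displacement_along_normals(base_verts, scan_verts, base_normals):
    """Signed per-vertex displacement: how far each scan vertex sits
    above or below the base sculpt, measured along the base normal."""
    offsets = scan_verts - base_verts                                   # (N, 3) difference vectors
    unit = base_normals / np.linalg.norm(base_normals, axis=1, keepdims=True)
    return np.einsum("ij,ij->i", offsets, unit)                         # (N,) signed heights

# Toy example: a flat patch whose scan bulges 0.02 units outward.
base = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
scan = np.array([[0.0, 0.0, 0.02], [1.0, 0.0, 0.02]])
norm = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
print(displacement_along_normals(base, scan, norm))
```

In production this scalar field would be baked into a displacement map in UV space; the sketch only shows the core geometric measurement.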

Along with the facial scans, we also used a head-mounted camera to capture marker footage of Ryan delivering his lines, and going through a series of facial poses and mouth shapes for the most common human phonemes. Using our proprietary retargeting software “Bullseye,” we were then able to transfer his recorded shape poses onto our clean topology. Several other facial poses unique to Ryan were then sculpted by hand and added to our facial rigs.

Charlatan

Could you discuss Digital Domain's proprietary face-swapping tool, Charlatan? How does it work? What was it used for?

David Cunningham: Charlatan is a proprietary tool, created internally by Digital Domain, that uses machine learning to help replace one performer's face with another. It analyzes source footage – reference footage of the original and replacement performers – and runs millions of inference calculations to resolve the inconsistencies caused by replacing one face with another.

To make this happen, we need a significant amount of data, including footage of both performers delivering the same lines, similar lighting scenarios, matching camera lenses and focal lengths, one (or both) performers delivering training dialogue created to help analyze mouth shapes, and more. The more data we have, and the closer the two sources are, the better. To get the absolute best results, an extremely high-fidelity digidouble also goes a long way toward providing accurate training data for the Charlatan algorithm.

In Free Guy, there was one shot where the character of BadAss, played by Channing Tatum, needed to deliver a pivotal line within the gameplay world. After completing the work, the filmmakers decided that the dialogue needed to be significantly changed. Unfortunately, it wasn't practical or logistically possible for Tatum to return to re-shoot a single new line, so the filmmakers asked us what we could do.

We first attempted to alter the character using traditional animation techniques, but the results just weren't syncing with the rest of the footage. We needed more realism, while still keeping the scene anchored in the gameplay. Charlatan was still very new at the time, but we decided to give it a try. We rendered both the animated Channing digidouble and the gameplay version of BadAss delivering the new dialogue, and then fed that into Charlatan, along with the plate footage of Tatum reciting the original lines. We also included a fair amount of still and live-action references of Tatum (and various other data), and Charlatan created an acceptable result. From there, our talented comp team carefully married the new Charlatan face with the rendered gameplay BadAss. Tatum later recorded the new dialogue, and the result was a seamless replacement.

Collaboration

I have always wondered about the way VFX studios collaborate with directors when working on films. How did Shawn Levy help you define the right direction? How did you collect the needed feedback?

David Cunningham: In visual effects, we have cinesync meetings with the filmmakers, which often includes the director. Kind of like Zoom, these meetings allow us to review work simultaneously, get feedback and accurately interpret their creative vision. Written feedback is great, but nothing beats looking at the same images at the same time. Free Guy’s client-side VFX supervisor (and DD alumnus) Swen Gillberg was the main point of contact for Digital Domain, and he was in regular communication with Shawn. After Swen and Shawn would meet and review our work, Swen would relay those creative notes.

It was really important to Shawn to nail the style of the gameplay, so he frequently sent over visual references. Sometimes Shawn would send videos; other times he'd collate cool imagery he was inspired by. We would then implement those changes and try some new things, all while keeping an open back-and-forth with him until we found the right look.

Challenges

What were your main challenges during production? Was there a particular sequence that made your team think for days?

David Cunningham: The opening shot of the film (the "BadAss Oner") and nailing the best look for the gameplay were two of the most important things we did, and also some of the most complicated. The Oner was actually ten shots stitched into one seamless, enormous, 2,600-frame beast. Along the way, a small cutaway of the character BadAss was added, which split up the shot. To blend it all together, our CG Supervisor Attila Szalma, Comp Supervisor Viv Jim, and Associate DFX Supervisor Eric Kasanowski did an amazing job of wrangling CG, plates, projections, panoramas, tiles, effects, and more, then stitching it into one shot. Special mention must go to Khari Anthony and Chun Ping Chao, who brought the entire shot together in comp.

The challenge of the gameplay was in making the digital avatars look very much like the actors we know, but not so close that we hit the uncanny valley as the film jumps from live-action to gameplay and back. We went through many different forms and stylized versions with Shawn Levy and his team before arriving at the gameplay look you see in the film. We referenced titles like Apex Legends, Grand Theft Auto, Fortnite, and Overwatch, with each having some influence on our final result. The recurrent theme, though, was GTA, both in looks and in the overall vibe of the game world the characters inhabit. In the end, we achieved the look by taking our digidoubles and dialing back the detail in the models, textures, and lookdev. We smoothed out features and adjusted proportions slightly, and the characters were also rendered with a small shutter angle (usually 90 and sometimes 45 degrees) to give that slightly jerky video game motion.
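The shutter-angle trick mentioned above comes straight from rotary-shutter film cameras: the shutter is open for angle/360 of each frame interval, so smaller angles mean shorter exposures, less motion blur, and a steppier, more video-game feel. A small sketch (not from any studio pipeline, just the standard formula) makes the numbers concrete:

```python
def shutter_open_time(shutter_angle_deg, fps=24.0):
    """Exposure time per frame for a rotary shutter: the shutter is
    open for angle/360 of the frame interval. 180 degrees at 24 fps is
    the classic filmic 1/48 s; 90 and 45 degrees halve and quarter the
    motion-blur trail, reading as 'jerkier' game-like movement."""
    return (shutter_angle_deg / 360.0) / fps

for angle in (180, 90, 45):
    seconds = shutter_open_time(angle)
    print(f"{angle:>3} degrees at 24 fps -> 1/{round(1 / seconds)} s exposure")
```

At 24 fps this prints exposures of 1/48 s, 1/96 s, and 1/192 s, which is why the 90- and 45-degree renders feel noticeably crisper frame to frame.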

The construction site sequence was also challenging. There’s just a lot going on there. There were many moving parts, both figuratively and literally, and establishing a consistent and acceptable movement for the characters as they bounded up steps and leaped between floors while the landscape kept moving required a lot of attention. Ultimately, we needed to blend video game physics with photorealism, which is a thin line to tread. The sheer number of refractive elements in that world was also huge, and with the scene almost entirely lit with indirect light, getting render times down was a challenge. 

Bob White supervised that sequence and did an incredible job of pulling it all together, with Lighting Lead Joseph Hayden utilizing de-noising techniques both in V-Ray and Nuke to help get render times into a more manageable realm. Our Lookdev Lead Brent Elliott was also instrumental in making sure our assets were as optimized as possible.

Pandemic

How did the pandemic affect production?

David Cunningham: When quarantine hit, there wasn’t as long an interruption as we initially anticipated, thanks to a number of things all going right. Digital Domain’s management and systems teams completed the Herculean task of transitioning its entire staff to working from home – and not just the offices in the U.S. and Canada where the bulk of the artists are, but all nine offices around the world. 

For Free Guy, we were already deep into the work when the lockdown began, so we had already established a strong relationship internally and with the filmmakers – everyone already knew each other and had a rapport. Our daily schedule remained roughly the same; we were just communicating remotely. It was a little bit of a different process for projects that began after lockdown, as people needed to meet and interact entirely remotely, but everyone has adapted quickly and the quality of the work has been as strong as ever. 

Overall, even with the shift to working from home, Free Guy was a very fun project to work on. It allowed the teams at Digital Domain to really stretch their legs creatively, and do things we had never done before. On top of that, it’s always nice to work on a film that is so much fun to watch!

David Cunningham, CG Supervisor at Digital Domain

Interview conducted by Arti Sergeev
