Integrating Real-Time Rendering into Film Production Pipeline

Han Yang talked about the production of his personal film The Pantheon and the way he worked with real-time technology, mocap, and camera.

Introduction

My name is Han Yang and I'm currently working at Method Studios as a CG Supervisor. On the side, I'm a 3D generalist and indie director. It's been almost 7 years since I started in the VFX industry, and my first job was as a 3D animator. I always wanted to do more than animation and even create my own film, so I never stopped making personal work outside of animation. For most of my career, I worked as an animator or layout artist. Over the years, I had the chance to work on feature films like Logan, Star Trek: Discovery, Aquaman, and Pikachu. It was a great experience to be involved in VFX production, and I learned a lot across all disciplines. Later on, that definitely helped me with making my own film, as I knew the details of every step of the production.

Career

I actually worked at Method previously and then left for some time to explore other opportunities, mostly in the same field of animation and layout. I didn't stop making my personal film The Lander, however, since I have always been a sci-fi fan and wanted to explore cinematic storytelling. It took about a year and a half to finish. Fortunately, after I released The Lander, Method noticed the unique workflow I used, integrating Unreal rendering into CG production.

They were building a team targeting game-related content and trying to integrate Unreal into the workflow, too. At that time, I was also trying to expand my involvement in the production creatively so all stars aligned. They brought me on board and gave me a lot more creative freedom. Right now our team is working on some really exciting projects for games which I hopefully can show you guys soon!

Real-Time Rendering in VFX for Films

VFX work in films is still very time-consuming and labor-intensive. On top of that, the layers of approval and the number of iterations a shot goes through made me feel there was very little ownership of the work I did. When Unreal was made free, all of a sudden there was this amazing tool available to every individual artist for rendering beautiful imagery without the need for a render farm. That's when I realized I could make a 60-70-shot film with the resources I already had, which almost freed me from the offline VFX workflow. I could still use most of those techniques, like camera, animation, and compositing, and merge them with real-time rendering to cut the render time to almost nothing. It's the best of both worlds coming together.

When Unity released the film Adam, made by Oats Studios, I realized that people were already exploring real-time filmmaking, and it looked absolutely stunning. That's when I started shifting my workflow more towards real-time, and I'm really glad to see that so many more artists have picked up Unreal and that the community is expanding so rapidly!

With virtual production being heavily featured in The Mandalorian, I believe real-time rendering will find its way into film and visual effects production faster than we think. There is still a visible difference in render quality between UE4 and offline rendering, but I think it's just a matter of time before VFX adopts real-time as an essential part of the workflow. Currently, it's mostly used for previsualization and on-set production, but as ray-tracing technology gets better and the hardware gets more powerful, I don't see why UE4, or real-time rendering in general, can't gain a foothold in VFX. I'm really looking forward to seeing the evolution of the industry and how the line between video games and film production becomes blurrier.

About The Pantheon

The story for The Pantheon is pretty simple: during the age of Black Magic, stories were told that a space crystal is the key to unlocking a portal between two worlds. Whoever acquires the key can unlock the portal at a lost temple called The Pantheon. The story begins with the battle between the Archer and the Orc.

The Pantheon was started right after The Lander, back in 2019. I wasn't able to play around with a lot of complex character animation in The Lander, so I wanted to push it further. Also, after making a sci-fi movie, I noticed that most of the action you can stage in such projects is just gunfights, which is less interesting compared to close hand-to-hand combat. Therefore, The Pantheon aimed to feature close combat animation, facial animation, and higher-fidelity UE4 rendering. That's why I landed on a sword fight and a fantasy theme - the story was simpler, but I was able to explore more action sequences and dynamic camera movement.

During the production, I always tried to keep the energy of the film really high and impactful. That means fast pacing and quick cuts, but nothing too jarring. I also think the sound design and music really helped bring that energy out. The sound team did an incredible job, and the film score is right on point.


Software used:

  • Maya for animation, layout, and CFX
  • Unreal Engine for rendering (90% of the film was fully rendered in Unreal)
  • V-Ray for close-ups, composited with Unreal-rendered backgrounds
  • Xsens and iPhone for body and facial motion capture
  • Nuke for compositing

Visual Quality vs Time Budget

Finding a balance between visual fidelity and time budget was really challenging throughout the project. Unreal is great at rendering things like environments and atmosphere, but when it comes to skin and characters, it's still pretty challenging to get high-fidelity close-ups. It's also tricky to get proper, accurate light/shadow interaction between objects in Unreal, even with ray tracing turned on. So I had to plan my shots really carefully and smartly. For close-ups, I used UE4 to render the background plate and matched the lighting in V-Ray for the foreground. That way, I could reduce the render time while keeping the details and nice lighting on the characters' faces.
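The foreground/background split works because layering a premultiplied CG foreground over a rendered plate is a standard "over" composite. The actual comp was done in Nuke; this NumPy sketch just shows the underlying math, with made-up single-pixel values for illustration:

```python
import numpy as np

def over(fg_rgb, fg_alpha, bg_rgb):
    """Premultiplied 'over' composite: result = fg + bg * (1 - alpha).
    The V-Ray foreground (fg) holds the character; the UE4 plate (bg)
    shows through wherever the foreground alpha is less than 1."""
    return fg_rgb + bg_rgb * (1.0 - fg_alpha)

# Toy example: a half-transparent red foreground over a blue background.
fg = np.array([0.5, 0.0, 0.0])   # red, premultiplied by alpha = 0.5
alpha = np.array([0.5])
bg = np.array([0.0, 0.0, 1.0])   # blue background plate
print(over(fg, alpha, bg))       # red foreground plus attenuated blue
```

The same formula applies per pixel across full frames; matching the lighting between the two renders is what makes the seam invisible, not the merge operation itself.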

Character Animation

For the character animation, I used Xsens for body mocap - most of the animation came from that first and was then heavily edited, especially for the fighting sequences. Since I only had one suit, I had to perform each action twice and then try to match the timing of every punch. Also, I did all the mocap in my living room, so there was literally no space for running. I had to stitch multiple clips together to make a proper performance.
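Stitching clips like this usually means crossfading the end of one take into the start of the next over a short overlap. The sketch below shows the idea on plain float channels; a real pipeline would blend joint rotations with quaternion slerp, and the clip structure here is invented for illustration:

```python
def stitch_clips(clip_a, clip_b, blend_frames):
    """Stitch two mocap clips with a linear crossfade over the overlap.
    Each clip is a list of poses; a pose is a list of float channels.
    The last `blend_frames` poses of clip_a are blended with the first
    `blend_frames` poses of clip_b."""
    out = clip_a[:-blend_frames]
    for i in range(blend_frames):
        t = (i + 1) / (blend_frames + 1)                  # 0..1 across the overlap
        pa = clip_a[len(clip_a) - blend_frames + i]
        pb = clip_b[i]
        out.append([a * (1 - t) + b * t for a, b in zip(pa, pb)])
    out.extend(clip_b[blend_frames:])
    return out

# Two single-channel clips stitched with a one-frame blend.
print(stitch_clips([[0.0], [0.0], [0.0]], [[10.0], [10.0], [10.0]], 1))
```

In practice, lining up the overlap so a foot plant or punch impact falls on the same frame in both takes matters far more than the blend itself.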

Unreal has a very specific way of handling character rigs, so I had to cache the face and cloth as Alembic for them to run in Unreal. I would say that was the most challenging part of the workflow, so hopefully, Epic can come up with an Alembic streaming solution in future updates.

The facial animation was animated on top of a basic mocap pass from an iPhone. It's a pretty straightforward process: the iPhone has a depth camera that can capture a facial performance and turn it into FBX through the Face Cap app. It gave me a really good base layer for timing, and I just needed to animate on top of that. The character models were purchased from TurboSquid and heavily modified for story purposes. My friend David Jiang, who is an awesome character TD, helped me with some of the CFX and rigging setup.
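Animating "on top of" a mocap base layer typically means adding sparse hand-keyed offsets to the captured per-frame blendshape weights. This is a minimal sketch of that layering, with made-up data structures (the blendshape name and frame layout are assumptions, not the Face Cap format):

```python
def layer_facial_animation(mocap_weights, keyed_offsets):
    """Add hand-keyed deltas on top of a facial mocap base layer.
    mocap_weights: {frame: {blendshape_name: weight}} from the capture.
    keyed_offsets: sparse {frame: {blendshape_name: delta}} from the animator.
    Final weights stay clamped to the valid 0..1 blendshape range."""
    result = {}
    for frame, weights in mocap_weights.items():
        offsets = keyed_offsets.get(frame, {})
        result[frame] = {
            name: min(1.0, max(0.0, w + offsets.get(name, 0.0)))
            for name, w in weights.items()
        }
    return result

# Frame 1: push the captured jaw a bit further open for a stronger read.
print(layer_facial_animation({1: {"jawOpen": 0.4}}, {1: {"jawOpen": 0.3}}))
```

Keeping the mocap and the keyed layer separate means the capture can be re-imported or retimed without destroying the animator's adjustments.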


Basically, I had two rigs for every character: one for animating and the other for character FX. When the animation is done, the rig is replaced with the CFX one for cloth simulation. We feed the animated mesh into the simulation so we can direct the movement and the key poses we want the cloth to hit.
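Feeding the animated mesh into the sim to "direct" it usually amounts to pulling the simulated points toward an animated target with some follow weight. A toy sketch of that blend (the function name and point layout are my own, not any particular solver's API):

```python
def direct_simulation(sim_points, target_points, follow_weight):
    """Pull simulated cloth points toward the animated target mesh.
    follow_weight = 0.0 keeps the raw simulation; 1.0 matches the
    animation exactly. Solvers expose this per-point so an artist can
    pin key areas to hit specific poses while the rest stays dynamic."""
    return [
        [s + (t - s) * follow_weight for s, t in zip(sp, tp)]
        for sp, tp in zip(sim_points, target_points)
    ]

# One point, halfway between the sim result and the animated target.
print(direct_simulation([[0.0, 0.0, 0.0]], [[2.0, 2.0, 2.0]], 0.5))
```

Real cloth solvers apply this as a soft constraint force inside the solve rather than a post-blend, but the artistic control is the same: a dial between physical accuracy and hitting the animated pose.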

Working with Camera

Most of the work I did for feature films involved animation and cameras, so I'm pretty comfortable animating and framing CG cameras. On top of that, I heavily used the iPhone with the UE and Maya virtual camera tools to shoot most of the handheld shots. I made a previs before starting production, and it was all shot in Unreal using Vcam. I feel the most challenging part is finding the sweet spot between the amount of shake, the pacing of the action, and the edit. I tried to keep the flow of the action smooth and easy for the audience to follow, while adding a bit of shakiness to increase the tension.
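When the shake comes from a handheld virtual camera, it is captured for free; when it has to be dialed in procedurally on top of a keyed camera, a common trick is summing a few low-frequency sines with incommensurate frequencies so the motion never visibly repeats. A sketch, with amplitude and frequencies invented purely for illustration:

```python
import math

def handheld_shake(frame, amplitude=0.02, fps=24.0):
    """Procedural handheld shake: layered sine waves at unrelated
    frequencies, returned as small pan/tilt offsets in radians that get
    added on top of the keyed camera rotation. Tune amplitude and
    frequencies to taste; these values are arbitrary examples."""
    t = frame / fps
    pan = amplitude * (math.sin(1.3 * t) + 0.5 * math.sin(3.7 * t + 1.0))
    tilt = amplitude * (math.sin(1.1 * t + 2.0) + 0.5 * math.sin(4.3 * t))
    return pan, tilt

# Offsets over the first second of a 24 fps shot stay small and smooth.
for frame in range(0, 24, 8):
    print(frame, handheld_shake(frame))
```

Production tools typically use fractal noise rather than plain sines for a more organic feel, but the principle of layering frequencies is the same.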


One of the trickiest parts was finding the perspective from which the story is told. I wanted to give both characters an equal amount of screen time but also tried to make the audience root for one of them. Halfway through the film, I tried to shift the perspective from the girl to the Orc by slowly changing how each of them was framed. It's really hard to articulate, though - it's more in the flow.

For the action sequences, I would always shoot all the animation and performances first and time them well before jumping to shooting cameras. When shooting, I would do 5-6 different angles for wide, mid, and close-up coverage, then put them into an edit to find the best flow. All the camera animation was done in Maya, where it's easier to edit the keys after shooting to find the best framing. Then, I imported it into Sequencer to sync with the character animation.

Challenges

The overall production took about 10 months; in the last 2 months, I mostly focused on music and sound. There were many challenging aspects to the production, but aside from the technical ones, I feel that preserving the motivation and drive to keep going was the toughest part. I have a day job, so the time I had to work on the film was really limited. Working an extra 5-6 hours after your job every day and pumping out 2-3 shots a week for 10 months can become really exhausting at some point. I'm really proud of the final product and glad that I didn't give up halfway through (not saying I didn't think about it, though).

There were a lot of creative challenges, too, from the flow of the story and the action choreography to the color palette and editing. There were times when I was super insecure about the creative decisions I made and kept feeling there was a better solution somewhere. In the end, I figured out that the only thing you can do in those situations is trust yourself and your vision and move on. Feedback from other artists is great, for sure, but at the end of the day, it's your work, and it's you who puts your heart into bringing it to life. So it's really important to take feedback selectively and trust your own vision. As an artist, it's really important for me to get better not only at my craft but also at being confident in the creative decisions I make.

As for the most interesting part of the production, I feel the sound and music were the most enjoyable. I worked on the film for about 9 months before the sound and music production started, so by that time, to be honest, I had gotten sick of watching those shots over and over in silence. When the composer gave me the first pass of the music, it instantly turned it into a new film! I had a totally different perspective watching it, and it felt so fresh. Working with the sound department was also really fun. I had a couple of sessions with them, and it was just amazing to see how much sound design can bring a film to life. I'm really grateful for all the work those talented artists did. That was a lot of fun.

Pantheon Credits

  • Directed by Han Yang
  • Music: Andrea Bellucci
  • Sound VFX: Francois Huber, Maria Camarena, Chengshuo Hou, Ziting Zhao
  • Special Thanks: David Jiang, Mengyu Zhang

Han Yang, CG Supervisor at Method Studios

Interview conducted by Kirill Tokarev
