Analyzing the Cinematics Industry: Emerging Technologies & Workflows

Erasmus Brosdau, CEO and Director at Black Amber Digital, shared his experience of directing and producing high-quality realistic cinematics and reviewed the various technologies used in production: Motion Capture, real-time and offline rendering, and deep learning systems.

A Look Inside the Industry: A Director's Point of View

Cinematography and animation are very closely connected. As a director, I need to know all the poses and animations of my characters, the objects in the scene, etc. in order to choose the best framing for the direction I want to go. Many years ago, we all had to work more or less blindly, as we didn't have a chance to see a really good representation of the final picture, especially when directing a Motion Capture shoot. So while the animation tools themselves have stayed more or less the same, the way we can use them in other software nowadays has improved quite a lot, especially in real-time engines like Unreal Engine 4.

During Motion Capture shoots, I usually show the actors a pretty decent-looking pre-visualization to give them an idea of what I'm looking for. Often, I use simple animation tools like Mixamo to build a quick preview of the entire scene. This way, I can more or less kit-bash my animations and get a feel for the pacing, dynamics, and cinematography. It lets me see, before the Motion Capture shoot, whether my ideas for the cinematography will work out or whether I need to add more cuts, change camera angles, etc.

On some Motion Capture shoots, we even bring the 3D characters to the set for the preview and let the actors see their 3D avatars on big screens. They see themselves inside a 3D environment with decent lighting, and it's just much easier to communicate the direction and the atmosphere of the scene that way.

In recent years, progress in animation and real-time rendering has been driven toward one goal: being able to constantly see your final picture. The more I can see animations being adjusted in real time, be it with Motion Capture or tools like Live Link, the better I can adapt my cinematography for the best framing. The entire team is also able to prevent later issues, such as final characters sticking out of the frame.

What Technology to Choose?

When talking to different people in the CG industry like artists, managers, producers, or investors, I see that many are keen on using the latest technology for their next project. Everyone agrees it would be great to be the first one to use feature X or technology Y in production just for the sake of being pioneers in those fields. While that is certainly nice for the team involved, the audience usually doesn’t care too much about it.

When you make a CG movie, you have to captivate the audience with interesting stories and characters - and the audience usually doesn't care whether it has been rendered in real-time or with Arnold. It's the same principle by which a movie with amazing CGI can still be a very bad movie: visual eye candy doesn't make a weak plot better, no matter how hard it tries to distract. All of the VFX are there to help narrate the story, and depending on the script and direction, these tasks can indeed become super complex very quickly. But once they are added to the movie, everything can come together into an amazing experience for the audience.

Especially in times like today, when the demand for content from the various streaming platforms is incredibly high, every studio needs to figure out the best pipeline to tackle the immense workload of CG tasks. There is no real right or wrong; every project may require something different. In this regard, it's great to see that every year many software providers deliver new tools that can speed up production massively. Smaller companies in particular have produced an incredible number of creative new tools and helpful features, while the big ones tend to just increase prices and introduce subscription-based pricing for existing tools. This has led companies to reconsider which apps they are going to use for future projects and in which apps they invest their research.

Blender and Unreal Engine are, of course, among the main tools that have made their way to the top. Unreal Engine constantly shines with brand-new tools and new technology like raytracing. Blender, too, came up with a new real-time render engine (Eevee) and more sophisticated tools for 3D production. While not every tool can be used in production right away, it's great for studios to test them and see the potential of what's to come in the next few years.

Facial Animation

Facial animation is undoubtedly one of the most complex and difficult animation tasks in the industry. If you're looking for photoreal animation, the best way to achieve it has long been to capture an actor's performance and transfer it onto the CG character's facial rig. For cartoon productions, keyframe animation is usually still the way to go.

My projects are all photoreal, so I'm always looking for the best quality. I used to build my own facial rigs, but luckily I now work together with a highly skilled facial animation artist who takes over the blendshape modeling and facial rigging. We then take the GoPro recordings of the actors and process them in Dynamixyz, a software solution that specializes in these tasks. This gives us a great starting point: an automated facial animation that then usually gets fine-tuned by hand to get the best out of it.

All our projects are rendered in real-time, so the facial rigging is slightly less complex than in offline-rendered productions, since our setup needs to run at 60 frames per second. In Unreal Engine, we have a very complex skin material for the face that automatically blends in additional normal maps for wrinkles once the specified blendshape gets triggered. Of course, all our characters share exactly the same topology and UV layout so that we can reuse as much as possible when working on additional characters.
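
To make the mechanism concrete, here is a minimal sketch in Python/NumPy of what such a wrinkle setup effectively computes: the weight of the driving blendshape is remapped to a blend factor, the wrinkle normal map is interpolated over the base normal map, and the result is renormalized. The function names and thresholds are hypothetical, and in production this logic lives in the Unreal material graph rather than in Python.

```python
import numpy as np

def blend_wrinkle_normals(base_normal, wrinkle_normal, blendshape_weight,
                          trigger_threshold=0.0, max_strength=1.0):
    """Blend a wrinkle normal map over the base normal map.

    base_normal, wrinkle_normal: float arrays of shape (H, W, 3),
    tangent-space normals remapped to [-1, 1].
    blendshape_weight: current weight of the driving blendshape in [0, 1].
    """
    # Remap the blendshape weight to a blend factor, with an optional
    # dead zone so tiny weights don't trigger the wrinkles.
    t = np.clip((blendshape_weight - trigger_threshold) /
                max(1.0 - trigger_threshold, 1e-6), 0.0, 1.0) * max_strength

    # Interpolate between the two normal maps, then renormalize so the
    # result is still a valid unit normal per pixel.
    blended = (1.0 - t) * base_normal + t * wrinkle_normal
    length = np.linalg.norm(blended, axis=-1, keepdims=True)
    return blended / np.maximum(length, 1e-6)

# Hypothetical usage: drive the blend with a smile blendshape's weight.
H, W = 4, 4
base = np.dstack([np.zeros((H, W)), np.zeros((H, W)), np.ones((H, W))])
wrinkle = base.copy()
wrinkle[..., 0] = 0.3  # fake some tangent-space wrinkle detail
result = blend_wrinkle_normals(base, wrinkle, blendshape_weight=0.8)
```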

Motion Capture: Advantages and Disadvantages

Motion Capture is essential when going for a natural and photorealistic look. In one day of Motion Capture, you can record around 10 minutes of animation for multiple characters - a significant pace, especially when the facial performance has been captured, too.

For as long as I've worked in the industry, I've recorded all Motion Capture with our friends at Metricminds in Frankfurt am Main, Germany. They provide a very large capture volume, state-of-the-art cameras, and a great team that helps with every scenario you come up with. Often we need different props the actors can interact with - weapons, ramps, or even wall segments they can lean against. Metricminds has all of this available, so it's always a great experience to quickly try out new things and to make the actors' job easier by quickly building a set with the most important elements. When I started my own company, it was clear that I would continue shooting all Motion Capture for my projects at Metricminds, as we have become a really great team and are constantly trying to push the quality even further.

Another advantage of Motion Capture is that you are dealing with human actors instead of an animator who has to animate a walk cycle or even more complex motions. Once you see the actors perform, you instantly get a feeling for the animation, and you have a much better chance of spotting where it doesn't work or where the body language needs another take for a subtle performance. You are talking directly to people, and as a director, I can quickly show them what I mean by performing the scene myself.

A further advantage of Motion Capture is that it's very often an iterative process. Every actor brings in their own character and suggests new ideas. Often, I hear a different opinion or an idea from an actor that I hadn't thought about, and it can quickly develop into much-improved acting that turns out far better than what was originally written in the script. It's an organic process: everyone on the shoot tries to make the scene as good as possible, and the actors, who see everything for the first time, quickly bring a fresh view to their characters.

The disadvantages, on the other hand, usually involve the technical setup. Often, the reflective markers fall off, or the actors are limited by the suits they are wearing. When shooting facial Motion Capture, the actors wear a helmet with a long stick to which a GoPro is attached to film their faces. Once a scene demands that two actors get close together, this gear very often gets in their way, and you have to think of ways to make it easier for them.

While the Motion Capture process delivers a lot of animation quickly, the results would be almost useless without the help of additional animators. All this data needs cleaning and refinement - often, you can't capture the fingers, or you get random jitter on bones. Additionally, small changes might be needed to the props the characters interact with, which also requires tweaking by an animator.
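
As a small illustration of that cleanup step, the sketch below smooths jitter on a single animation channel with a low-pass Butterworth filter from SciPy. The frame rate and cutoff values are assumptions; real cleanup happens in dedicated tools, handles rotations more carefully (quaternions rather than raw Euler angles), and goes well beyond a single filter.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def smooth_bone_curve(samples, fps=60.0, cutoff_hz=6.0, order=2):
    """Low-pass filter one animation channel (e.g. a bone's Euler X rotation).

    samples: 1D array of per-frame values.
    cutoff_hz: frequencies above this are treated as capture jitter.
    filtfilt runs the filter forward and backward, so no phase lag is
    introduced into the animation timing.
    """
    nyquist = fps / 2.0
    b, a = butter(order, cutoff_hz / nyquist)
    return filtfilt(b, a, samples)

# Hypothetical usage: a noisy 2-second rotation curve at 60 fps.
t = np.linspace(0.0, 2.0, 120)
clean = 30.0 * np.sin(2.0 * np.pi * 0.5 * t)         # intended motion
noisy = clean + np.random.normal(0.0, 1.5, t.shape)  # capture jitter
smoothed = smooth_bone_curve(noisy)
```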

In the end, it's really a combination of both. Every animation might need a different workflow: some can be captured completely, some need a lot of technical setup for performing stunts, and some might be so complicated, or even impossible to capture, that an animator will do most of the work. It's good to have multiple options to get the job done.

Are 100% Photorealistic Results Possible?

The uncanny valley appears very often and can lead to many problems in the final result of your film or game. We live in a time when technology has become good enough that we can replicate digital humans quite convincingly, but by far not perfectly - especially in real-time. Even when utilizing all of the available tools and skills, it's still very difficult to make an absolutely photoreal human.

We often say it's easy to make a character look 90% realistic, but every additional percent takes an incredible amount of work and money. The more realistic your character looks, the more easily the audience can identify with it and build a connection. However, once you are at around 92%, this acceptance drops dramatically: people get irritated by the fact that the character doesn't look completely photoreal. The subtle flaws are what make the character look really weird, because they stick out in the final image when everything else is almost perfect.


The biggest problems occur during animation; making a character look photoreal in a still frame is much easier. Showing your 3D character in a black-and-white still frame achieves even higher realism, since translucency and other subtleties of the skin can't be judged without color. Once a person speaks or shows emotion, a huge variety of muscles moves underneath the skin. Besides that, the entire skin deforms, which is visible in the stretching of pores and the forming of wrinkles. To make a digital human perfect, all of these subtle features need to be understood and reproduced, and that is especially difficult in real-time, because all of these details take a lot of processing power. We try to avoid these issues through even more observation, capturing as much as possible from the actor - every subtle detail is important and can change the look significantly.

These days, you can see a lot of deepfakes created with deep learning systems. It's very interesting that a computer algorithm seems to be much more powerful at creating believable human facial animation than an entire squad of 3D artists. It's very likely that AI and deep learning will take over a significant part of facial animation production in the future and make the process much easier and cheaper.

Differences Between Real-Time and Offline Rendering Workflows

Very often, when I talk to clients who are interested in using Unreal Engine for their cinematics, they ask me if the software is capable of rendering out all the render passes like diffuse, specular, glossiness, etc. When people see my cinematics, they often think that UE4 can be a direct replacement for their render engines like Arnold or V-Ray, yet the workflow is really completely different.

I have worked in both offline-rendered and real-time productions, so I know each workflow in detail and understand why it's difficult for people working on movies to fully understand game engines at first. The technology and workflows are really different, although both worlds produce 2D images that can look nearly identical. Of course, the biggest advantage of real-time is that it always shows you the final image. That's something offline rendering cannot deliver; there, you only ever get an interactive, noisy preview. On the other hand, offline rendering produces absolutely precise, physically accurate results that game engines are unable to achieve, even with the added functionality of raytracing.

Both worlds are slowly merging, but the way we work on real-time cinematics is still quite different, as we have to import and export a lot of data. Animations are still created in a DCC tool and then imported as animation clips onto your character inside UE4, a process that can quickly create bottlenecks. However, once it is complete, you can assemble new animations in a kitbash-like fashion and produce cinematic cuts much faster, because you see everything in final quality - lights, shadows, environments; it's all there and ready to be changed at a glance. Once I understood the new workflows of real-time production, it brought back the fun: I could see everything in final quality and didn't have to wait multiple hours for a render just to realize that I needed to move a light slightly to the left or enable a certain layer.
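
As one example of that import step, here is a sketch using Unreal's editor scripting (the unreal Python module available inside the UE4 editor) to batch-import an FBX animation clip onto an existing skeleton. The file paths and asset names are hypothetical, and a real pipeline would add error handling and per-clip import settings.

```python
# Runs inside the Unreal Editor's Python environment (UE4 editor scripting).
import unreal

def import_anim_clip(fbx_path, skeleton_path,
                     destination="/Game/Cinematics/Anims"):
    """Import one FBX animation clip onto an existing skeleton asset."""
    # Configure the FBX importer: animation only, bound to a known skeleton.
    options = unreal.FbxImportUI()
    options.set_editor_property("import_mesh", False)
    options.set_editor_property("import_animations", True)
    options.set_editor_property("skeleton", unreal.load_asset(skeleton_path))
    options.set_editor_property("mesh_type_to_import",
                                unreal.FBXImportType.FBXIT_ANIMATION)

    # Wrap the file in an import task so it runs without UI dialogs.
    task = unreal.AssetImportTask()
    task.set_editor_property("filename", fbx_path)
    task.set_editor_property("destination_path", destination)
    task.set_editor_property("options", options)
    task.set_editor_property("automated", True)
    task.set_editor_property("save", True)

    unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])

# Hypothetical usage: one cleaned-up mocap take exported from the DCC tool.
import_anim_clip("D:/mocap/shot_010_take_03.fbx",
                 "/Game/Characters/Hero/Hero_Skeleton")
```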

With real-time technology getting better by the day, it's also a very exciting industry to work in. In offline rendering, the technology is basically mature and there are not many exciting updates, while in real-time there are fundamental game-changers almost every year that open the way to even more realistic results. So while the first steps in real-time cinematics might be confusing due to certain limitations and workflow changes, it's absolutely the way to go for future productions. I believe it will be utilized more and more, converging with offline rendering at some point.

AI and Deep Learning: Future Prospects

AI-powered systems are surely going to be the biggest game-changer for the entire industry, real-time and offline-rendered alike. While we are only witnessing the very first baby steps, it's very clear that AI and deep learning systems will speed up production like no other tool.

Almost everything can be automated: creating UVs, baking assets, generating dirt and damage, adding variations to assets, etc. Those are the first tasks that are rather easy to solve with AI. In future productions, AI will even improve entire animations; body motion and especially facial animation will be greatly improved or automated completely. The potential of AI is almost endless, and I can imagine the process of rendering in general disappearing, replaced by an AI system that outputs the image lightning-fast. An AI-based render engine wouldn't need to render the image; it would just "know" what the final image should look like and let you manipulate everything in real-time. I have no doubt that AI-driven systems will change the entire industry to a massive extent.

On the downside, it's very likely that many jobs will be cut, as productions will require much smaller teams and budgets. All of this is still far from production-ready, but I try to look at the positive outcomes: one-click automation of all the tedious tasks will make my life much easier.

Erasmus Brosdau, CEO and Director at Black Amber Digital

Interview conducted by Ellie Harisova
