W A K E: Real-Time Workflow in UE4 for Animated Short Film

Jeffy Zachariah talked about the production of his short animated film W A K E: work with Sequencer in UE4, camera setup, asset production, and more.

Introduction

Hi! My name is Jeffy Zachariah, and I am from Trivandrum, India. I am a CG Cinematographer and Director, and in the past, I have also worked as a Level Designer and Lighting Artist. I have worked at a few indie studios, at tech companies such as AR Wall (on Hollywood movies such as Ready Player One), at AAA companies like Ubisoft, and most recently with TiltLabs, kickstarting their Virtual Production division.

Here are some of my previous works: 

I began my early days as a technical artist, primarily focusing on making mods and prototypes with UDK. When I got into the UE4 beta, I began to dig into level design and worked on some small projects. Once Epic started releasing content for free, such as the Infinity Blade assets, I took it as an opportunity to dig into set dressing and learn level art and lighting. This interested me, and I spent the next few years practicing the craft; along the way, I was introduced to Modo, with which my appreciation for 3D modeling grew.

Later, I felt that I had been working with static scenes for a while and decided it was time to take the next big step. Right now, I'm focusing more on making cinematic content and real-time CG cinematography.

W A K E: Idea

I look at animations as stories or expressions told through motion. You can convey a story without speech or dialogue, using actions alone. The phrase "actions speak louder than words" is true here: a posture can say more about the emotions a person is going through than the person talking about those emotions to the audience. That is its beauty to me. This isn't limited to the animation medium; any film can use this approach, but we tend to see such themes more often in 3D animation.

I’ve been thinking about the possibility of doing a short movie, or a music video using real-time rendering in UE4 for a while. My plan was to come up with a theme where I could convey expressions with minimal actions. I’ve always been a fan of sci-fi, robots, and sorts. And so I turned to the designs of one of my favorite concept artists, Rafael Amarante, for this project.

I was looking for very subtle motions because the theme I was after was very bleak and grim. My friend Amir Buganov, who was also the animator on the project, provided the animations that showed this well.

Working in Real-Time

The character was around 1.6 million triangles, so we created a proxy mesh, rigged it, animated the skeleton, and then transferred the skeletal animation onto the original mesh, just like in film production.

I think the most important aspect was that we could review our animations in real time in the engine, in the final lit environment, and see how the things around the character interact with each other. This saved a lot of time: instead of reviewing playblasts and then taking them to the renderer, reviews and iterations were simultaneous, and that was a huge advantage.

A good example is how we were shooting. The whole movie is slowed down to about 55% of real-time speed; it "moves" slowly. I was not sure how to animate at that pace and then match the camera to the movement; I also had to keep in mind that environment effects such as dust FX, light FX, etc. had to play at this slowed speed, too. So I decided I'd slow it down in post-production during editing. As a result, I had to do multiple takes per shot and cross-check in my editing tool to see if I was getting the effect I needed. And because I was working in real time in UE4, I could churn out takes every 2 minutes without an issue. I could identify where I wanted improvements in my camera animation on the spot, without having to wait for renders.
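For anyone planning a similar retime in post, the underlying math is simple. Here is a minimal sketch of it (the durations and helper names are hypothetical, not from the project):

```python
# Sketch of the retiming math behind slowing footage to 55% speed in post.
# All durations here are hypothetical examples.

PLAYBACK_SPEED = 0.55  # final film runs at 55% of captured speed

def stretched_duration(captured_seconds: float, speed: float = PLAYBACK_SPEED) -> float:
    """Duration the footage occupies in the edit after the slowdown."""
    return captured_seconds / speed

def required_capture(target_seconds: float, speed: float = PLAYBACK_SPEED) -> float:
    """How long a take must be captured in-engine to fill a target shot length."""
    return target_seconds * speed

print(stretched_duration(10.0))  # a 10 s take stretches to ~18.18 s in the edit
print(required_capture(18.0))    # an 18 s shot needs roughly a 9.9 s capture
```

The same ratio is what you would feed an editing tool's speed control (or, say, ffmpeg's `setpts` filter) to slow the clip down.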

An example of Shot 2's raw footage: it took about 9 takes to get this shot the way I wanted, and it was all done and finalized within 20 minutes, roughly 2 minutes per take.

Shot 2 Final footage:

Specifics of Animation Work in UE4

UE4 is a real-time rendering engine aimed primarily at game production; for animation, it has mostly been used for in-game animations and cutscenes. With the introduction of PBR, game content has moved closer to CG quality in a very short time. And now, with ray tracing making its move into the real-time rendering space, I believe it's only a matter of time before real-time pipelines overtake offline CG pipelines.

UE4 doesn't provide animation tools as such out of the box. Control Rig is a new addition in the latest release, but it's still a work in progress if you ask me. I would still stick to my DCC app for rigging and animation work.

However, what UE4 does provide is some of the most robust tools for cinematic production in Sequencer. Sequencer is the successor to UE's legacy cinematic tool, Matinee. It lets you work on a shot-by-shot basis inside a master sequence, non-destructively, within the same level. All the shots in the short movie were lit and shot this way, and that was easy and controllable with the help of Sequencer. It lets you control not only your camera work for each shot but also lighting, background elements, FX elements, you name it.

Here’s an example. This is the lighting setup for Shot 5:

And here is the final output:

Shot 6 will have a different lighting setup to suit the scenario in which the light from the sun ball hits the character. Here is the light setup for Shot 6, which is literally the same map and character at the same position:

And here is the final output:

This was all done seamlessly within Sequencer including animating the light rays to fall on the character.

There was even more that was done with Sequencer alone. The power bulb animation was done entirely in Sequencer by simply animating the emissive values.

Here is a demonstration video:
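Conceptually, keyframing an emissive scalar in Sequencer boils down to interpolating a material parameter between keys over time. The sketch below shows that idea in plain Python with linear interpolation; the key times and values are made up for illustration, not taken from the project:

```python
# Minimal sketch of scalar keyframe evaluation, the idea behind
# animating an emissive value in Sequencer. Keys are hypothetical.
from bisect import bisect_right

def evaluate(keys, t):
    """keys: sorted list of (time, value) pairs; returns the linearly
    interpolated value at time t, clamping outside the key range."""
    times = [k[0] for k in keys]
    if t <= times[0]:
        return keys[0][1]
    if t >= times[-1]:
        return keys[-1][1]
    i = bisect_right(times, t)
    (t0, v0), (t1, v1) = keys[i - 1], keys[i]
    alpha = (t - t0) / (t1 - t0)
    return v0 + (v1 - v0) * alpha

# Hypothetical pulse of the power bulb: off -> bright -> dim
emissive_keys = [(0.0, 0.0), (1.0, 20.0), (2.0, 5.0)]
print(evaluate(emissive_keys, 0.5))  # 10.0
print(evaluate(emissive_keys, 1.5))  # 12.5
```

Sequencer additionally supports cubic interpolation modes per key, but the principle is the same: the animated parameter is driven straight into the material each frame.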

Camera Work

The camera work in this short film was kept simple. Cameras were animated within Sequencer, and only on their translation transforms.

They always maintained a fixed focal length and distance, because in every shot there was one subject in focus: the robot, the sun ball, the robot's energy bulb, etc. The camera always focused on these elements and told the story from that point of view. Because of this, the camera didn't move very much and mostly stood still, except for a couple of shots during the sun-ball reveal and towards the end, where a robot walks away, for dramatic effect.

For smooth camera movements in some of the panning shots, the curve editor in Sequencer helped a lot to smooth out the motion from one position to another and keep the camera's pace in sync with what was happening in the frame.
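The kind of easing you shape in the curve editor can be sketched mathematically. Below is a simple cubic ease-in/ease-out (smoothstep) between two camera positions; the positions are hypothetical, and this is just one of several interpolation shapes the curve editor can produce:

```python
# Sketch of an ease-in/ease-out curve for a camera pan, the kind of
# motion shaping done in Sequencer's curve editor. Values are hypothetical.

def smoothstep(a: float, b: float, t: float) -> float:
    """Cubic ease between a and b for t in [0, 1];
    velocity is zero at both endpoints, so the pan starts and stops softly."""
    t = max(0.0, min(1.0, t))
    t = t * t * (3.0 - 2.0 * t)
    return a + (b - a) * t

start_x, end_x = 0.0, 400.0  # hypothetical pan along X, in UE units (cm)
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(smoothstep(start_x, end_x, t))  # 0.0, 62.5, 200.0, 337.5, 400.0
```

Compared to linear interpolation, the eased curve avoids the abrupt start and stop that make a camera move feel mechanical.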

You can always push and see what you can do with your camera. Digital cameras can go to places a physical camera cannot, and we have seen this done in movies many times. But in my view, if the camera movement is too inhuman, the believability of the shot is lost. As a personal preference, I like to position my camera as if it were a viewer in the scene and keep things grounded. This was something I had in mind very early on. It is not just fancy rendering but also believable camera work that sells something to the audience and connects them to it. Too much flying around makes the CG feel obvious, and the touch with reality is lost.

Asset Production

The character in the short movie doesn't use baked normal maps of any sort. It does, however, have some micro-normal details for things such as scratches, done in Substance Painter. Apart from that, all details are modeled geometry. This was important because it was a cinematic character and the camera at times got very close to it, so some of these details had to shine. Because of that, I did not create a low-poly character and stuck with the high-poly concept model by Rafael Amarante.

The models also used UDIMs to get the most out of the 4K texture resolution limit we could garner from Substance Painter.
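For readers unfamiliar with UDIMs: they split a model's UVs across a grid of tiles, each with its own texture, so the total resolution isn't capped by a single map. The tile numbering convention is easy to sketch (this is the standard Mari-style scheme, not anything project-specific):

```python
# Sketch of the standard UDIM tile numbering: tile 1001 covers
# u in [0,1) and v in [0,1); tiles advance +1 per U column and
# +10 per V row (the convention assumes at most 10 columns).

def udim_tile(u: float, v: float) -> int:
    """UDIM tile number for a UV coordinate (assumes 0 <= u < 10, v >= 0)."""
    return 1001 + int(u) + 10 * int(v)

print(udim_tile(0.5, 0.5))  # 1001 (first tile)
print(udim_tile(1.2, 0.3))  # 1002 (one column over)
print(udim_tile(0.4, 1.7))  # 1011 (one row up)
```

Each tile then gets its own 4K texture, so a character spread across several tiles effectively carries far more than 4K of detail.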

There wasn’t much time dedicated to the texturing process. One of the aims of the project was to get the major part of the work done quickly. The texturing process had to be streamlined to meet this requirement. To achieve this, I went ahead with creating my own smart materials in Substance Painter and using Megascans textures.

After this, the mesh was exploded and taken into Substance Painter, where it was baked to get the additional maps used for procedural texture generation, like curvature, position, etc.

The explosion was necessary because a mesh with so many smaller parts would cause artifacts or overlaps when baking the mesh maps. The smart materials were then dragged and dropped onto the respective material sets on the mesh, and the result was quite decent for the time frame given.

The environment models followed a more or less similar approach, but with a bit more hand-painted work.

Challenges

I think the biggest challenge in this project was getting the best quality output from real-time workflows, output that could match current-day CG quality. For this, we followed a CG production pipeline when it came to asset production. Our character mesh was super high-poly, at 1.6 million triangles, and had around 59 materials on it, each with its own set of 4K maps. To make the best use of our time, we had to streamline the process by using smart materials for texturing and so forth.

The switch to real-time rendering saved a lot of time that otherwise would have been spent on iteration and rendering offline. The ability to see animations in a lit environment and review changes instantly was a game-changer.

However, we had a plan right from the start that we mostly stuck to throughout production. I had prepared a mock storyboard to figure out my shots and camera angles very early on. This made figuring out our animations relatively easy, and we could then frame our compositions and cinematography without much hassle.

If you have a clear idea of what you want to achieve right from the start, it helps over the course of production because, in my view, you lose a lot of time when you haven't figured something out. The sooner you realize your vision, the faster you can achieve it.

Jeffy Zachariah, CG Cinematographer & Director

Interview conducted by Kirill Tokarev
