Unreal Engine Generalist Sam Goldwater has shared a comprehensive breakdown of the New Moon cinematic, explaining how it was created, animated, and rendered using Blender and Unreal Engine 5.
Hi everyone! I'm Sam, from London, UK. My background is directing live action commercials, which I've done for the last decade. CG is an exciting new scene to me – I picked up Blender for the first time during the pandemic when international travel for my live action work was suddenly impossible. To my surprise, I quickly fell in love with 3D for the incredible creative agency it affords.
I appreciate this isn't new information to readers of 80 Level, but you have to understand: for someone who's spent many years accidentally setting off location smoke alarms with haze machines, put thousands of dollars into lighting rigs to fake the sun when winter daylight hours are short, or had a crucial mountain landscape shot for a car commercial in rural China written off by one unseasonably foggy weekend, the freedom to invent anything you can imagine on screen, with seemingly no limitations, is... Well, it's pretty cool.
In February 2023, I took part in the Unreal Worldbuilding Fellowship, a scheme that brings together around a hundred artists for a three-week program of online workshops and lectures. The goal for each student is a single completed environment, determined by a series of randomised variables that you receive in the first week and then work towards over the duration of the fellowship.
My randomised brief presented these constraints: "a kitchen/restaurant in an arid environment, set during a thunderstorm, built out of concrete, set in the 80s, texturally run down and must include a celestial event surprise twist". I started putting together some ideas in PureRef...
I’ve never been to the Irani cafes of Mumbai, but when I saw pictures I was struck by the incredible textures of these spaces and the combination of chic old grandeur with humdrum, modern detritus.
I’m not an experienced environment artist – in my past projects I had disguised a lot of art issues in cuts to new angles. However, part of the Fellowship brief was that the environment should be shown in a single continuous shot – there was nowhere to hide.
Using this image as reference, I started to block out the scene in Blender. Very quickly I realised I was digging a terrible geometry hole where architecturally I was completely winging it – nothing was standardised and the further I went, the messier it got!
So, I started again. This time I paid attention to how modular interior pieces are standardised by professional artists, and pretty soon I had a healthier basis for the space.
Of all the reference pics I’d found, it was, strangely, this one that I kept coming back to. I loved the over-bright windows motivating the highlights on the frame-right surfaces, and the quick dip to dark shadows under the cash register desk. I liked the checkered pattern tablecloth – something many of these cafes have in common. While this image is, at first glance, completely unremarkable, it is full of all kinds of detail that tell a story about where we are – to me, there was an allure in that.
By the time I had the room to this stage, even though it was still just a shell, I felt like there was some imaginative potential here.
As the build progressed and I made a million more choices, things shifted away from the reference, particularly in lighting, but by that point I felt secure with the direction things were going in.
Prior to the Fellowship, I’d been thinking about underplayed cinematic approaches to authoritarianism. How can a simple waiting room be charged with the atmosphere of an oppressive regime? Cigarettes, the slow rotation of a ceiling fan, distant sirens – I was interested particularly in how to conjure that atmosphere without the obvious signifiers.
The brief required a "celestial event surprise twist", so I started with the idea of a comet speeding towards Earth while an authoritarian state proclaims that there is no comet. I used Stable Diffusion to test some poster concepts and placed them into some free picture frames from Sketchfab.
All the prop material work in Blender for the project naturally needed baking to PBR maps to send to Unreal. For this I found the addon Simply Bake to be brilliantly robust – the UI is clear, it’s well maintained and has a great feature set without being overly complicated.
At my first weekly viewing with my classmates, I was in for a surprise – the authoritarian regime and its denial of the comet, which I'd tried to explain through these posters, were completely unclear. No one got it. I needed to simplify.
I decided the moon was a better subject than a comet – more easily recognised and weirder for the purposes of this story. I realised the intentions of the posters didn't even need to be clear – if they all seem to feature the moon, even for reasons we don't understand, that will be enough setup for the payoff: finally seeing it looming over the Earth, hundreds of times larger than it should be.
Finally, there was one other reference image I loved and wanted to find a way to dramatize.
Before starting in CG, I expected that this process would completely remove the surprise and spontaneity of live action sets, but actually I’ve found that isn’t true at all. The best arch asset I found was a 3D scan from a graveyard, and from the early versions of the set this part felt so gothic with the rain and the character waiting for the camera, that it made me think of Charon ferrying people across the Styx between the worlds of the living and the dead. I didn’t plan that, and it was exciting to see it materialize in the viewport.
From early on, I knew I wanted to build a sense that something was wrong. We don't know what it is, but our camera moving inexorably through this world can introduce us to further proof of this feeling and then hopefully at least partially answer our questions by the end.
The camera needed to feel omniscient, more powerful than the characters who drift in and out of the frame. David Fincher sometimes describes wanting his camera’s motion to feel inevitable – that felt like the right intention for this piece.
The timing of the camera animation took some iteration. We needed to see the points of interest along the camera’s path for long enough to parse them, but perhaps not quite as long as we’d like. This way I hoped we’d retain some enigma, and maybe a sense that these events are really happening and will continue outside of the camera’s view.
In CG we have the unbridled luxury of having the camera do whatever we can imagine, but in cases where the goal is a sense of physically plausible camera motion there’s one thing I think it’s easy to miss if you haven’t lugged one around before…
…Cinema cameras are heavy! Even a very lightweight setup like this could weigh 15 kg or more. On a physical set, moving this chunk of metal around in an ambitious piece of camera ‘animation’ takes a team of seasoned professionals, planning, creativity, and specialised equipment. Not only should the weight seriously affect a camera’s moment-to-moment inertia (read: our keyframes), but it’s also useful to imagine what physical method would be used to have it do what we have it doing.
For New Moon, I pictured the camera being driven by a telescopic crane on a short track; it retracts in through the window, dollies through the interior, then extends out towards the archway.
In the same way that realistic materials have imperfections, even shots driven by top-tier grip equipment will often have impurities in their motion – I used a loop of Cinemotion’s handheld ‘standing idle’ motion, at an almost imperceptibly low scale, to represent the imaginary head operator’s tilt/pan micro-adjustments.
It’s tempting to add more and more keyframes to try to manage very specific timing for camera animation but on this project I realised the better method for this kind of glacial movement was to reduce the number of keys and instead use weighted tangents to carefully manage the splines, usually individually per axis. I imagined the way a camera operator handles a geared tripod head – rotating each gear independently for the tilt/pan, or for us, the X/Y rotation.
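The effect of those weighted tangents can be sketched in isolation. The toy Python below is not Blender's f-curve evaluator or the actual New Moon camera data – just a single cubic Bézier segment between two hypothetical keys, showing how the tangent weights alone take the same pair of keyframes from a glacial ease-in/ease-out to plain linear motion:

```python
# Toy illustration: one cubic Bezier segment between two keyframe values,
# where the tangent "weights" w0/w1 shape the motion between them.
# Hypothetical numbers - not the real New Moon camera curves.

def bezier_value(v0, v1, w0, w1, t):
    """Cubic Bezier between values v0 and v1 at parameter t in [0, 1].
    w0/w1 are the out/in tangent weights: 0.0 gives a full ease
    (zero velocity at each key), while 1/3 reproduces linear motion."""
    h0 = v0 + w0 * (v1 - v0)   # out-handle of the first key
    h1 = v1 - w1 * (v1 - v0)   # in-handle of the second key
    u = 1.0 - t
    return u**3 * v0 + 3 * u**2 * t * h0 + 3 * u * t**2 * h1 + t**3 * v1

# Two keys, panning 0 -> 10 degrees. Flat weights (0.0) give the slow,
# glacial start and stop; 1/3 weights collapse back to constant speed.
eased  = [round(bezier_value(0, 10, 0.0, 0.0, t / 4), 2) for t in range(5)]
linear = [round(bezier_value(0, 10, 1/3, 1/3, t / 4), 2) for t in range(5)]
print(eased)   # starts and ends slowly, fastest in the middle
print(linear)  # constant speed throughout
```

Two keys and two well-chosen handle weights cover what would otherwise take a dense run of keyframes – which is exactly why pruning keys makes this kind of slow move easier to manage.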
This is the camera’s translation/rotation keys across the sequence.
Having the main character fall in and out of frame wasn’t only a creative choice. I planned to mocap my character action with a Rokoko Smartsuit Pro V2, and with only a small space to record in at home, I needed to avoid showing characters moving over large continuous stretches – I’d run out of runway in my real-world living room! Letting the main character fall out of frame meant I could divide her action up and avoid the complication of covering one long, continuous motion of her getting from the table to the archway exit.
From a witness cam, we can see how the main character’s animation is divided into three actions. For this project I captured almost everything with the suit, but the run action for the character outside, I sourced from Rokoko’s Motion Library, which has plenty of great mocap elements free or not too pricey.
The low-cost motion capture options available to us in 2023 are one of the most liberating elements of the workflow for someone like me with no animation training. Being able to capture action at this quality on a whim, and have it running in-engine a few minutes later, is magical.
There isn’t a lot of facial animation in New Moon, but so far in CG I’d always wrestled to sync believable timing of the head (from the suit) with the manual timing of eye blinks and eye direction.
Using Facegood’s tracker/retargeter Avatary and their low-cost helmet, the D2, the character suddenly felt alive once the basic head and body action was synced – even before cleanup.
It’s subtle, but to me, the correct eye/head sync was a crucial milestone in the believability of MetaHuman assets with an indie mocap approach.
The cloth sim elements – the main character’s jacket, the window couple’s headscarf and shirt – were assets I bought on CGTrader and simulated in Marvelous Designer. There’s some trial and error involved in this process: pinning the cloth to the character mesh takes some iteration to make it look right and not randomly fall off them, as in this image. I’m still a novice with Marvelous, but the ‘free’ detail in character motion, with clothes folding and moving around the characters, was such a gain that it was easily worth the time spent experimenting.
I’d planned to render the project in UE5’s path tracer, but the Fellowship delivery date was fast approaching. Render estimates were already at 48 hours or more on a 3080 for my planned 1,900-frame sequence – valuable working time I’d be giving up to rendering. The scene was still at an early stage at that point, but I was so impressed with Lumen’s GI approximation that I was sold on staying in deferred rendering. For sure, there are cases where the benefits of path tracing are undeniable, but with this project it was a relief to know I’d have more working time and less rendering time with so little lost in lighting quality. It brought the render times down from over two days to… forty minutes.
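For a sense of scale, the quoted figures work out to roughly the following per-frame costs – a back-of-envelope sketch using only the numbers above, since actual per-frame times vary across a sequence:

```python
# Rough per-frame render cost from the figures quoted in the article:
# ~48 hours path traced vs. ~40 minutes in Lumen, for 1,900 frames.
FRAMES = 1900
path_traced_s = 48 * 3600   # 48 hours, in seconds
lumen_s = 40 * 60           # 40 minutes, in seconds

per_frame_pt = path_traced_s / FRAMES      # ~91 s per frame
per_frame_lumen = lumen_s / FRAMES         # ~1.3 s per frame
speedup = path_traced_s / lumen_s          # ~72x overall

print(f"Path tracer: {per_frame_pt:.1f} s/frame")
print(f"Lumen:       {per_frame_lumen:.2f} s/frame")
print(f"Speedup:     {speedup:.0f}x")
```

Roughly a seventy-fold difference – which is why, for a deadline-driven project, the small loss in lighting quality was an easy trade.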
The American writer George Saunders has a great analogy for the writing process. Imagine if you bought a house, and prior to moving in, you went on an online shopping spree and bought everything you wanted. If we were invited over the day you moved in, we’d understand something about you from the things you’ve ordered – what they are, their colour, their arrangement. Compare that to what the house would look like after ten years of you living there, though – now what we see is a much more refined representation of your taste through your innumerable combined choices, conscious and unconscious, made over time. On day one this place could belong to lots of people; ten years in, though, it is absolutely unique. Saunders compares this to a first draft, and the many re-writes it takes to get to something truly your own. ‘Writing is re-writing’. This is the iterative process.
The old saying goes that you make a film three times – on the page, on set, and in post. To me, those phases still exist in 3D, but the lines are blurrier and vastly more malleable for the kind of iteration – re-writing – that Saunders describes. As always, there are things I’d do differently; those are the lessons to take to the next project. What I love about working in this 3D world is that whatever that next thing will be, the process towards it can start as soon as I finish this sentence.
You can check out more of Sam's projects by visiting the artist's Twitter and LinkedIn pages.