Fianna Wong shared some insights into the production of The Dawning, SideFX's short film created with Houdini tools: terrain generation, character animation, rendering, a look at Solaris, and more.
Fianna Wong: Hello! I have the pleasure to introduce to you the crew behind “The Dawning”. This consisted of Simon Atkinson, Cinemotion BG (Victor Trichkov, Milovan Kolev, Rossy Kostova), Nikola Damjanov, Akshay Dandekar, Nathaniel Larouche, Bruno Peschiera, Daniel Siriste, Steven Stahlberg, Akmal Sultanov, Bogdan Zykov, Olivier Orand, and then the internal SideFX crew Emily Fung, Kyle Climaco, Attila Torok and myself. Also, thanks to Henry Dean for input. I knew some of these people from the past, some came through connections and others were completely new.
I studied at Ryerson University in Toronto and have contributed to the releases of the past few Houdini versions, since H14.
The Dawning: Goals
The Dawning is an animated short and an internal production. The idea is simple: make some cool, fun stuff and, along the way, eat our own dog food (thanks Kim, I’m using that) and end up with demo scenes that we can test new features and break builds with; when they're ready, we can distribute them to the public for knowledge sharing.
Challenges Faced by the Team
Seventeen people worked on this project, but everyone had their part to play, and each task was defined and owned by one person. As for how I managed to communicate with everyone… there is that song by Jamiroquai, Virtual Insanity – sometimes it was like that. Not because of the team but because of me: keeping track of all the changes to every asset, the feedback, the iterations, going over updates for each and every thing in the animation. And I wasn’t even the one doing the actual work. So I am really fortunate with this crew: we were all on the same page, had a similar working style, and everyone is pretty chill. It only got really intense when production peaked around the H18.5 release, when we also had to take care of feature demos and all the visual material for the release (website, tutorials, etc.).
For most people, the main challenge was probably juggling the “daytime” workload and this side project. But it was also what made this project possible, timeline-wise anyway. We didn’t have anywhere to go because of the lockdown, so now there was this ‘free’ time. Maybe it sounds bad, but this is what happened.
There were other challenges too: using Houdini for character animation for the first time, Nathan rendering with Mantra for the first time (he comes from V-Ray and Redshift), interns who came to Toronto to experience the SideFX HQ and everything around it but ultimately had to work from a small bedroom, internet trouble (mine is the worst) and power outages (even with a UPS); and to top it all off, this project was also my first time producing an animation. But even with all these factors, it was super cool to work with everyone. With all the fun, intensity, and struggle, there was never a dull moment, and I think everyone is happy with the result.
The terrain work was done by Nikola Damjanov. Emily Fung and Kyle Climaco put some small cherries on top with small rocks and grain layers, and we also used some satellite scans, but those made up only about a quarter of the work. Nikola did the hero terrains – the foreground and midground (blended into the background) and the super close-up shot where the astronaut is looking through his visor.
Nikola Damjanov: All the terrains in the film were predominantly made using Houdini's heightfield system, but almost every shot had a unique problem that was solved using some of the other tools.
Usually, a terrain feature would start off from a very crude proxy geometry which was modeled to fit the camera movement and character animation. From there, we would use several layers of HF Distort to break up those simple forms and add visual interest.
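To make that layering idea concrete, here is a minimal sketch in plain Python/NumPy – a conceptual toy, not Houdini's actual HF Distort implementation. A crude base heightfield gets progressively finer, weaker layers of noise added on top; the `value_noise` helper is just a stand-in for Houdini's noise functions.

```python
# Conceptual sketch: stack several layers of noise onto a base heightfield,
# each octave finer and weaker than the last (the HF Distort idea).
import numpy as np

def value_noise(shape, cell, rng):
    """Cheap value noise: random values on a coarse grid, bilinearly upsampled."""
    coarse = rng.random((shape[0] // cell + 2, shape[1] // cell + 2))
    ys = np.linspace(0, coarse.shape[0] - 2, shape[0])
    xs = np.linspace(0, coarse.shape[1] - 2, shape[1])
    y0, x0 = ys.astype(int), xs.astype(int)
    fy, fx = ys - y0, xs - x0
    # Bilinear interpolation between the four coarse-grid corners.
    top = coarse[y0[:, None], x0] * (1 - fx) + coarse[y0[:, None], x0 + 1] * fx
    bot = coarse[y0[:, None] + 1, x0] * (1 - fx) + coarse[y0[:, None] + 1, x0 + 1] * fx
    return top * (1 - fy[:, None]) + bot * fy[:, None]

def distort(height, octaves=3, base_cell=32, base_amp=1.0, seed=0):
    """Add several distortion layers; halve cell size and amplitude each time."""
    rng = np.random.default_rng(seed)
    out = height.copy()
    cell, amp = base_cell, base_amp
    for _ in range(octaves):
        out += amp * (value_noise(height.shape, cell, rng) - 0.5)
        cell = max(cell // 2, 1)
        amp *= 0.5
    return out

proxy = np.zeros((128, 128))        # crude proxy terrain, here just flat
terrain = distort(proxy, octaves=4)
```

In the real network, the proxy would of course be modeled to match the camera and character animation first; the sketch only shows the "break up simple forms with layered noise" step.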
The next step would be to work on shot-specific features.
For really close-up shots where the astronaut is walking on the surface, we used the SOP modeling and fracturing tools to simulate the look of cracked, dried ground. We then projected those details onto a new heightfield and mixed it with the original one. That way, we get the benefits of both a procedural terrain and a hand-crafted one.
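The project-and-mix step can be illustrated with a toy NumPy sketch – purely conceptual; the actual setup uses SOP fracturing and heightfield projection. Sampled crack detail is rasterized into a grid and blended with the procedural base:

```python
# Toy version of "project SOP detail into a heightfield and mix":
# scattered crack samples are splatted into a grid, then added to the base.
import numpy as np

def rasterize(points, values, res, bounds=1.0):
    """Drop point values into a res x res grid (nearest-cell splat)."""
    grid = np.zeros((res, res))
    ij = np.clip((np.asarray(points) / bounds * res).astype(int), 0, res - 1)
    grid[ij[:, 1], ij[:, 0]] = values
    return grid

base = np.ones((32, 32))                     # procedural heightfield
pts = np.array([[0.5, 0.5], [0.25, 0.75]])   # crack sample positions (0..1)
crack_depth = np.array([-0.3, -0.2])         # carved-in crack detail
detail = rasterize(pts, crack_depth, 32)
mixed = base + detail                        # additive mix of the two sources
```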
Some of the shots required detailed cliffs to be made – they were produced by first detaching a piece of the original heightfield, where the cliffs should be. That piece of the heightfield was voxelized, turned to a mesh, and treated as such. Surface noise and remeshing nodes were used to get the initial rock look but then the mesh was sliced into individual strata layers with varied thickness. Every strata piece was then processed individually, getting another layer of local variation and edge damage. All the layers were combined into a single VDB where we added the high-frequency details using noises.
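The strata logic – layers of varied thickness, each with its own local variation – can be sketched in isolation. This is only an illustration of the layering bookkeeping, not the actual voxelize/remesh/boolean network:

```python
# Toy strata sketch: bin points into horizontal layers of varied thickness,
# then give each layer its own offset to suggest per-stratum variation.
import numpy as np

rng = np.random.default_rng(7)
n_layers = 6
# Varied layer thicknesses that together span a cliff 10 units tall.
thickness = rng.uniform(0.5, 2.0, n_layers)
thickness *= 10.0 / thickness.sum()
tops = np.cumsum(thickness)                  # upper boundary of each layer

heights = rng.uniform(0.0, 10.0, 1000)       # sample point heights on the cliff
strata = np.searchsorted(tops, heights)      # which layer each point falls in
strata = np.clip(strata, 0, n_layers - 1)

# Per-layer horizontal push-out, so each stratum reads as its own shelf.
layer_offset = rng.uniform(-0.2, 0.2, n_layers)
push = layer_offset[strata]
```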
For far-away landscapes, we even used real-life scan data of mountains and deserts, thanks to the super useful MapBox node.
No matter what the specific shot was, all of the approaches to terrain generation were finalized with similar steps. Once we had all the elements in place, we tied them up together using several layers of erosion and slumping, simulating the passage of time and accumulation of sand. Heightfield Scatter was the final touch with which we distributed medium and small rocks over the surface.
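The slumping pass can be illustrated with a textbook thermal-erosion toy: material moves from a cell to its lower neighbours whenever the height difference exceeds a talus threshold. This is not Houdini's HF Erode solver (and the grid edges wrap here for simplicity), just the underlying idea:

```python
# Minimal thermal-erosion ("slumping") sketch on a periodic grid.
import numpy as np

def slump(height, talus=0.1, rate=0.25, iters=50):
    h = height.astype(float).copy()
    for _ in range(iters):
        delta = np.zeros_like(h)
        # Compare each cell with its 4-neighbours via array shifts.
        for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
            diff = h - np.roll(h, shift, axis=axis)   # positive where we are higher
            move = np.where(diff > talus, rate * (diff - talus), 0.0)
            delta -= move                              # this cell loses material
            delta += np.roll(move, -shift, axis=axis)  # the neighbour gains it
        h += delta
    return h

rng = np.random.default_rng(1)
rough = rng.random((64, 64)) * 5.0
settled = slump(rough)
```

Because every unit of material removed from one cell is added to a neighbour, the total volume is conserved while the slopes relax – which is exactly the "accumulation of sand over time" look the erosion layers were after.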
Once the terrains were finished, everything was projected to a high-resolution heightfield and then converted to polygons. Depending on the shot, those models were segmented into square pieces – usually around 8 – and every segment was unwrapped to its unique UDIM tile.
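The segment-to-tile bookkeeping follows the standard UDIM numbering, tile = 1001 + column + 10 × row. A small sketch, with an illustrative 3 × 3 grid standing in for the "around 8" segments (grid size and point values are made up for the example):

```python
# Cut a terrain footprint into an n x n grid of square segments and assign
# each segment its own UDIM tile using the standard numbering convention.
import numpy as np

def udim_for_segment(col, row):
    """UDIM tile number for grid segment (col, row), both zero-based."""
    assert 0 <= col < 10, "a UDIM row holds at most 10 columns"
    return 1001 + col + 10 * row

def segment_points(xy, bounds_min, bounds_max, n=3):
    """Assign each 2D point to a segment of an n x n grid over the bounds."""
    xy = np.asarray(xy, float)
    t = (xy - np.asarray(bounds_min)) / (np.asarray(bounds_max) - np.asarray(bounds_min))
    cell = np.clip((t * n).astype(int), 0, n - 1)
    return cell[:, 0], cell[:, 1]            # (col, row) per point

cols, rows = segment_points([[0.1, 0.1], [5.0, 9.9]], (0.0, 0.0), (10.0, 10.0))
tiles = [udim_for_segment(c, r) for c, r in zip(cols, rows)]
```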
With meshes and all of the terrain masks exported (i.e. flow, debris, strata, etc.), we used Substance Painter to texture the terrain, as we could heavily rely on its Smart Material system to gain speed and consistency.
The astronaut was designed by Steven Stahlberg. He worked with Bruno Peschiera, who created the 3D model of the astronaut and the textures for it. Don’t ask me why the name R.Scott is there; we were calling him Gagarin in the beginning. (OK, Bruno admits that he’s actually a giant The Office nerd and it is a reference to Michael Scott.)
After we got the model and textures from Bruno, Bogdan Zykov took them into Houdini and rigged the character, complete with a little astronaut icon for the HDA. This was pre-H18.5, so the rig is an OBJ-level rig (not a SOP rig; we could not use KineFX motion retargeting yet).
After that, the rig went to the Cinemotion guys, who adjusted it to the needs of their animator. We went back and forth with some skin weight changes, and we also studied a lot of walking/climbing references and references of moonwalks from the 1960s. I worked with Victor Trichkov, the owner of Cinemotion, and iterated on the speed and motion of the astronaut until we were all happy with the result. Milovan Kolev was the animator behind the astronaut (he is a Maya guy, so he was using Houdini for the first time), and Rossy Kostova, Cinemotion's Character TD, helped him with the rig and animation in Houdini. I have no illusions about the challenges that come with using new software. Of course, every software developer would like to boast that their application takes no time to switch to, but we all know about muscle memory, existing software logic, hotkeys, and such.
Anyway, there were several iterations between Cinemotion and Bogdan to update the rig as needed. Once we had the astronaut animation, it went to Kyle Climaco to do the grain sim for the close-up feet shot.
Houdini Tools Used
With the exception of the Scatter and Align SOPs, and the new pyro tools from the recent H18.5 release, the rest is ‘vanilla’ or ‘old’ Houdini (H18.0), rendered with Mantra. “Vanilla” as in, nothing extra required outside of the product itself. The project goal was just to use Houdini to make an animation. So it was used for nearly everything – creating the terrain environments, layout, rigging, animation, FX (atmospherics, dust passes, footstep grains, dust devil, large smoke plume), shading, lighting, rendering. You can find out more about some of these aspects at www.sidefx.com/dawning; there are presentations and tutorials on various elements within the animation.
We did not use Solaris for the creation of The Dawning. We did use one shot to test one of the new additions to Solaris (Render Gallery and creating Snapshots) but the animation itself was done outside of LOPs and rendered in Mantra.
Speaking of the concept of Solaris, you can look at it as another sandbox in Houdini, but one specific to the data format created by Pixar (USD). Solaris is that sandbox: it lets you bring in USD data (which can also be authored in Houdini), and that data can be rendered with any Hydra delegate (any renderer that can read USD). I’m oversimplifying here. Within our customer base, Solaris speaks most loudly to larger studios with big pipelines that process a lot of data. USD allows the data to be broken down to the minutiae, so that assets – whether they are, for example, 3D models or FX elements – can be seamlessly updated with minimal impact on others working on the same scene. This benefit is passed down to layout and scene assembly, where you can use layout tools to dress up your scene non-destructively, build many layers of hierarchy, and have very specific control over each thing in each layer (shader overrides, instancing, variation). We work closely with large studios, so development is driven directly by their needs in this area.
When we are able to publicly present specific examples of how USD is used by a customer, we will certainly be happy to share that information and know-how.
The rendering for the seven shots was done by Nathaniel Larouche. The work on rendering started after the initial animatic was created, with locked-in camera cuts and the desired timing. Nathan explored several different versions, playing with tints of green/blue/orange tones, shadows, and highlights per shot.
Nathaniel Larouche: The workflow for digital environments can change drastically depending on the environment type. For The Dawning, the environment was a large desert with lots of haze and atmospheric dust blowing through it.
We baked all of the lighting into the textures for the deep background landmasses. It’s always a good idea to bake down as much lighting as possible in the background; this lets you focus all of your rendering resources on the foreground, where the viewer is looking. We usually figure out the lighting direction early on with a couple of low-quality renders and then move to high-quality bakes once the sun direction is decided. The nice thing about baked lighting is how easily the final asset can be shared between matte painting and compositing artists. If needed, these landmasses can be brought into Nuke and repositioned, or even have their textures augmented, without re-rendering. Another thing that helped dial in the final look was creating large ID maps to control the color and brightness of forms on the terrain. We ended up really cranking the brightness of the dried-up riverbeds in the final comps with these IDs.
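That ID-map grading amounts to a masked blend toward a brightened version of the plate. A minimal NumPy sketch – function names and values here are illustrative, not taken from the actual comp:

```python
# Use an ID matte as a mask that lerps each pixel toward a brightened copy.
import numpy as np

def grade_by_id(rgb, id_mask, gain=1.8):
    """Brighten pixels by `gain` where the ID matte is 1, leave them where it is 0."""
    rgb = np.asarray(rgb, float)
    m = np.asarray(id_mask, float)[..., None]   # broadcast mask over RGB channels
    return rgb * (1.0 - m) + (rgb * gain) * m

beauty = np.full((4, 4, 3), 0.2)        # flat grey "render" for illustration
riverbed = np.zeros((4, 4))             # riverbed ID matte
riverbed[1:3, 1:3] = 1.0
graded = grade_by_id(beauty, riverbed)
```

With a soft (grey-valued) matte, the same lerp gives a smooth falloff between graded and untouched regions, which is why ID AOVs are so handy in the comp.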
We rendered the midground and foreground terrain fully in Mantra with a simple sun-and-sky setup. Each light was output as a separate AOV so we could easily adjust its influence in the comp. We had a standard baseline lighting look that didn't change once the high-quality renders started; any time a shot was rendered again, it was the result of an animation or asset update.
The most important environmental element was all the drifting dust that kept the environment alive. For distant background elements, we used pre-rendered dust sims positioned on cards in the comp. These were art-directed into place and graded depending on whether they had to absorb light. The foreground dust was a mixture of rendered volumes and practical elements. What’s great about real volumetric dust is that the character and terrain cast shadows into it. All that was left was to mix some practical elements into its alpha to give it a little more texture.
We worked with AMD, who generously provided Nathan with a beast rig (a state-of-the-art workstation based on the AMD Ryzen™ Threadripper™ PRO 3995WX processor, with 64 cores and 128 threads). If it were not for the fact that other artists were doing their own animation iterations, Nathan could have simmed and rendered everything on that machine alone within a reasonable timeframe. Internally, all the artists who worked on this animation had machines powered by AMD Ryzen Threadrippers, and we also have the internal SideFX farm to send tasks to. Having our farm was necessary, since this animation was not the only thing we worked on during the year. But it was definitely critical to have a strong machine to work and iterate on shots locally before sending things to the farm. Uploads and downloads can be slow, or you may end up needing to run some tasks locally because of errors, so having these strong CPUs was a godsend.
Choosing to Learn Houdini
Houdini is a tool that one can choose to use, amongst many other tools available in the CG space. Can you do everything fast in it? It depends on what the task is and also on your logic. If you do side-by-side comparisons with other software packages on an assortment of tasks, sure, you will win some and lose some. But if you zoom out a few levels and need to do more complex things, like simulations and additional effects you haven't anticipated, you don’t have to worry about looking for an outside solution, because it's possible in Houdini. And if you don’t know how to do it, there is a load of information online, and there are more users than ever before who can give you pointers or even kindly share a *.hip file. If you really get stuck and have no clue what to do, you can ask developers on the sidefx.com forums and get some help there. There are also Facebook groups, Discord, OdForce – generally, people are willing to help you and give you some tips, unless you are raising your hand without genuinely trying first. Everyone who does 3D knows that 3D is not easy. And we know that Houdini has this added stigma of being “mega hard”. We are working on changing this, but it doesn’t happen overnight.
What’s the future of Houdini? We have a direction we would like to move in, but that direction is also being shaped by industry needs and changes. You always have to watch what people complain about, what people wish for, what new techniques people experiment with, and how things are being done elsewhere – and I can't speak for the developers, but they are always keeping an eye on whitepapers. Then there is also direct communication with customers. So we have a generous pool of things that influence the direction of the tool. I don’t have a clean answer here, only that our eyes are open. And of course, I cannot tell you what the next big thing is, so you will just have to wait and see!