‘Adam’ is one of the most interesting projects Unity’s internal team has been working on. It shows an amazing quality of work, great design and the incredible capabilities of Unity 5. However, a lot of users wondered: what does it take to build a project like that? With huge budgets and expensive tools, talented teams can definitely do something like that. But what about the average Joe? What does it take to build such a glorious 3D production in the engine? We were fortunate enough to get in touch with the team at Unity Technologies and discuss some of the techniques and production secrets behind ‘Adam’. Hopefully, this will answer some of the questions many readers have about the project.
Veselin Efremov: Our team consists of 8 people, each one specialised in a certain area.
Torbjorn Laedre, our tech lead, and I worked together in a previous company and shipped two Unity-based games together. A game industry veteran, he’s the backbone of our productions.
I joined Unity in 2014 as the writer/director and only artist of our previous demo “The Blacksmith”, which was revealed in March 2015. Robert Cupisz, who had been a graphics programmer in Unity with a focus on lighting for several years, decided to join the demo team soon after. When we needed to expand the team, I reached back to some of the most talented people I had worked with before, and that’s how our Animation Director Krasimir Nechevski and 3D Artist Plamen ‘Paco’ Tamnev came on board.
Most people on the team had some Unity experience already, and some picked it up when they started. We are always working with upcoming and experimental Unity features during our productions, checking them out and giving feedback to the developers, so there are always new things to learn.
What were the main features and parts of Unity 5 that you wanted to showcase with this amazing project? What are the technical highlights of the ‘Adam’ film?
We wanted to put the upcoming SSRR and temporal anti-aliasing effects to the test; both are already available as packages on the Asset Store. In “The Blacksmith” demo, we had deliberately avoided having very reflective surfaces. But this time, the existence of these effects called for pushing them to their limits, so already at the early stages of ideation I was thinking about a story which was bound to include lots of shiny metal.
We were already quite familiar with the physically based shading in Unity 5, but while in “The Blacksmith” we explored more natural materials (wood, leather, cloth, stone), in “Adam” it was time to expand the palette with more variety, so we included artificial, industrial ones (metal, rubber, plastic, concrete).
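As background (this is standard physically based shading theory, not anything specific to the Adam team’s setup): what chiefly separates the ‘natural’ palette from the ‘industrial’ one in a PBR workflow is the specular reflectance at normal incidence, F_0, in the Fresnel term:

```latex
% Schlick's approximation of Fresnel reflectance, used by most
% real-time PBR shading models:
F(\theta) \;=\; F_0 + (1 - F_0)\,(1 - \cos\theta)^{5}
% Dielectrics (wood, leather, cloth, stone): F_0 \approx 0.02\text{--}0.05,
% with surface colour coming from the diffuse albedo.
% Metals (the new palette): essentially no diffuse term; F_0 is large and
% tinted, so the look is dominated by what the surface reflects.
```

That dominance of reflected content is exactly why a metal-heavy scene leans so hard on reflection effects such as SSRR.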
When it comes to lighting and rendering, Unity is constantly improving its features, and we use everything that comes our way, sometimes prototyping custom solutions on top of the engine (we don’t change source code). In “Adam”, Robert implemented a research paper for real time area lights, which he got from the Unity Labs team; as well as a solution for tube lights and volumetric fog. These were powerful tools in my hands as a lighting artist. Something we always highlight when it comes to the Unity engine is its flexibility to be expanded and customized in ways that liberate and empower the artists on a production team.
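The interview doesn’t name the research paper, but Unity Labs published a real-time polygonal area-light technique based on linearly transformed cosines (LTC) around this time; assuming that is the work referred to (an assumption on our part, not something stated above), the core trick can be sketched as:

```latex
% A clamped-cosine lobe D_o is warped by a 3x3 matrix M, fitted per
% roughness and view angle. Integrating the warped lobe D over a
% polygonal light P reduces to integrating the plain cosine lobe
% over the transformed polygon:
\int_{P} D(\omega)\,\mathrm{d}\omega
  \;=\; \int_{M^{-1}P} D_o(\omega)\,\mathrm{d}\omega
% ...and the cosine integral over a spherical polygon has a classic
% closed form as a sum over the polygon's edges (p_i, p_{i+1}):
\int_{M^{-1}P} D_o(\omega)\,\mathrm{d}\omega
  \;=\; \frac{1}{2\pi}\sum_{i}\arccos(p_i \cdot p_{i+1})
  \left(\frac{p_i \times p_{i+1}}{\lVert p_i \times p_{i+1} \rVert}\right)
  \cdot \mathbf{n}
```

Because the matrices can be precomputed into a small lookup texture, the evaluation stays cheap enough for real time, which is what makes such lights practical as per-shot tools for a lighting artist.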
Finally, with “Adam” being a cinematic demo, we used in-progress versions of Unity’s upcoming sequencing toolset, which gave us the opportunity to provide feedback and shape the tool in very close collaboration with the engineering team that is currently developing it.
Could you talk a little about the way the whole production is organized on such a huge project?
Our production process is a hybrid between what is typically done in game development and what is done in filmmaking. These are two very different worlds, but we take the best from both of them.
For instance, just like in film, we start with a script, storyboard, look development, and pay a lot of attention to direction. When I direct the piece, I want to have all the liberties a film director does, e.g. setting up lighting per shot, shot dressing, individual control of each camera. I like to work with film and theatre actors for their creativity in interpreting and co-creating with me the characters’ personalities and behavior. I also work with a DoP and colorist, as well as a sound designer, with primarily or exclusively film experience.
On the other hand, being able to work in a real-time engine allows me to have more freedom and transgress some of the limitations of film. We bring the iterative nature of agile software and game development into filmmaking. Very early in production, we start with a rough prototype of the film in-engine, and we go from there, gradually improving it until the end.
We work on all areas in parallel and allow them to mutually inform each other as they advance through production. For instance, concept art can be informed and adjusted based on decisions about animation; or I can change a camera based on some really nice effect that happened to be in the background and I want to show more of it. I get to experiment a lot with lighting and mood, because I see the result immediately and there is no cost in trying various ideas.
I can change my mind about a lot of things until very late in the process, something you can’t afford in traditional film production.
How do you produce your wonderful animations, and how do you integrate them into the engine?
Krasimir Nechevski: The base of Adam’s rig was done in 3ds Max with the help of Character Studio. On top of that I added many procedurally animated parts. We did not want to be forced to bend metal elements, so everything had to be designed to actually function mechanically. This required a lot of iterations on the concepts. The rig was then imported into MotionBuilder, where almost all of the motion capture cleanup and manual keyframing took place. After a shot was finished, it was brought back to Max, where the procedurally animated parts were baked, and the result was exported into Unity. The eyes had a really complex rig and were the only thing that was manually keyframed in Max.
Our motion capture was done at Cinemotion, a facility located in Sofia, Bulgaria, where it was easy to fly people in from the rest of Europe, while at the same time the price levels are much more affordable than elsewhere. This meant that our mocap budget allowed capturing several times instead of once. A lot of the nuance and overall feeling of quality came from the possibility to have this iterative process. For instance, one of the most challenging tasks was to achieve good sync between the actor’s performance and the camera. It required many rehearsals and wouldn’t have been possible if we didn’t have the opportunity for multiple iterations.
After we had captured the body, I made a separate take with a head-mounted camera which recorded the actor’s eye performance. I used it as a reference, to study the specific behavior and movement of his eyes, and was later able to replicate it when I was hand-keying the eye animation for Adam.
What’s your approach to character design? Do animators and character designers work together on these projects? How did you come up with that unique look for your robots?
Veselin Efremov: That took a lot of iterations. Every design decision has to fulfill an idea about the setting, story and tone of the film, so the characters are full of clues about them. In such a short format, where you can’t directly explain every bit of the story, visual storytelling is crucial. We are very happy that so many people got the information between the lines just by looking at the designs.
Some decisions are there for visual impact, for instance Georgi decided to cover the necks of the convicts so there would be no clutter of details close to their faces to steal attention from them.
Making the mechanisms work was also a huge task and there was a lot of back and forth there.
Georgi just published a very thorough article about production design in “Adam”, with a lot of insight into the thought process behind design decisions – very informative for other concept artists.
How did you approach the material production in your scene?
We used both Quixel and Substance. Both are amazing and it’s down to personal preference in the end. Our friends from Quixel gave us early access to their Megascans library, and that helped us tremendously, we just used the PBR-ready assets as they are, with the occasional small colour tweaks to fit our look better. Our 3D Artist used Substance Painter for most of his character and environment work. There is more information about his process in a user story on Allegorithmic’s website.
In engine, I requested some slight custom tweaks to the interface of the Standard shader, because I wanted to have some more control, but in general it’s a fairly straightforward process.
Could you discuss the way you’re building environment design for Adam?
The interior is actually the concept blockout Georgi made. We never had time to do proper models, so we ended up using his meshes. I just created a few different metal materials using Megascans textures and slapped them on top. The night before shipping, our 3D artist Plamen managed to find the time to model, unwrap and texture the doorway, and that’s about it.
The outdoors environment consists of three areas – the wall, the ramps with the platforms, and the meeting spot. Plamen made the wall, we contracted out the platforms and the broken highway, and I made the meeting spot and the backdrop — the areas containing most natural elements.
This time I didn’t use WorldMachine to create the terrain geometry, but decided to use scanned data instead. There’s so much free stuff nowadays – DEMs and Lidar point clouds – and the resolution is pretty high. I then brought these scans into WorldMachine and made the textures there, as using Google Earth data would’ve taken more time, which I didn’t have.
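Free elevation data of the kind mentioned above very often ships as Esri ASCII grid files. As a hedged illustration of what consuming such data involves – this is a generic sketch, not part of the actual Adam pipeline, and the sample values are made up – a minimal loader might look like this:

```python
def load_ascii_grid(lines):
    """Parse an Esri ASCII grid (a common free DEM format) into
    (header, rows of floats) forming a simple heightfield."""
    header = {}
    it = iter(lines)
    # The header is six "key value" lines:
    # ncols, nrows, xllcorner, yllcorner, cellsize, NODATA_value.
    for _ in range(6):
        key, value = next(it).split()
        header[key.lower()] = float(value)
    nodata = header.get("nodata_value", -9999.0)
    rows = []
    for line in it:
        if not line.strip():
            continue
        # Replace NODATA cells with 0.0 so the heightfield stays well-defined.
        rows.append([0.0 if v == nodata else v
                     for v in map(float, line.split())])
    # Sanity-check the grid dimensions against the header.
    assert len(rows) == int(header["nrows"])
    assert all(len(r) == int(header["ncols"]) for r in rows)
    return header, rows

# Tiny inline example; real DEM tiles are thousands of cells per side.
demo = """\
ncols 3
nrows 2
xllcorner 0.0
yllcorner 0.0
cellsize 30.0
NODATA_value -9999
10.0 12.5 -9999
11.0 13.0 14.5
""".splitlines()

header, heights = load_ascii_grid(demo)
print(heights[0])  # -> [10.0, 12.5, 0.0]
```

From a heightfield like this, one would typically resample and hand the data to a tool such as WorldMachine (as described above) for erosion passes and texture generation.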
Having to solve the natural environment for this project gave me the opportunity to dabble in photogrammetry, which I hadn’t originally intended, but it seemed like the only way I could pull it off among all the other things I had to do. When I needed a break, I would take my personal camera (consumer level and quite old, nothing fancy) and go out in the neighbourhood for a walk. I live in the southern part of Stockholm, close to a forest, an area with beautifully structured terrain, nicely shaped rocks and stones, small alpineums, interesting tree trunks and so on – so after every walk I returned home with a lot of material to process. One thing which wasn’t easy to find in the neighbourhood, though, was broken concrete and decaying human-made constructions – much to the Swedes’ credit!
In terms of content creation, I have to say photogrammetry is the way to go, nothing else comes close in terms of quality (if you’re after realism) and efficiency. We’ll be doing more of it in the future.
How did you create the lighting in Unity 5? Did you rely on Geomerics technology?
The interior has no baked lighting; it’s entirely lit by the realtime area and tube lights. For the exterior we used Enlighten for the big objects, while the smaller ones just use lightprobes. It’s a pretty light setup: on my 2+ year old PC the lighting of the whole project pre-computes in Enlighten in about 25 minutes. Once that is done, it’s a simple job: choose an angle and colour for the sun, and pick an HDRI sky.
The biggest challenge was lighting the two strangers, as they have a lot of dark materials, some very smooth, and also some shiny metal bits that pick up a lot of light, and that whole thing is under a bright sun.
Building an animation project is very different from building a game. How did new features like the Director Sequencer Tool help you build this beautiful animation?
Krasimir Nechevski: We worked in close collaboration with the Sequencing tool team. It was a mutually beneficial relationship where we got to influence the toolset and they got adequate feedback and a real-world user’s perspective. As a result, the Sequencer felt very comfortable and familiar. Ultimately, it grew to be the powerful tool we needed to make a movie inside a game engine.
The difference between making a film and a game is that for a cinematic project we only have to care about building what will be seen through the cameras, whereas for a game you have to tie the space together and allow players to examine it from more angles. So the environment work would have been greater if it were a game environment. Closing off the scene so it can serve as an environment for an interactive experience is something we’re working on at the moment, so that it can be useful to people when we release it.
Do you believe that in a couple of years most studios would work with real-time visualization technologies?
Veselin Efremov: You mean CG animation facilities and vfx studios? Yes, definitely, realtime technology is the way to go. (You might have seen the work Marza did with Unity on the short film The Gift, for example). It allows for fast iteration, which enables creative people to spend much more time exploring and trying out ideas, instead of waiting for hours for a single frame to render.
I strongly believe that talented people should not be working on tasks which don’t require talent. Any task which is repetitive or requires discipline and endurance, good planning, and persistence, is a task which is better left to a machine. Don’t inhibit the artist by forcing them to wait instead of moving fast, or expecting them to know what they want to achieve from the first sketch or prototype. Creativity comes from freedom, from trying things – sometimes randomly – and being allowed to make mistakes, which is only possible if you can move quickly to the next thing. In the traditional CG process, there is less room for mistakes and experimentation.