David Waldhaus, Stellan Joerdens and Marcell Füzes shared the way they’ve built, textured and rendered the amazing living city.
Our team consists of three members:
The youngest member of the team works fast and is enthusiastic about texturing and look development, which is why he was responsible for the textures, colour scheme and overall look. He also did the lighting, made sure the mood matched the concept, and handled the compositing and colour grading.
The second member has the most experience with creating trees and dynamics, which is the main reason he created all the vegetation and was responsible for the smoke and water simulations. He also has a feel for organic modelling, so he created the humans and birds for the crowds.
The third member loves to automate and streamline pipeline processes. He wrote several smaller and a few bigger scripts to speed up workflows and fix issues that occurred during the project. He also has experience with rigging and Golaem, so he was responsible for the crowd simulations and the basic rigs for the humans.
We study at PixlVisn Media Arts Academy in Cologne and will finish at the end of February. The project was created during our demo reel time, when we had to work independently on our own projects.
Inspired by the project India from the matte painting department, we had always wanted to combine our individual skills to create a big project where each of us could focus on their part. We created a timetable with separate deadlines for the blockout, asset creation, scene assembly, dynamics and lighting, rendering, and compositing.
First we created the blockout and animated the camera.
For the asset creation, we created an Excel sheet and assigned assets to one another so we could work on them independently.
Scene assembly: we split the city into different parts, worked on them individually, and referenced those scenes into one master scene to see how everything was coming together.
Lighting, crowd simulation and dynamics we worked on concurrently, as these were the last steps before rendering.
Main part of work:
- Creating a variety of assets that match the concept
- Comping all the different passes and grading to get a desirable look
- Assembling the scene with all the Assets
Architecture & Infrastructure
We painted over the concept to identify the different architectural buildings, and asked multiple people about the origins and historic dating of the buildings in the concept so we could gather more precise reference images.
We split the city into different architectural groups like towers, houses, unique buildings, etc.
We created a base layout of the city infrastructure in Photoshop and used it as a base for the blockout and the camera.
We looked at real-world city infrastructures on Google Earth. Our main reference was Florence, from which we took snapshots and used them as reference for a more detailed layout of the different city blocks.
The main building is the Jahangir Mahal in Orchha, India. Some other elements from that architectural style (Mughal architecture) were reused on different assets.
We had no experience with Houdini, but we thought about a procedural way of laying out the city. As we worked in Maya, we tried using MASH to lay out the city. Unfortunately, this gave us a uniform look, so we decided to lay out the city manually.
We created several city blocks out of our assets so we did not have to lay out each house individually. This way, the large parts of the city that did not need an exact layout could be done in a reasonable time.
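As a rough illustration of that block-based layout (pure Python; all names and parameters here are made up for the example, the real version drove Maya instances), the transforms for the block copies could be generated like this:

```python
import random

# Hypothetical sketch: place copies of pre-assembled city blocks on a grid,
# with random 90-degree rotations and a small positional jitter so the
# repetition is less obvious.

def layout_city(block_names, rows, cols, spacing, seed=0):
    rng = random.Random(seed)  # seeded so the layout is reproducible
    placements = []
    for r in range(rows):
        for c in range(cols):
            placements.append({
                "block": rng.choice(block_names),
                "translate": (c * spacing + rng.uniform(-1, 1),
                              0.0,
                              r * spacing + rng.uniform(-1, 1)),
                "rotate_y": rng.choice([0, 90, 180, 270]),
            })
    return placements

city = layout_city(["blockA", "blockB", "blockC"], rows=4, cols=6, spacing=40.0)
print(len(city))  # 24 placements
```

Each placement would then be applied to an instance of the corresponding block group.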
We had three main categories of assets: hero assets, general buildings (mostly houses) and smaller props.
Hero assets were modelled so they could be subdivided at render time; they also had several UV patches and the highest texture resolution of all the assets.
General buildings were not subdivided at render time and had only one UV patch and a lower texture resolution.
Small props: all smaller props were put on a single shared UV patch, and the texture resolution was very low, as they appeared very small in the render.
After a certain distance we used a matte painting to replace the buildings.
Materials & Textures
We used photo-scanned textures from poliigon.com as a base for texturing.
From those scanned materials, we created a preset bank of smart materials in Substance Painter, which were easy to reuse on multiple assets across the city. Having this preset library also helped keep the style of the assets consistent, even when different people worked on them.
All materials used different ambient occlusion and curvature masks to create breakup within each material and to add edge dirt, moss or simple colour variation, using the great library of procedural grunge maps shipped with Substance to mask out the areas where the underlying bricks show through.
For assets positioned near or in the river itself, we used the world-position bakes from Substance and mask generators to create top-down gradients that mimic water from the river on the assets. Even though a lot of the reference buildings were made of plain sandstone, we had to create enough colour variation and breakup in our materials to make every house look interesting and unique, while still maintaining an overall fitting and consistent look across many assets. We added soft white edge highlights and, on top of that, a hard black edge shadow to make the edges pop, and also multiplied in the ambient occlusion to add more depth to each asset.
Crowd simulation tech
After creating a Golaem-ready file, creating a base crowd simulation is very straightforward.
We created variations for the humans, which could be used in Golaem to generate different characters.
The challenge was to populate the whole city with the least effort. As we had different city blocks, we created crowd simulations only for those blocks. With a Python script, we duplicated the simulations, offset them in time, and matched the translation of the already laid-out city blocks.
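A hedged sketch of that duplication logic (the real script called Maya/Golaem commands; the names, keys and frame step here are illustrative): each placed city block reuses that block's cached crowd simulation, offset in time so the copies don't run in sync, and moved to the block's position.

```python
# Illustrative sketch: build one duplicated-simulation record per placed
# city block, with a growing time offset to de-sync the copies.

def duplicate_crowd_sims(placements, frames_per_offset=13):
    sims = []
    for i, placement in enumerate(placements):
        sims.append({
            "source_sim": placement["block"] + "_crowdCache",
            "time_offset": i * frames_per_offset,  # de-sync the copies
            "translate": placement["translate"],   # match the laid-out block
        })
    return sims

placements = [{"block": "blockA", "translate": (0.0, 0.0, 0.0)},
              {"block": "blockB", "translate": (40.0, 0.0, 0.0)}]
sims = duplicate_crowd_sims(placements)
```

In production, each record would drive a duplicated cache node with its time offset and transform set accordingly.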
For the more unique parts, we had to create the crowd simulations individually, especially where the camera came close to the ground.
A basic workflow for a simple crowd simulation is the following setup: create three nodes, GoTo, Navigation and Locomotion, and add them all under a Parallel node. This way, your crowd members avoid each other while each heads to a random location.
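Golaem's behaviour editor works like a behaviour tree. Purely as an illustration of the idea (this is not Golaem's actual API), a Parallel node that ticks all of its children every frame can be sketched like this:

```python
# Illustrative behaviour-tree sketch: a Parallel node runs all of its
# children every tick, so an agent can head toward a goal (GoTo), avoid
# neighbours (Navigation) and keep its walk cycle running (Locomotion)
# at the same time.

class Parallel:
    def __init__(self, *children):
        self.children = children

    def tick(self, agent):
        for child in self.children:
            child.tick(agent)

class Behaviour:
    def __init__(self, name):
        self.name = name

    def tick(self, agent):
        # A real behaviour would steer/animate; here we just record activity.
        agent.setdefault("active", []).append(self.name)

# GoTo, Navigation and Locomotion grouped under one Parallel node.
brain = Parallel(Behaviour("GoTo"), Behaviour("Navigation"), Behaviour("Locomotion"))
agent = {}
brain.tick(agent)
```

The point of the Parallel node is that all three behaviours stay active simultaneously, rather than running one after another.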
Water and Smoke simulation
We used Maya's Bifrost system to create the river. Since we did not need a highly detailed liquid simulation, we decided against the standard Bifrost liquid solver and instead used the BOSS plugin.
Using a combination of different deformers that control the behaviour of the water, we were able to create a subtle river simulation.
We used the wind control to set the flow of the river in the right direction.
We added low poly collider objects to simulate bow waves of the moving ships.
To create the feeling that people lived in the houses, we added chimney smoke.
All smokes were created using the Maya Fluid System.
To avoid repetition, we cached a long sequence of simulated smoke and used offsets in time and scale to give every smoke column an individual look.
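One way to pick those offsets (a hypothetical sketch, not the script we used) is to derive them from each chimney's name, so one cached smoke sequence reads as many different plumes and the values stay deterministic between scene loads:

```python
import hashlib

# Illustrative sketch: hash the chimney name to get a stable per-chimney
# time offset and uniform scale for the shared smoke cache.

def smoke_variation(chimney_name, cache_length=500):
    digest = hashlib.md5(chimney_name.encode()).digest()
    time_offset = digest[0] / 255.0 * cache_length  # 0..cache_length frames
    scale = 0.7 + digest[1] / 255.0 * 0.6           # 0.7..1.3 uniform scale
    return time_offset, scale
```

The same idea works with a seeded random generator; hashing just avoids having to store the seeds anywhere.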
We added a high-frequency normal map to break up the uniform river surface. The shader was quite basic; we used only transparency, basic colours and specular to define the physical properties of the water (the IOR is important).
Arnold 5 offers interesting depth-scattering methods, which we used to create the nice colour variation in the deeper parts of the river.
The birds were actually done with Golaem as well. I created two simple animations for the birds: a flapping and a gliding animation. These gave slight variation as the birds were flying.
In Golaem, there are several nodes that help you create animations in 3D; you can also set them up so the crowd avoids certain objects, in our example a tower.
We rendered on over 40 PCs at the school, so we had to come up with a way to quickly copy the project and start the render.
We created a batch file with three commands:
1. Run a python script
2. Robocopy to copy the project on the PC
3. Maya batch command to start the render
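As a hedged sketch of how such a batch file could be generated per render PC (all paths, server names, the helper script and renderer flags are illustrative, not the ones used in production):

```python
# Illustrative sketch: build the text of a Windows batch file with the
# three steps described above.

def make_render_batch(project_src, project_dst, scene, start, end):
    lines = [
        "@echo off",
        # 1. run a (hypothetical) progress-reporting Python script
        r"python \\server\tools\report_progress.py --status starting",
        # 2. copy the project onto the render PC
        f'robocopy "{project_src}" "{project_dst}" /E /MT:8',
        # 3. start the command-line render (executable path is illustrative)
        f'"C:\\Program Files\\Autodesk\\Maya2018\\bin\\Render.exe" '
        f'-r arnold -s {start} -e {end} -proj "{project_dst}" "{scene}"',
    ]
    return "\r\n".join(lines)

bat = make_render_batch(r"\\server\projects\city", r"D:\render\city",
                        "city_master.mb", 1, 100)
```

Generating the file from a script makes it easy to assign each PC its own frame range.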
In the render options, you can add scripts to run before/after a frame or when the render is done.
During rendering, we got progress updates on the server for each PC, and after a render finished, the frames were uploaded to the server. We also wrote a simple UI where you could download the frames from the server, check the progress and open the log files of each PC.
- Multi-pass rendering > combining all the passes in Nuke
- Added all the separately rendered elements together, like humans, smoke, etc.
- Grading played a huge role in creating the desired look:
- Increasing the brightness of the highlights using the spec AOV
- Creating more colour variation between buildings and grading separate objects using Cryptomatte
- Cryptomatte is an amazing tool for object- and shader-based masking.
- Creating three different versions of the final grade for each segment of the camera move and fading them together afterwards to make every part of the move look great
Adding the matte painting:
- Put together a 3D projection setup using the different layers from the PSD
- Grading each layer to create a smooth transition between the painting and the rendered material
- Generating custom LUTs using the CMS test pattern to match the exact look from the PSD
- Adding volumetrics, which were rendered separately, improving them using the volume ray node in Nuke and merging the final result on top of the beauty layer of our shot
- Adding motion blur & lens distortion to create a realistic look
- Adding some glow / bloom effect on highlights
- Removing noise/flickering in each pass separately (firefly killer and denoiser nodes in Nuke)
- Painting the matte painting in Photoshop
- Gathering real-world reference of skies and cities matching the given mood
- Combining different plates to create a fitting matte painting
- Matching values from the reference images
- Importing the Photoshop file into Nuke and breaking out the layers in the Read node
To create a smooth transition between the 3D assets and the matte painting, we added fog to the background of our rendered material in Photoshop, using a curves adjustment and lifting the values. Since this fog only worked in 2D so far (the mask was painted in the 2D PSD document on a single frame), we had to find a way to mirror the effect in our 3D scene in Nuke. With the help of the p_ramp gizmo, we were able to create a 3D mask of the background area of the city using the position pass of our renders. To match the exact grade of the fog, we created a custom "fog LUT" using the CMS test pattern.
The LUT creation workflow:
- export the CMS test pattern from Nuke at the desired bit depth (Write node)
- import the CMS image into the PSD
- apply the curves
- export the edited CMS image
- import the edited CMS image back into Nuke
- hook the edited CMS image into a GenerateLUT node and write out the LUT
- apply the LUT using a Vectorfield node
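Conceptually, this round trip bakes the Photoshop curve into a lookup table. Purely as a plain-Python illustration of the idea (this is not Nuke's implementation), baking and applying a 1D LUT with linear interpolation looks like:

```python
# Illustrative 1D LUT: sample a grading curve at N evenly spaced input
# values, then look up any pixel value by linear interpolation between
# the two nearest samples.

def bake_lut(grade, samples=32):
    return [grade(i / (samples - 1)) for i in range(samples)]

def apply_lut(lut, x):
    pos = x * (len(lut) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(lut) - 1)
    t = pos - lo
    return lut[lo] * (1 - t) + lut[hi] * t

# Bake a simple "lift the shadows" curve (a stand-in for the Photoshop
# curves adjustment) and apply it to a pixel value.
lut = bake_lut(lambda v: v ** 0.8)
```

With enough samples, applying the baked LUT is indistinguishable from applying the original curve, which is why the CMS test pattern round trip reproduces the Photoshop grade so closely.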
All the vegetation in the city was built with SpeedTree. We created a library of different types of trees, smaller plants and building plants (ivy growing on house walls) so we could reuse them throughout the city and also had the option to build a whole forest. For trees closer to the camera, we added some dynamics to give them subtle movement.
Interview conducted by Kirill Tokarev.