An amazing team of exceptionally skilled artists shared a breakdown of the production process behind an Assassin’s Creed-inspired scene. Bao Ngoc Vu, Lenz Monath, Sarah Mai, Alexander Raab, Rik Joanmiquel, Peter Whiting, Tim Verhaert, and Giacomo Frega showed how the beloved franchise could shine in Unreal Engine 4.
Bao: My name is Bao Ngoc Vu and I was responsible for the idea of this trailer. I studied Art & Animation at the Games Academy in Frankfurt, Germany, and joined Yager in 2014, where I met some of the people I collaborated with on this project. Currently, I work as a Character Artist at Ninja Theory in Cambridge.
Alex: I am Alex, a Senior Technical Artist at Crytek Frankfurt. I studied at the Games Academy Frankfurt with Bao and have worked for Crytek since I graduated. I worked on Ryse: Son of Rome, the Back to Dinosaur Island VR demo, and Robinson: The Journey, and I am currently working on Hunt.
Rik: I come from Barcelona. I graduated from Animation Mentor in 2011, but I’ve kept taking workshops since then. Now I work as a Senior Animator at Ninja Theory.
Peter: I’m Peter Whiting and I’m from the South Coast of England. I studied Computer Animation at the University of Teesside and have been in the games industry for about 10 years. I’m currently a member of the small but perfectly formed ‘Cinematics’ team at Ninja Theory.
Tim: My name is Tim Verhaert and I live in Germany. I am a Motion Designer and I studied at the University of Applied Science in Berlin.
Giacomo: I am a sound designer currently working at Ninja Theory in Cambridge. Originally from Italy, I moved to the UK in 2010 to continue my studies in audio production and sound design. After graduating from Edinburgh University and Vancouver Film School, I worked as a freelance sound editor in London for a year before joining Ninja Theory in 2015.
I think of sound design as having two souls: on one hand, there is the practical need to attach a sound to everything you see or that happens on screen – in technical terms this is called “diegetic sound”. On the other hand, sound can be used to set the mood of a scene so that the viewer’s emotions can be guided through the experience – “non-diegetic sound”.
Assassin’s Creed Inspired Scene
Bao: Initially, I started doing a likeness study of Michael Fassbender. At the time, the Assassin’s Creed movie was also coming out, so my idea was to make a 3D version of the assassin character Michael Fassbender was playing and release it as a free download for the community before the movie came out. Needless to say, it didn’t go as planned: not only did I fail to hit the deadline, but the whole project became something else. At first I wanted a nice turntable with a simple background and a nice pose. But when I saw that Lenz was working on an environment back then, I asked him if we could merge our projects. From then on, I wanted to showcase the work Lenz and I had done in a more cinematic way.
I quickly realized that we needed more support, so I asked Sarah if she could help out on the VFX side, and when the other guys joined, I had this idea of a cinematic that could look like it came straight out of a possible game.
Giacomo: I got involved towards the end of the project once the video was in its final stages, even though I was aware the project had been going on for a while.
Because of the relatively simple scene portrayed, much of the sound had to come from the environment, supplemented with good foley for the character and some subtle non-diegetic sounds to accentuate the pacing of the editing. The final work on my part only took about 6 hours to reach the level of polish I was looking for.
Lenz: The environment started out as an exercise on photogrammetry. It served as a test case on how far I could get with simple capturing equipment and if it was possible to build a whole environment with my own scans.
I went to nearby forest locations to scan a variety of forest grounds, rocks, roots, etc., and processed them using RealityCapture and ZBrush.
It was an interesting challenge to develop a full pipeline covering every step from capturing the photos to game-ready assets, with de-lit textures and all the maps necessary for a PBR engine.
Lenz: The ground material consists of 4 different layers: a dirt ground, a forest ground with fallen leaves, a more rocky forest ground, and a layer for puddles.
The textures for these layers were all scanned and then processed to be tileable.
Photogrammetry was great for getting nice transitions here, as you can bake a very accurate height map from your scan, which you can then use for blending.
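In a UE4 landscape material this kind of height-driven transition is usually built with a HeightLerp-style setup. As a rough, per-pixel sketch of the math (the function name and the `contrast` parameter are illustrative, not from the project):

```python
def height_blend(a, b, height_a, height_b, mask, contrast=0.2):
    """Blend two terrain layers using their baked height maps.

    a, b       -- layer values being blended (e.g. one colour channel)
    height_a/b -- heights baked from the photogrammetry scans, 0..1
    mask       -- painted blend weight, 0..1 (0 = layer a, 1 = layer b)
    contrast   -- width of the transition band; smaller = sharper edge
    """
    # Bias each layer's height by the paint mask, then let whichever
    # layer "sticks up" higher win inside the transition band.
    t = (height_b + mask) - (height_a + (1.0 - mask))
    t = max(0.0, min(1.0, t / contrast + 0.5))  # clamp to 0..1
    return a + (b - a) * t
```

With a mask of 0.5, pebbles in the rocky layer's height map poke through the leaf layer first, which is what produces the natural-looking transitions described above.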
Lenz: The vegetation was built manually in 3ds Max. The photogrammetric workflow for the rest of the scene helped, as I could shoot tons of reference photos of the foliage I would have to build later on.
This proved very useful: I could already break down the types of assets I’d need to build while still at the scan location, with perfect reference material directly in front of me.
Lenz: For the environment, I went for a fully dynamic lighting solution, using a directional light, a skylight and distance field ambient occlusion.
On the directional light there is a material function that simulates the effect of clouds hanging over the scene, casting shadows on it.
From an artistic standpoint, it helped to support the mysterious mood and the tension of the scene.
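In UE4 this kind of effect is typically a light function material that pans a cloud or noise texture across the world and multiplies it into the light's intensity. The sketch below fakes the texture with two sine waves; the function name, panning speed, and scale constants are all illustrative:

```python
import math

def cloud_shadow(x, y, time, speed=(0.03, 0.01), scale=0.002, min_light=0.4):
    """Stand-in for a panning cloud-noise texture sampled in world space.

    Returns a brightness multiplier for the directional light in the
    range min_light..1, so shadows from "clouds" never go fully black.
    """
    # Scroll the sample position over time, like a Panner node.
    u = (x + speed[0] * time) * scale
    v = (y + speed[1] * time) * scale
    # Two offset sines at different frequencies give a vaguely cloudy
    # pattern in 0..1 (a real setup would sample a noise texture here).
    n = 0.5 + 0.25 * math.sin(6.28318 * u) + 0.25 * math.sin(6.28318 * (u * 0.7 + v))
    return min_light + (1.0 - min_light) * n
```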
Sarah: When I started working on the scene, I imagined really heavy, thick smoke hovering above the grass – like the mist you see floating above a field early in the morning.
To achieve this, I worked in 3ds Max and used the FumeFX plugin for my simulation. My timeline was set from 0 to 200, so I had a good variety of frames to choose from.
When I initially started my simulations, they were quite turbulent, but after a couple of iterations I decided to go for very subtle movement. Everything else seemed too distracting and noisy after I tested it in the engine.
I ended up with two flipbooks in UE4 and added some panning details on top of my textures (by using the Particle MacroUV node). I used the RGB lighting approach to light my simulation in 3DS Max (each color represents another lighting angle). This way I can control each texture channel individually in the engine. For example, the blue channel represents the key light so I would make this color in my material brighter than the rest. This gave me great control while tweaking the look of my particles to match the environment.
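The channel-weighting trick described above can be sketched as follows (the function and parameter names are mine; in the actual material the weights are simply scalar multipliers applied to the flipbook's texture channels):

```python
def relight_smoke(sample_rgb, light_r, light_g, light_b):
    """Recombine an RGB-lit smoke flipbook sample on the engine side.

    Each channel of the flipbook stores the smoke lit from one baked
    light direction, so weighting the channels re-balances the lighting
    per shot without re-simulating.

    sample_rgb   -- (r, g, b) texture sample, each 0..1
    light_r/g/b  -- intensity for each baked light direction (e.g. boost
                    light_b when the blue channel holds the key light)
    """
    r, g, b = sample_rgb
    value = r * light_r + g * light_g + b * light_b
    return min(1.0, value)  # clamp so stacked lights don't blow out
```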
To make sure that the particles blend well with the ground and nearby objects, I used a Depth Fade node.
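Conceptually, Depth Fade compares the translucent pixel's depth with the opaque scene depth behind it and fades the particle out as the two converge, hiding hard intersection lines. A minimal sketch of that math (the 100-unit fade distance is an arbitrary example value):

```python
def depth_fade(scene_depth, pixel_depth, fade_distance=100.0):
    """UE4-style Depth Fade: fade a translucent pixel toward zero
    opacity as it approaches the opaque geometry behind it.

    Both depths are distances from the camera in the same units;
    opacity reaches 1 once the gap exceeds fade_distance.
    """
    return max(0.0, min(1.0, (scene_depth - pixel_depth) / fade_distance))
```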
The fog sheet in the player area (when the camera faces the front of Aguilar De Nerha while he is looking over his shoulder, at 00:10s) is using a camera fade and has the option to fade out depending on the camera angle.
This way it will fade out naturally. In the background we are using some fill fog sheets which Lenz already created – I only added a panner node to the world aligned noise texture to add some subtle movement to them.
We also have some dust motes hovering in the air to sell the atmosphere. I am using GPU particles with a quite simple material setup: all I used was a sphere mask multiplied with a noise node.
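That sphere-mask-times-noise combination can be sketched per pixel like this (function and parameter names are illustrative, not from the project's material):

```python
def dust_mote_opacity(dist_to_center, radius, noise, hardness=0.5):
    """Sphere mask multiplied with noise, per particle pixel.

    The sphere mask gives a soft circular falloff; multiplying by a
    noise sample breaks it up so the motes read as irregular specks
    rather than perfect discs.

    dist_to_center -- distance of the pixel from the particle centre
    radius         -- opacity reaches 0 at this distance
    noise          -- 0..1 noise sample
    hardness       -- 0 = fully soft falloff, near 1 = hard-edged disc
    """
    falloff = 1.0 - min(1.0, dist_to_center / radius)
    mask = min(1.0, falloff / max(1e-6, 1.0 - hardness))
    return mask * noise
```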
Bao: It was a hard challenge to fit the character into the lighting Lenz had already established. When we were working on the cinematic, I turned the lighting off for the character and built the lighting for each shot individually. The character is lit with dynamic lights.
Later on, I asked Tim to help me out with the post in After Effects to add the black bars and adjust the lighting.
The Final Trailer
Peter: We used Matinee instead of Sequencer for the project. Sequencer has a ton of new features and is a great tool for complex game cinematics, but the initial scope of the project was really simple: create and render some dynamic turntable cameras. Based on this, the short time frame, and my own familiarity with Matinee, I felt that using Sequencer might not be advantageous.
Rik: The whole sequence is one long animated Maya scene, imported into Unreal as a single FBX file. It gets played on a skeletal mesh through Matinee.
When we discussed the character we decided we wanted subtle acting. It needed to feel natural and grounded so I used reference. I shot videos of myself in my room acting it out. As soon as I had a rough pass I gave it to Pete who enhanced it with camerawork. He framed it so we could see the best parts of my animation.
We iterated a few times, both the acting and the cameras until we got something we were happy with, and then I refined the animation to the camera.
Getting feedback was crucial for that final part: I showed it to as many co-workers as I could, applied the changes, and showed it again, and again, and again.
There are still lots of mistakes and I would obviously like to fix those but at some point you just have to let it go.
Time Costs and Challenges
Bao: I started on the character back in October 2016, and we finished the whole project in March.
I think the main challenges were time management and merging everything together. Since we all worked on this in our free time it was really challenging to keep going.
Rik: In terms of rigging, cloth, and animation, I started in late January and finished in mid-March. Bao hadn’t planned for this model to be animated, so rigging and skinning were quite challenging because of how the mesh was constructed.
I tried to rig and set up the character myself, but my skills weren’t enough for that level of quality. I kept bothering Jacob Feeley, our tech artist at Ninja, with a pile of questions; he was really helpful, but some things were just above my level.
I almost gave up on it, and then Alex Raab joined us around February. He fixed all my mistakes in rigging, skinning, and cloth, and I was able to concentrate on finishing the animation.
Bao: A big thank you to everybody who worked on this cool trailer. I couldn’t be happier knowing that there are some really amazing and talented people out there willing to spend their free time collaborating on projects like this. There will be more coming.