Shatterline: Creating a Game Trailer Using Mocap, Maya & Houdini

Playsense's Andrei Bogdanovich and A. Liustsiber have shared the working process behind the recent story trailer for Shatterline, discussed the motion capture and animation pipelines, and explained how the trailer's VFX were set up in Houdini.

Introduction

Please introduce yourselves to our readers. What companies have you worked for? What projects have you contributed to?

Andrei Bogdanovich: My name is Andrei Bogdanovich. I work as a Computer Graphics Supervisor at Playsense. At the beginning of my career, I worked in small local studios. For some time, I was a freelancer, until I was hired at Wargaming, where I worked for ten years. During this time, I participated in so many big and complex projects that I probably couldn't even name them all. The list includes Wargaming Holiday Adventure, World of Tanks – What Real Men Choose, German Battleships – Cinematic Trailer, SABATON – Bismarck (Official Music Video), World of Tanks Blitz – Night Hunt, Soul Hunter, WoT Blitz – The Limited Edition IS-3 Defender, Master of Orion, Dark Fire Hero, Grime, Gollum, and, of course, the official story trailer for Shatterline.

A. Liustsiber: I work on the Playsense team as a Director, Script Writer, and Environment/Lighting Artist. I started my career with CS 1.5 frag movies and various game trailers, then got a job at Wargaming more than ten years ago. At that point, there were eight people in the video department. I was lucky enough to participate in various projects such as show pilots, game trailers (the most prominent of which was World of Tanks 9.0), and a load of motion graphics, showreels, and breakdowns meant to promote the company as one of the industry's best players; we even won an award at a CG event.

I have always been inspired by expressive visual stories. Ever since I was a child, I knew I wanted to inspire viewers the same way – so you can imagine my excitement when I joined the CG team. I was offered the chance to write and direct Wargaming Holiday Adventure, and then came the release trailer for "Master of Orion 3". Over the past eight years, I have worked on more than 20 videos of various levels of complexity. And today we are talking about Shatterline and other projects with Playsense.

The Shatterline Trailer

How and when did you start working on the Shatterline trailer? What were your main tasks?

Andrei Bogdanovich: If I'm not mistaken, we started working on the trailer in 2020. About a year prior, we made a teaser for Frag Lab for the same game. It came out well, so they came back to us. My main tasks were estimating, planning, technical and visual quality control, technical support, and research of technical solutions and approaches.

A. Liustsiber: We had to create an engaging teaser and a trailer explaining the main events of the game. During the creation of the teaser, we worked with the original idea of Frag Lab: Shelgard's squadron hits a rough patch near the wrecked aircraft. We developed this idea into a screenplay with a couple of cinematographic scenes leading the viewer to the inevitable clash.

Hammer Creative proposed the concept for the trailer. Their concept was rich in powerful messages and stylish visual solutions, which we tried to level up.

We also needed to highlight the project's strengths. We paid special attention to the characters, their unique abilities and immunity, fleshed out the Witnesses and their leader Alan Seer a bit, and, of course, showed a world on the verge of destruction after the events of "Strafe". I guess these were the main points our team focused on the most.

The Motion Capture Setup

Could you share some details on your mocap process? What rigs and suits did you use? How did you rehearse the scenes with the actors?

Andrei Bogdanovich: Allow me to digress here. The teaser we made for this game was shot entirely in our internal mocap studio. It was convenient, but it had certain limitations. Our studio was located in a relatively small space, about 5x8 meters with a ceiling height of 3.5 meters. In reality, the useful area of the set was even smaller: we used an OptiTrack optical system, and the cameras suspended around the perimeter of the room couldn't capture the space close to the walls. This didn't allow us to film dynamic scenes, long runs, or active interactions between characters. That is why we used the services of our partner MocapOne for the trailer. They had one of the biggest studios in our region and also used an OptiTrack system.

One of the disadvantages we experienced in full was the occlusion of the reflective markers. Since this is an optical system, the cameras installed on the set "film" the light reflected by the markers attached to the suits, and the data about the markers' positions is then interpreted as the movement of the performer's "skeleton". Naturally, if something blocks the reflected light, the information about a marker's position goes missing. This is why scenes involving falling down, lying on the ground, leaning against walls, and close interactions become problematic: some of the markers are occluded, and the system cannot reconstruct the motion correctly.

When an occlusion is brief, the system can fill in the animation curve, and the problem is solved automatically. In other cases, we had to clean up the data manually with the help of animators. What we needed from the actors was very precise execution of each task, to minimize the post-processing of the data. This is why we held rehearsals before the filming days.
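The automatic gap-filling described here can be illustrated with a toy version of the idea. This is a hypothetical Python sketch, not OptiTrack's actual algorithm: short occlusion gaps in a marker's position track are bridged by linear interpolation, while longer gaps are left untouched for an animator to fix by hand.

```python
# Hypothetical sketch of short-gap filling in a marker track.
# A track is a list of (x, y, z) positions, with None where the
# marker was occluded. Gaps longer than max_gap stay None and are
# handed off to an animator for manual cleanup.
def fill_gaps(track, max_gap=5):
    filled = list(track)
    i = 0
    while i < len(filled):
        if filled[i] is None:
            start = i
            while i < len(filled) and filled[i] is None:
                i += 1
            gap = i - start
            # Interpolate only short gaps bounded by data on both sides
            if gap <= max_gap and start > 0 and i < len(filled):
                p0, p1 = filled[start - 1], filled[i]
                for k in range(gap):
                    t = (k + 1) / (gap + 1)
                    filled[start + k] = tuple(
                        a + (b - a) * t for a, b in zip(p0, p1)
                    )
        else:
            i += 1
    return filled
```

For a marker that disappears for two frames between (0, 0, 0) and (3, 0, 0), this reconstructs the intermediate positions at roughly (1, 0, 0) and (2, 0, 0); a six-frame dropout stays unfilled.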

A. Liustsiber: We did a lot in our internal studio, but our main challenge was the work on the MocapOne set. We only had one day, so we did our best to prepare for it. We had two rehearsals with the actors, and based on those, we could set preliminary priorities according to the complexity and importance of the scenes. I tried to distribute the workload so that we stayed within budget, and we managed to make it exactly as scheduled. I would like to thank Actionschool for their brilliant preparation and our team for the help with adjusting their costumes.

Preparing the Animations

How did you clean the captured data and implement it in the final project? What did you have to tweak manually?

Andrei Bogdanovich: We were shooting roughly two types of data: clips for the main characters and clips for the extras. We automatically cleaned up most of the selected extras' clips with OptiTrack's native software, Motive, and fixed the jitter. In most cases, this was enough for the characters in the long shots.

Some clips, however, needed more thorough processing; for those, we used Maya or MotionBuilder. On top of all this, we refined the finger animation for the main characters and added a bit of the artist's touch – keyframe animation on top of the mocap to add expressiveness to the movements, make the poses punchier, and balance the compositions. We also added secondary animation for the main characters' gear. The process was iterative and lasted about three to four months.

A. Liustsiber: I can only say a huge thanks to the team for their hard work and devotion. They did much more than I could imagine. It was so nice to notice all the attention to detail and how our virtual characters were coming to life thanks to those smallest details.

The VFX Workflow

How did you approach destruction and different visual effects?

Andrei Bogdanovich: It's all pretty simple and straightforward. Our core software was Houdini. We assembled and rendered the scenes in it, as well as all the FX. Houdini allows the utmost flexibility in the design and implementation of the effects. We had several types of them.

The first type covered effects that recurred across many shots and were not "hero" effects – those with no interaction with other objects, or where that interaction could be ignored: muzzle flashes, ricochet debris, smoke from fires, fill smoke, and atmospheric effects. We simulated several variants of each effect and packaged it as an HDA, giving the artist the ability to switch between FX variations, slow them down, speed them up, toggle them on and off, and so on. In effect, the artist received a tool for quickly enriching the cut with "standard" effects, which usually make up the bulk of the effects in a project.
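The artist-facing controls such an HDA might expose can be sketched in a few lines of Python. This is purely illustrative – the real tool is a Houdini digital asset, and the class and parameter names here are invented: the idea is just picking one of several pre-simulated caches, retiming it, and toggling it on or off.

```python
import random

# Hypothetical model of an HDA's artist controls for a reusable effect:
# a set of pre-simulated cache variants, a time-scale knob, and an
# enable toggle. Names and structure are illustrative only.
class EffectInstance:
    def __init__(self, variant_caches, seed=0):
        self.variants = variant_caches  # e.g. {"flash_a": [frames], ...}
        self.enabled = True
        self.time_scale = 1.0           # >1 speeds the clip up, <1 slows it
        rng = random.Random(seed)       # seeded pick, so renders are repeatable
        self.variant = rng.choice(sorted(self.variants))

    def sample(self, frame):
        """Map a scene frame to a frame of the cached simulation."""
        if not self.enabled:
            return None
        cache = self.variants[self.variant]
        local = int(frame * self.time_scale) % len(cache)
        return cache[local]
```

An artist would then only adjust `variant`, `time_scale`, and `enabled` per shot instead of re-simulating anything – which matches the workflow described above, where standard effects are dropped into the cut and tweaked rather than rebuilt.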

The "hero" effects were the ones that attracted the most attention. They were made for a specific shot and camera: for example, the meteorite hitting the building, the appearance of a wall of crystals, the moment a blaster beam hits the glassheads and destroys them, the explosion of the cells, and so on. As I mentioned, these effects are custom-made for each shot. They are complex effects consisting of multiple layers. During the destruction sequence, for example, we simulated not only the destruction itself but also everything that goes with it: small fragments, particles, sparks, dust, etc. All these elements interact with each other and with the surrounding objects – big pieces fall and bump against the ground with trails of dust following them, while other particles appear at the cracks and fall, colliding with bigger pieces on their way. These effects are made by skilled FX artists.

The "procedural" effects use logic instead of simulation: in creating them, we don't use dynamic solvers, only math and logical dependencies. The generation of the red crystals is a nice example of this in the project. The crystal systems had a complex shape and structure: they consisted of hundreds of crystals, with one big crystal in the middle and smaller ones of various shapes and sizes arranged concentrically around it. Such conglomerates had to grow on walls, on the ground – on any surface, really. Clearly, to position thousands of crystals manually you would have to be either mad or very brave. So we developed a tool that generated thousands of crystals of the necessary look and shape after the artist simply outlined the area where the crystals had to grow. The tool had multiple settings that let us randomize and refine the appearance of thousands of crystals at a time. We also implemented an option for animating the crystals' growth.

Another good example is the generation of ash across the whole set at once – on hundreds of assets, taking displacement into account. There was also an option to account for the influence of wind and of obstacles on the diffusion of ash across a surface. For example, no ash would be generated under a car, because the car covers the ground; on the leeward side of an object there would be less ash, and so on.
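To make "logic instead of simulation" concrete, here is a hypothetical Python sketch of the concentric crystal-scatter logic. The actual tool is a Houdini asset working on painted surface regions; the function and its parameters are invented for illustration. One hero crystal sits at the center of the outlined area, and rings of progressively smaller crystals, randomized in size and angle, are placed around it – no dynamics involved, only math.

```python
import math
import random

# Illustrative sketch of the concentric crystal scatter (not the real
# Houdini tool): given the center and radius of an outlined region,
# place one large central crystal and rings of smaller ones around it.
def scatter_crystals(center, radius, n_rings=4, per_ring=12, seed=0):
    rng = random.Random(seed)  # seeded, so a regenerated scatter is identical
    cx, cy = center
    crystals = [{"pos": (cx, cy), "size": 1.0}]  # the hero crystal
    for ring in range(1, n_rings + 1):
        r = radius * ring / n_rings
        base = 1.0 - ring / (n_rings + 1)        # smaller the further out
        for i in range(per_ring):
            # Evenly spaced angles with jitter, so rings don't look mechanical
            a = 2 * math.pi * i / per_ring + rng.uniform(-0.2, 0.2)
            crystals.append({
                "pos": (cx + r * math.cos(a), cy + r * math.sin(a)),
                "size": base * rng.uniform(0.4, 0.8),
            })
    return crystals
```

The same pattern – deterministic placement driven by a few artist-facing parameters and a seed – extends naturally to the ash example: a coverage value per point computed from occlusion and wind direction, rather than a simulated ash fall.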

So, during this project, we created a lot of useful tools, and we are still using some of those.

A. Liustsiber: Besides directing, I also created and assembled the New York streets with the battle scenes and took care of the lighting in a couple of shots. While solving these tasks, I used standard Houdini techniques and the digital assets created by our technical team. For example, the crystal setup was simply projected onto a surface, which let me quickly edit the crystals to suit the camera and find the necessary shape. The dynamic volumetrics already existed as assets with a basic look set up; all I had to do was find a place for them and work on the shader. As for the weapons, we had an asset attached to the machine gun, so we only had to set up the timing of the shooting.

Conclusion

How much time did it take to finish the project? What are the main challenges when working on such game trailers?

Andrei Bogdanovich: It took us six to seven months to finish the project, and it was loaded with challenges. We had never worked with such a large number of characters: there were five main and 12 secondary characters.

We had never done so much mocap and animation – for each of the characters, we recorded, cleaned up, and polished the mocap. We had also never done photogrammetry of characters before. We held a casting for Alan Seer's part and set up the location. The equipment was provided by a photo studio – I don't remember their name – which supplied 30 cameras and tripods and set up the synchronization. As for the rest, it was an experiment.

The pandemic also broke out during the project, and we had to work remotely. This was one of the main challenges: it was difficult to organize creative work involving so much interaction, communication, and debate, and to build the infrastructure that made such work possible.

A. Liustsiber: The most complex task for me was organizing and carrying out the shoot – the schedule was really tough. From a rendering standpoint, the New York scene was so overloaded with objects that the viewport couldn't display even half of what was scattered there, and the scene still had to render nicely. The hardest part was optimizing the scene while, despite all that optimization, maintaining its aesthetics.

Andrei Bogdanovich, Computer Graphics Supervisor at Playsense

A. Liustsiber, CG Generalist at Playsense

Interview conducted by Arti Burton
