Working on The Last of Us Fan Art Environment

With The Last of Us Part II finally released, Anton Syrvachev prepared a timely breakdown of his fan art project made with Blender and scanned data.

Introduction

Hi, I'm Anton Syrvachev, an Environment Artist from Saint Petersburg, Russia. I'm currently working at Battlestate Games on Escape from Tarkov.

I am a self-educated artist and learned everything from YouTube videos and various tutorials available online. So far, I have worked on two major projects: Call of Duty: Modern Warfare (2019) and Escape from Tarkov. Peter Gubin and I also gave a talk about using Blender in gamedev; check it out here:

I got into 3D due to the desire to make my own games; at that time I had a dream of making a game on my own. In my mind, I wanted to make the whole game by myself: coding, modeling, animation, VFX, etc. But as I progressed with various small projects, I discovered that I was captivated by creating 3D environments.

When The Last of Us came out in 2013, it further cemented my wish to work in the video game industry. I got really inspired by the Grounded: The Making of The Last of Us documentary. In that documentary, I saw the concept art by Aaron Limonick which I eventually based my latest environment on.

But at the start of my journey in art, I was really inexperienced, so I started off by making small 3D scenes in 2015 with the intention of making big, sprawling environments one day. Environments in games always take center stage for me; exploring them and discovering what artists' imaginations have created brings me much excitement. While looking at The Last of Us, I also really liked the way Naughty Dog approached environmental storytelling.


The Last of Us Fanart Project: Goals

I started the Last of Us scene while working on Call of Duty: Modern Warfare (2019) at Trace Studio and wanted to make this environment using workflows that I picked up during development: extensive usage of photogrammetry and custom vertex normals on beveled geometry. The goal of this scene was to channel the knowledge and skill I acquired into a single environment. As mentioned above, I based my work on the concept art by Aaron Limonick which was my main reference, as well as the game The Last of Us itself.

Modeling

The scene required a lot of modeling. I think I modeled around 50 assets for this one.

My workflow for architectural assets in Blender involves procedural UV-mapping while the geometry is not final and changes still have to be made to bevels, etc. The bevels are applied procedurally by geometry angle or by marking edges with bevel weights for the Bevel modifier. All architecture was built from modular pieces in Blender, but I exported buildings to Unreal as single meshes. My most used addons for Blender are Texel Density Checker and Machin3tools; both are free.
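Texel density is the value an addon like Texel Density Checker reports; as a rough sketch of the underlying math (the function name is mine, not the addon's API):

```python
import math

def texel_density(texture_px: int, uv_area: float, surface_area_m2: float) -> float:
    """Texels per meter: texture edge length in pixels, scaled by the square
    root of how much UV space the mesh uses per square meter of surface."""
    return texture_px * math.sqrt(uv_area / surface_area_m2)

# A 2048px texture fully covering (uv_area = 1.0) a 4 m^2 quad:
# 2048 * sqrt(1/4) = 1024 px/m
print(texel_density(2048, 1.0, 4.0))
```

Checking this value across assets is what keeps texture resolution consistent between modular pieces.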

The most complex part of modeling was working with scans of vehicles. My phone camera was not a great choice for scanning hard-surface assets, but it was all I had at the time. The color information the phone captured was OK, but geometry-wise, the scan was very rough. In the end, I modeled a high-poly version of the truck below and baked it in two passes: one for color data from the scan and the other for the normal map from the high-poly mesh I modeled. However, I did bake a normal map from the scan as well and projected it onto my high-poly normal map in places where it would complement the texture. The same workflow was used for other scanned assets that had messy geometry. I think this approach takes the best of both worlds if you have a limited camera budget: you get the cleanliness of hard-surface modeling with the imperfections of the scan.
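The masked projection of the scanned normal map onto the modeled one boils down to a per-texel blend and renormalization. A minimal sketch (names and the 0-1 mask convention are illustrative, not the exact bake setup):

```python
import math

def blend_normals(modeled, scanned, mask):
    """Lerp two tangent-space normal vectors by a 0-1 mask (1 = use the scan),
    then renormalize so the result is still a unit vector."""
    blended = [m + (s - m) * mask for m, s in zip(modeled, scanned)]
    length = math.sqrt(sum(c * c for c in blended))
    return [c / length for c in blended]

# Flat modeled surface picking up half the scan's dents:
print(blend_normals([0.0, 0.0, 1.0], [0.6, 0.0, 0.8], 0.5))
```

Painting the mask white only where the scan's imperfections help is what lets the clean high-poly bake show through everywhere else.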

Working with Scan Data

I collected the initial scan data using my phone's camera for most of the scans; later, I got a Sony a6000 camera and it obviously gave much better results. No tripods, filters, or crazy setups were used - a stock lens was enough for my purposes. The scans were processed with RealityCapture.

The Clicker model was the most fun to make. My colleague from Trace Studio, Ilya Shichkin, took photos of me while I had to remain absolutely still, which is impossible, of course, unless you have a camera array.

After Ilya processed the photos in Adobe Lightroom, I took them to RealityCapture, sculpted over the scan in ZBrush, made a low-poly model, and painted it in Substance Painter. The whole process took 2 days.

Fun fact: if you wear black clothing, it won’t scan properly. So I had to texture my pants in real life by dirtying them up with chalk.

Foliage

The foliage was a bit tricky to get right.

The lighting in the scene is fully dynamic, so I had to use mesh distance field shadows on the Ivy to prevent it from looking flat: without them, the Ivy had almost no shadows/AO when placed far enough from the camera. I set up the Ivy as follows: I sculpted several Ivy leaves in ZBrush and baked them into one texture atlas. Next, I created Ivy modules in Blender and arranged them there instead of in Unreal. This is not an optimal solution, and I do not recommend it, but it worked in my case. After the modules were set up in Blender, I used the random selection function and tweaked the selected leaves so that the repetition of modules would not be visible. I then made another pass on the Ivy geometry and tweaked it a bit further for a more natural look. Overall, I do think it would have been better to just place the Ivy modules in Unreal. Certainly easier.
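The random-select-and-tweak pass can be sketched outside Blender; this plain-Python stand-in for the bpy selection tools shows the idea of jittering a fraction of leaves to break up repetition (all names and defaults here are mine):

```python
import random

def jitter_leaves(leaf_rotations, fraction=0.3, max_offset=15.0, seed=42):
    """Randomly pick a fraction of leaves and add a small rotation offset
    (in degrees) so repeated modules don't read as identical copies."""
    rng = random.Random(seed)
    count = max(1, int(len(leaf_rotations) * fraction))
    picked = rng.sample(range(len(leaf_rotations)), count)
    out = list(leaf_rotations)
    for i in picked:
        out[i] += rng.uniform(-max_offset, max_offset)
    return out

# Ten identical leaves; roughly a third get a random twist:
print(jitter_leaves([0.0] * 10))
```

Fixing the seed keeps the result reproducible between tweaking sessions, which matters when you iterate on the same modules several times.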

For the grass, I just took several Megascans grass textures and arranged them into a single grass atlas. I did not bake the grass because the grass from Megascans already had a normal map but I did lightly touch up the grass textures in Photoshop. I then mapped the grass onto flat/slightly curved polygons and arranged them in SpeedTree into grass 'islands' that I could scatter in Unreal.

Some 3D vegetation assets from Megascans were used as is.

One interesting detail: I added emissive to the material of the red flowers so that the red color would pop more against the green background.

Texturing

For texturing, I used a mix of photo textures, scanned surfaces, and Megascans.

To get materials, I scanned whole sections of walls from which I could make tileable materials and material variations.

Ground and wall textures are set up as materials that could be mixed together in Unreal Engine via vertex color painting, so nothing groundbreaking here.
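The vertex-paint blend is conceptually just a per-channel lerp driven by the painted vertex color. Unreal's material graph does this on the GPU; a minimal CPU sketch of the same math (my own function name, not an Unreal API):

```python
def vertex_blend(base, overlay, vertex_r):
    """Blend two material samples (e.g. albedo RGB tuples) by the red
    vertex-color channel: 0 shows the base, 1 shows the overlay."""
    return tuple(b + (o - b) * vertex_r for b, o in zip(base, overlay))

# A half-painted vertex mixes the two albedos evenly, toward (0.5, 0.5, 0.5):
print(vertex_blend((0.8, 0.7, 0.6), (0.2, 0.3, 0.4), 0.5))
```

In practice, a height-map lerp often replaces the plain lerp so the overlay settles into crevices first, but the driving input is the same painted channel.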

I always use material instances in Unreal for real-time tweaking of material parameters like normal map intensity, roughness, etc.

Scene Assembly & Lighting

I started off by placing assets in the scene within Blender but soon switched to Unreal, as the initial approach did not offer much flexibility when moving stuff around. Once assets were placed in the scene, I would tweak their materials using material instances or simply touching up their textures in Substance Painter.

Before landing on the final version of the composition and lighting, there was a lot of trial and error. In the end, I stuck closer to the initial concept in terms of composition and created much more contrast between shadow and light. There are not too many light sources in the scene; the directional sun light does most of the heavy lifting to sell the scene. Some point lights were placed to give soft highlights in areas of interest, along with rect lights to fill bigger areas with light.

In terms of post-processing, I simply tweaked shadow gamma and contrast in Post Process Volume.

I have to say that my friend Misha Kovyatkin helped me immensely at the lighting stage, along with Andrew Gubin. Without their help and input, I would not have been able to make the scene into what it is today.

Self-Reflection & Advice

The most challenging aspects were getting the lighting and composition right and achieving an interesting, voluminous look for the Ivy since it plays such a dominant role in the scene.

On the whole, the project must have taken me roughly 500-600 hours to finish. Some aspects of the scene took a very long time to get right. For instance, I re-made the Ivy three times before I landed on a version I liked, and I am still not too happy with it.

Speaking of possible improvements, I would like to make the scene more pleasing to the eye as a whole, not just from certain angles, and also add more detail to areas that are rough around the edges so that I would not need to hide them from the viewer.

My main advice when tackling big scenes is to get your blockout first. It absolutely must be rock-solid before you move on to the detailing pass. Put your blockout into a game engine as soon as you can and set up lighting and composition there instead of doing these tasks in your favorite 3D package. In addition to that, know your priorities and don't do micro-detailing if your scene does not work on a macro level.

Anton Syrvachev, Environment Artist at Battlestate Games

Interview conducted by Arti Sergeev
