Hi, I’m Dmitry Kremiansky, a Senior Environment Artist at Axis Studios, where I've been working for the past three years. I've worked on about 20 projects with them so far (trailers, cutscenes, teasers, etc.), including TESO: Summerset, the Outriders trailer, the Deathloop trailer, MTG War of the Spark, Days Gone and many others.
You can see most of them in my portfolio.
A couple of months ago, I was approached by the amazing NVIDIA team. They were looking for a scene to run in real time and showcase their groundbreaking RTX technology. Rather than reworking an old scene, converting its textures to PBR and making it real-time, I suggested creating a new scene in Unreal Engine that would work in 360 degrees. As I had to learn Unreal Engine on a relatively tight deadline whilst working full time, I had two choices: either purchase lots of premade assets and build a scene out of them, or try to make something I could capture textures and scans for within a couple of hours. That's when I chose to base it on Edinburgh.
There was no concept, but I had a rough idea of what needed to be done, so I played with some primitive shapes until I had an okay composition. Then I started modeling and texturing the first asset to test the approach. My goal was mainly to get familiar with shading and certain workflows in Unreal Engine that could be included in my pipeline, such as quick lighting tests, 3D concepting, or layout.
When I started the scene itself, I first made the blockout to see the scope. I also tried texturing relatively early, as it was important for me to have the windows and cornerstones aligned with the bricks, so I did a few back-and-forth iterations on that. I reused as many elements as I could to keep the scene optimized.
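Aligning openings with the brickwork comes down to simple arithmetic: dimensions should land on whole multiples of the brick module so the texture tiles meet the window edges cleanly. A minimal sketch of that idea, with an assumed (illustrative) module size:

```python
# Hypothetical sketch: snap a dimension to the nearest whole number of
# brick modules so tiling brick textures line up with window/corner edges.
# The module size is an illustrative assumption, not a value from the article.
BRICK_MODULE_M = 0.225  # one brick length plus a mortar joint, assumed

def snap_to_bricks(size_m: float, module_m: float = BRICK_MODULE_M) -> float:
    """Round a dimension to the nearest whole number of brick modules."""
    return round(size_m / module_m) * module_m

# A 1.3 m window opening: 1.3 / 0.225 ~ 5.78, rounds to 6 modules = 1.35 m.
snapped = snap_to_bricks(1.3)
```

In practice this kind of snapping happens by eye or with grid snapping in Maya rather than in code, but the underlying constraint is the same.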
The buildings in Scotland have surprisingly big windows and tall floors (instead of, say, 3 meters in height, they are usually 4 or more), which meant the scene ended up quite large. Most things were set up in Maya, and I was updating the assets in Unreal after each iteration.
I usually rely heavily on fill layers. With photo-based textures/scans, the Dirt Generator is a good tool to use, but it looks even better when the result is multiplied by an image-based fill layer so that the dirt is not uniform. Metal edge wear is also rather uniform, so it requires some attention as well - Slope Blur is a great filter for this purpose when used right. The secret is to have a few layers without overdoing it. I'd say some of my materials have just 3-4 layers. In my opinion, it's all about making each layer count and serve a purpose instead of having 30 layers for one material, which makes your whole scene bulky.
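The multiply trick above is easy to see in isolation. A small NumPy sketch of the blend (the masks here are toy values, not from any Substance project): the generator output is an even dirt mask, and the grunge image modulates it so dirt accumulates unevenly.

```python
import numpy as np

def non_uniform_dirt(generator_mask: np.ndarray, grunge_map: np.ndarray) -> np.ndarray:
    """Multiply blend of two grayscale masks in the 0..1 range,
    as a Substance-style fill layer set to Multiply would do."""
    return np.clip(generator_mask * grunge_map, 0.0, 1.0)

# Toy 2x2 example: a flat 0.8 dirt mask modulated by a varied grunge map.
generator = np.full((2, 2), 0.8)
grunge = np.array([[1.0, 0.25],
                   [0.5, 0.0]])
result = non_uniform_dirt(generator, grunge)
# Strong dirt survives only where the grunge map is bright;
# where the grunge map is black, the dirt disappears entirely.
```

The same reasoning applies to edge-wear masks: multiplying a uniform generator by a varied image is what breaks the "procedural" look.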
I have HDRI lighting in the scene, plus I added a light inside almost every light source. I also added fog to the scene, although it's very basic in some places. Most of the magic happens in the post-processing volume, where I'm grading the image quite a lot. I find that I get very different results from the same HDRI in, say, V-Ray and Unreal, but that might just be because of the way I'm using it.
Detail Lighting/Lighting without Post-Processing:
I feel I've learned a lot about optimizing UVs, especially for intersecting and hidden geometry. It is important to scale those parts down so that they don't reduce the texel density of the visible elements.