
Lost Temple: Creating a Scene with Blender and Megascans Assets

Andrii Stadnyk did a detailed breakdown of his Lost Temple environment, shared his Blender workflow and approach to tweaking Megascans assets, and gave a few tips on working in Blender.


Hi! My name is Andrii Stadnyk, and I'm a concept artist from Ukraine. Currently, I'm working as a senior environment concept artist at Room 8 Studio, where I create various types of concept art, from props and asset designs to concepts of locations, game levels, and mood exploration. 

My career as an artist in the game development industry began nine years ago when I got a 2D artist job. I started with UI assets for casual games and later decided to develop my passion for creating concept art further. Before switching to the game development industry, I gained some experience working as a graphic designer, and this experience helps me in my current work.

Final image.

About the Project

"Lost Temple" is the personal project where I experimented with a few things - Quixel Megascans, image projector technique, paint over for "painterly" look, and complex scatter in Blender. I had an idea of an artwork depicting some forgotten ancient temple, so it instantly came to my mind when I was exploring Megascans assets in Quixel Bridge. Having such a variety of Megascans assets available, I challenged myself to create an artwork using mostly 3D Megascans assets. As the end result, I wanted to get a "painterly" look for the artwork, so after adjusting everything in 3D, I moved to Photoshop.

I didn't gather reference images for this project on purpose, but I always collect compelling images that I find on the internet. Browsing this collection can spark an idea for a new project or help with a current one. Artworks from favorite artists like Sparth, Eytan Zana, Florent Lebrun, and others are also a great source of inspiration for me.

The main tools for this project were Blender, Quixel Bridge with Megascans, OctaneRender, and Photoshop.

Choosing Blender

My journey with Blender started a few years ago when I worked as a 2D artist, mostly with 2D assets, and decided to add a 3D workflow to my day-to-day tasks. Before that, I had some experience with 3ds Max and ZBrush, but at the time, 3D art was more of a hobby for me. Blender wasn't popular back in the day, and I decided to try it out of curiosity. It was Blender 2.79, and to be honest, my experience with it wasn't great, so I put Blender aside for some time. My interest in Blender increased significantly with the release of version 2.80. It was a real game-changer since it brought the amazing real-time Eevee renderer, a UI overhaul, and a more streamlined workflow in general.

A scene created in Blender with Eevee.

Blender is developing by leaps and bounds these days, and sometimes it's hard to keep track of all the new features and improvements. I follow the Blender.Today account on Twitter to keep up with Blender news. Pablo Dobarro is also worth following on Twitter: he's developing amazing sculpting tools for Blender and shares insights into their development.

Speaking of new features, the upcoming Blender 2.83 release will bring new sculpting tools, such as the Cloth Brush, the Sharpen Mesh Filter, Face Sets, and lots more. The feature I've been waiting for most is the new Undo system, which works much faster than Undo in 2.82, where a single Undo in a large scene can take a minute or more.

Start of the Project

I started by creating a simple blockout in Blender to establish the scale of the scene. I created a simple plane, placed a character from 3D Scan Store, and added Roman columns from Megascans. At first, I built the column array with an Array modifier but later changed it to instances (the Alt + D shortcut in Blender), since instanced geometry consumes less memory than geometry with an Array modifier. The columns' size and quantity went through a few iterations until I was satisfied with the scale of the temple. Then I imported large stones, two additional columns, and castle steps to create a "portal," which would be the main focal point.
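The memory saving from instancing comes down to simple arithmetic: a real copy stores its own vertex data, while an instance shares one mesh datablock and stores only a transform. A back-of-the-envelope sketch (the vertex count and per-vertex size here are illustrative assumptions, not measured from the actual scene):

```python
# A real copy (as the Array modifier effectively produces) stores its own
# vertex data, while an instance (Alt + D) shares one mesh datablock and
# only adds a 4x4 object transform per copy.
BYTES_PER_VERT = 12          # one float3 position per vertex (illustrative)
verts_per_column = 100_000   # hypothetical dense scanned column
copies = 40

full_copies_bytes = copies * verts_per_column * BYTES_PER_VERT
instanced_bytes = verts_per_column * BYTES_PER_VERT + copies * 16 * 4

print(full_copies_bytes // 2**20, "MB for real copies")   # ~45 MB
print(instanced_bytes // 2**20, "MB for instances")       # ~1 MB
```

The gap only widens with more copies, which is why a colonnade of heavy scanned meshes is a textbook case for instancing.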

Camera Choice

It was crucial to set up the camera position before adding other assets because the camera position defines how many assets will be visible. I tried a few camera angles and lenses and settled on a top-down position with a wide-angle lens, which helps capture the entire location and the massive size of the temple.

Adding Megascans to Blender

Since the final outcome of this project is a 2D image painted over in Photoshop, I set the export parameters in Quixel Bridge to 2K texture size. This texture size is quite sufficient for an image rendered as a base for paint-over. 2K textures also use a reasonable amount of GPU memory, so this texture size allowed me to add more 3D assets to the scene, such as grass, small branches, and rocks.
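The VRAM saving from dropping to 2K is easy to quantify. A quick sketch of the uncompressed footprint (ignoring mipmaps and GPU texture compression, which change the absolute numbers but not the ratio):

```python
def texture_mib(side_px, channels=4, bytes_per_channel=1):
    """Uncompressed footprint of a square 8-bit texture in MiB."""
    return side_px * side_px * channels * bytes_per_channel / 2**20

# Each 4K map costs four times the VRAM of its 2K counterpart,
# so halving the resolution leaves room for many more unique assets.
print(texture_mib(2048))  # 16.0
print(texture_mib(4096))  # 64.0
```

With several maps per material (albedo, normal, roughness, etc.), that 4x factor multiplies quickly across a whole scene.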

Megascans assets used in the scene.

Quixel Bridge comes with the Megascans add-on for Blender, which lets you import assets with one click, but it works only with Eevee and Cycles materials. I used OctaneRender for this project, and there are two ways to get Octane-ready assets into Blender. You can import Megascans assets with Cycles materials and simply convert them using Octane's built-in material converter. For large numbers of assets, doing that manually gets tedious, so I recommend the MSLiveLink Octane add-on for Blender. It does the same job as the official Megascans add-on but automatically creates the required Octane shaders. It's also free and frequently updated.

Example of shader setup made by MSLiveLink Octane add-on.

Populating the Scene

Scattered Megascans assets.

To populate the scene with plants and rocks, I used the Scatter add-on. I found some nice-looking grass clumps and small rocks in the Megascans library and fed them into the Scatter add-on. The tricky part was setting up the density of the grass, as it could freeze Blender completely. The Scatter add-on has an option to display only a percentage of particles in the viewport, and the camera clipping feature is helpful, too. But the main benefit of the add-on is the heavy automation of particle system setup. It saves hours of work, comes with ready-made scatter presets, and allows you to change parameters quickly.

With the help of the Scatter add-on, I set up three particle systems - one type of grass (Scatter preset "Cluster Grass MS"), another kind of grass with the same preset, and small rocks with the preset "Simple M."

Scatter panel with statistics and viewport options (Scatter add-on).

The density of all scattered assets is controlled by a texture, which I drew on the base plane in Weight Paint mode.
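The idea of density driven by a painted weight map can be sketched as rejection sampling - a simplified stand-in for what the Scatter add-on and Blender's particle system do internally, with a hypothetical weight function in place of the painted texture:

```python
import random

def scatter_points(weight_at, n_candidates, seed=0):
    """Rejection-sample points on a unit plane: a candidate at (x, y)
    survives with probability equal to the painted weight there."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n_candidates):
        x, y = rng.random(), rng.random()
        if rng.random() < weight_at(x, y):
            kept.append((x, y))
    return kept

# Hypothetical weight map: full density on the left half of the plane,
# zero on the right - like a hard-edged weight paint stroke.
points = scatter_points(lambda x, y: 1.0 if x < 0.5 else 0.0, 2000)
```

Painting soft gradients in Weight Paint mode gives intermediate probabilities, so grass thins out gradually instead of cutting off.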


Rendering

Blender has a built-in Cycles renderer, and there are lots of great artworks rendered with Cycles. Personally, I find Cycles a bit slower and less realistic than OctaneRender, so most of the time I use Eevee and OctaneRender in Blender.

OctaneRender is commercial software and normally requires a paid license, but OTOY released a free version of Octane for Blender. The free tier has some limitations compared to the paid version, but I believe these limitations are not critical for concept art creation.


Tweaking Textures

I didn't make any substantial changes to the Megascans assets' textures. To quickly add moss to assets, I set up a node group called "Moss," which I added to materials where needed. Inside this node group is a simple shader setup that mixes the moss material from Megascans with another material, using a triplanar texture node as a mask. This adds moss only on surfaces facing upward.

Moss material.
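The effect of that mask can be approximated in a few lines: the blend factor depends on how much the surface normal points up. This is a sketch of the idea only - the exponent is an assumed contrast control, not a parameter from the actual Octane triplanar node:

```python
def moss_mask(normal_z, contrast=4.0):
    """Blend factor between the base material (0) and moss (1).
    normal_z is the Z component of the unit surface normal; raising it
    to a power sharpens the transition between walls and floors."""
    return max(normal_z, 0.0) ** contrast

print(moss_mask(1.0))   # flat top: full moss
print(moss_mask(0.0))   # vertical wall: no moss
print(moss_mask(-1.0))  # underside: no moss
```

Clamping at zero is what keeps moss off undersides; the power term controls how quickly it fades on slopes.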

For the floor tiles, I used the 3D surface "Cracked Stone Floor" from Megascans. I initially also added water puddles in the center of the floor, but they didn't work, so I painted them over later.

Waterfall geometry and texture.

To make the "portal" more compelling, I added a sort of "magical waterfall" to show that something strange is happening there. It's just two pieces of simple geometry, a plane and a sphere. They share the same material, created with a texture from the LED Illumination pack from Photobash.org. The texture is plugged into the Albedo, Opacity, and Emission inputs of a Universal Material.

One side note about managing GPU memory usage: Windows 10 currently doesn't let CUDA applications (OctaneRender among them) use all available GPU memory. So, even with 2K textures, I quickly ran out of available GPU memory, and the solution was Octane's Out of Core feature. It uses part of your system RAM as extra GPU memory, so you can use more geometry and textures than the GPU memory size allows. The tradeoff is speed and stability.
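As a rough illustration of the budget involved - the 20% OS reservation and the card size below are assumed example numbers, not measurements - the effective texture budget with Out of Core looks like this:

```python
def usable_budget_gb(vram_gb, os_reserved_frac, out_of_core_gb):
    """Illustrative VRAM budget: the card's memory minus the share the OS
    reserves, plus the slice of system RAM Octane can page textures into."""
    return vram_gb * (1 - os_reserved_frac) + out_of_core_gb

# e.g. an 8 GB card with ~20% reserved by Windows, plus 8 GB of Out of Core RAM
print(usable_budget_gb(8, 0.2, 8))  # 14.4
```

The extra headroom comes at the cost the article mentions: textures paged through system RAM are slower to access than true VRAM.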


Lighting

Lighting plays a huge role in this scene. It creates the atmosphere and helps establish the focal point. At the beginning of the project, I tried to light the scene with the built-in Octane Planetary Environment, but it didn't work for me. So I dived into Octane tutorials, and Julien Gauthier's tutorials helped me a lot. After experimenting with lighting, I decided to use the image projector technique (so-called gobo lighting).

The image projector here is a simple polygon plane with a material assigned to it. It casts a texture onto surfaces like a real-world projector. The scale of the plane controls the size and sharpness of the projected image.

The scene is lit with two image projectors. The main projector creates a spotlight for the central part of the image, and the second one creates ambient lighting.

The other half of the lighting setup is fog. It instantly adds mood and atmosphere to your scene at the cost of increased render time. Basically, it's a scattering medium plugged into a specular material, which is assigned to a cube. One trick here is that the camera must be outside the cube; otherwise, it won't work.
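Why even thin fog separates depth planes so effectively follows from standard volume behavior: light surviving a path through a homogeneous scattering medium falls off exponentially (the Beer-Lambert law). A minimal sketch with arbitrary example densities:

```python
import math

def transmittance(scatter_density, depth):
    """Fraction of light surviving a straight path of the given depth
    through a homogeneous scattering medium (Beer-Lambert law)."""
    return math.exp(-scatter_density * depth)

# Farther objects lose light exponentially, which reads as atmospheric depth.
for depth in (0.0, 10.0, 20.0):
    print(depth, round(transmittance(0.1, depth), 3))
```

The same exponential is what drives up render time: the renderer has to sample scattering events along every ray through the volume.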


Paint Over and Post-Processing

For the paint-over in Photoshop, I rendered the image at 4K resolution. Usually, I render a few passes to composite later in Photoshop, but for this project, I used only the beauty pass.

Raw OctaneRender result (beauty pass).

Since I wasn't entirely happy with the colors of the rendered image, I played with the Color Balance and Vibrance filters. I also cropped the picture a little to enlarge the central part of the location.

Render result with applied color balance and vibrance.

For the paint-over, I used Richard Wright's brushes. There are two layers of overpainting:

The first layer is painted with the "O GR Wet Rag Impasto" brush to remove the "perfectness" of the 3D render (perfectly straight lines, photorealistic textures, etc.) and to create a painterly texture.

First paint over pass.

The second layer sits on top of the first pass. It's a Mixer Brush pass with the "X" and "X Scatter" brushes. It gives more variety to surfaces and makes them look more complex.

Second paint over pass.

After painting over with brushes, I fixed a few areas here and there (the harsh light reflection from the water on the floor and the top parts of the columns), then added a highlight on the "waterfall" with a soft brush and a small fog patch in the ray of light.

The next step was post-processing, where I used a non-destructive approach so I could add or adjust the effects applied to the layers. First, I converted all layers into one Smart Object and applied Smart Filters (Unsharp Mask, Camera Raw Filter, Lens Correction). This setup allowed me to remove or change the Smart Filters at any time without affecting the original layer.

Smart filters setup for non-destructive post-processing.

  • Unsharp Mask adds sharpness
  • Camera Raw Filter - Clarity and Texture increase "micro-contrast" in the image
  • Lens Correction adds chromatic aberrations

For the final touch, I added noise to introduce additional texture and some grain. In this case, I used a free film scan (Ilford Delta 400) from Boris Sitnikov, whose free scans are available on his website (in Russian).

The noise layer is set to Overlay mode at 50% opacity.
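The reason mid-gray film grain works well in Overlay mode can be seen from the standard Overlay formula. This is the common formulation for channel values in [0, 1]; Photoshop's exact pipeline also involves color management, so treat it as a sketch:

```python
def overlay(base, blend):
    """Standard Overlay blend: darkens below mid-gray, brightens above,
    and leaves the base unchanged where the blend layer is exactly 0.5."""
    if base < 0.5:
        return 2 * base * blend
    return 1 - 2 * (1 - base) * (1 - blend)

def with_opacity(base, blend, opacity):
    """Mix the blended result back over the base at partial layer opacity."""
    return base + (overlay(base, blend) - base) * opacity

# Mid-gray grain is invisible; only deviations from 0.5 add texture,
# and 50% opacity halves their strength.
print(with_opacity(0.3, 0.5, 0.5))  # unchanged: 0.3
```

This is why film-scan noise, which hovers around mid-gray, adds texture without shifting the overall exposure of the image.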

The Biggest Challenges

The biggest challenge of this project for me was the technical side, because of the number of models and textures used. I tried to keep everything real-time in Blender's viewport, so the Scatter add-on was a great find. The MSLiveLink Octane add-on also sped up the entire process of building the location and saved some time.

The entertainment industry in general, and the concept art field in particular, are highly competitive, so I think any activity that helps you develop and hone your skills is essential. I recommend always looking for new information, watching new tutorials, and learning different workflows, pipelines, and everything else that increases the quality of your art and makes the creation process easier.

And finally, I want to share some Blender tips that I use every day.

Structure - I always put assets in collections to keep the structure tidy and easy to read. The same principle applies to material names. This makes it much easier and faster to navigate the assets, and if you need to hand your file to another artist, there will be fewer questions about what is going on in the scene.

Leftovers - Running the "File > Clean Up > Purge All" command from time to time removes unused data (such as textures, meshes, and lights) from your scene. "Purge All" can dramatically decrease the size of a .blend file if it has accumulated a lot of leftovers. Alternatively, you can access this feature in the Outliner, in the "Orphan Data" view.

Texture Packing - Before handing the file to another artist, it might be useful to enable "File > External Data > Automatically Pack Into .blend." This forces Blender to collect and pack all required textures into a single working .blend file.

Compression - In most cases, a .blend file can be significantly smaller with compression. Enable "Compress File" in Blender's preferences (Save & Load tab) to automatically compress files on the fly (Blender uses gzip). Saving the file will take a bit more time.
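The on-disk saving comes from ordinary gzip, so the round-trip can be reproduced with Python's standard library. The payload below is a toy stand-in for a .blend file, not real Blender data:

```python
import gzip

# Highly redundant data (like many near-identical datablocks in a scene)
# compresses very well; gzip is lossless, so nothing is thrown away.
data = b"sample-header" + bytes(100_000)   # toy payload with zero padding
packed = gzip.compress(data)

print(len(data), "->", len(packed), "bytes")
assert gzip.decompress(packed) == data     # round-trips intact
```

Real scenes won't shrink this dramatically, but the principle is the same: you trade a little save time for a much smaller file.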

Thank you for reading, and I hope you found some useful information in this article!

Andrii Stadnyk, Concept Artist

Interview conducted by Ellie Harisova
