Power Overload: Creating a Sci-Fi Environment in UE4

Alberto Catalan Gallach prepared an extensive step-by-step breakdown of his latest 3D scene Power Overload. Software used: UE4, 3ds Max, ZBrush, Substance Painter, and Marmoset Toolbag.

Introduction

Hello everyone! My name is Alberto Catalan Gallach, I’m 23, and I’m currently studying a Master's in Animation & Digital Arts for AAA Video Games at Florida Replay in Valencia, Spain. My aim is to create solid environment and prop art for the video game industry, so I’m always looking for new workflows and tools that help me improve my skills as an artist and speed up my process. My goal is to one day join a team of artists and keep growing artistically.

Background

After finishing high school, I took a two-year program in Video and Audio Production, after which I enrolled in a 3D Graphic Design certificate program where I ended up focusing on video game art, at the same school where I study now.

For me, the art world for video games is not just a future professional career, but also a passion, so I spend many hours a day looking for new tips on Polycount threads, analyzing breakdowns on platforms like 80lv, looking for inspiration in many new artists’ work, or even analyzing many of the videos released by software creators on how to correctly use their tools.

Power Overload: Goals & Reference 

My main objective in making the Power Overload project was to create a sci-fi corridor and carry out the entire process of creating an environment from start to finish. Among my intermediate goals were gaining a deep understanding of modular design, improving my knowledge of high-poly to low-poly baking, hard-surface modeling techniques, texturing in Substance Painter, and environment creation in Unreal Engine (lighting, material creation, and blueprints), and also learning more about texture optimization for the engine.

I used a large number of references, both for the general style of the corridor and for the individual elements of the scene, and even for the lighting, but I always kept in mind that the final result had to be creative and original. My main inspiration was probably the work of Sergey Tyapkin.

I’ve also been soaking up game environments from Doom, Star Citizen, Alien Isolation, and many others, games that are spectacular in terms of art and environments. I carefully examined their modeling style, the composition of the scenes, the color palettes, etc.

Modeling Workflow

BLOCKING

The beginning of modeling was essentially the planning process. I started by deciding how many wall, floor, and ceiling modules were needed to achieve a complex result without creating too many modules. After doing blocking tests in 3ds Max, I ended up deciding to make 2 floor modules, 2 ceiling modules, and 5 wall modules, plus the door. To get the dimensions right, I imported a human base mesh with a height of 1.80 m into my Max scene.

After doing some dimension testing with boxes, I started a very simple blocking to define only the silhouettes and measurements of each module. To make sure that all the modules fit together perfectly, I decided they should all follow a 1-meter rule: every module dimension is a multiple of 1 meter (about 3.3 ft).
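
As a tiny illustration of that rule, here is a quick sketch (module names and sizes are made up) that checks whether a module's dimensions snap to the 1-meter grid:

```python
# Quick sketch of the 1-meter modular rule; module names and sizes are examples.
GRID = 100.0  # 1 meter, assuming the scene works in centimeters

modules = {
    "wall_A":   (400.0, 100.0, 300.0),  # width, depth, height in cm
    "floor_A":  (400.0, 200.0, 100.0),
    "pipe_run": (250.0, 100.0, 100.0),  # deliberately breaks the rule
}

def fits_grid(size, grid=GRID, tol=0.01):
    """True if every dimension is a multiple of the grid step."""
    return all(abs(round(d / grid) * grid - d) <= tol for d in size)

for name, size in modules.items():
    print(name, "fits the 1 m grid" if fits_grid(size) else "does NOT fit the 1 m grid")
```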

When planning the scene, I decided from the beginning to model two luminaires so that I would have real light sources.

Having solved the main modules, I decided to create a cable duct to cover the gap between the walls and the floor, since I had placed the floor slightly elevated compared to the walls. In addition, a modular set of pipes adds more complexity to the upper corridor area and therefore makes it more attractive.

The boxes that are scattered all around the hall were the last items I decided to make.

HIGH POLY

In the high-poly model creation phase, I worked in 3ds Max, adding every detail to each model before subdividing to get smoothed edges that will read well in the normal map when baking.

When working in 3ds Max, it is usually quite useful to add a new Edit Poly modifier every time we are about to make an important change in the creation process, accumulating the Edit Poly modifiers in the stack. This way, if we need to go back, we simply delete the unwanted Edit Poly modifiers, so we work non-destructively.

When modeling our high poly, we must keep in mind that every detail that is going to be baked into the normal map needs a bevel, even if in reality it would have a hard edge. Otherwise, those faces sit perpendicular to the surface being baked, the baker generates no gradient pixels for them in the normal map, and we lose that sense of relief.

To practice two different workflows, I decided to handle the subdivision in two ways: first, traditional subdivision with support loops and TurboSmooth, and second, subdivision in ZBrush. Let me explain each workflow with some examples:

- Traditional subdivision in 3ds Max

To create the high poly models in 3ds Max, I first built the model with all the desired detail and then added the support loops, either manually in a new Edit Poly modifier, with the Quad Chamfer modifier, or with the TurboSmooth modifier and its Smoothing Groups option (only for the less complex parts, since it is harder to control edge hardness this way); in that last case I set the model's smoothing groups beforehand. Finally, I added a TurboSmooth modifier, which I usually set to 3 iterations.
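
As an illustration only, here is a rough pymxs sketch of stacking those modifiers on a selected object; the property names are assumptions and may differ between 3ds Max versions:

```python
# Rough pymxs sketch (3ds Max) of the modifier stack described above.
# Assumes one editable-poly object is selected; property names may vary by version.
from pymxs import runtime as rt

obj = rt.selection[0]

# Keep the process non-destructive: each major step lives in its own Edit Poly modifier.
rt.addModifier(obj, rt.Edit_Poly())      # support loops would be added by hand in this modifier

smooth = rt.TurboSmooth()
smooth.iterations = 3                    # the 3 iterations mentioned in the text
# smooth.sepBySmGroups = True            # optional: respect smoothing groups (assumed property name)
rt.addModifier(obj, smooth)
```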

I decided to model many of the details that sit on flat surfaces and will only live in the normal map as floating geometry, keeping the base mesh as simple as possible to make subdivision easier.

The advantage of this workflow is that we get a high poly model with clean topology, and by later removing the TurboSmooth modifier and the Edit Poly modifier that contains the support loops, we get a very good base to start working on our low poly model.

In order to obtain the low poly model from this base, I removed all the geometry of the model that did not generate a silhouette; all the details that do not contribute to the silhouette will be drawn in the normal map and will work correctly. 

- Subdivision in ZBrush

For this workflow, a model is created in 3ds Max with all the desired detail. I set the model's Smoothing Groups the way I wanted and added an Unwrap UVW modifier. Within the UV Editor, I selected all the UV islands and flattened them by Smoothing Groups, regardless of whether the islands overlapped or were poorly relaxed, since the only purpose was to create the polygroups in ZBrush later.

The 3ds Max model with all the desired detail is imported into ZBrush with the FBX Exporter / Importer tool, and in the Polygroups section we select "Auto Groups With UV", creating the polygroups from the UV islands unwrapped in 3ds Max. After creating the polygroups, it is time to run a Dynamesh.

With the desired geometry in our model, we proceed to smooth the curved areas; for this, we use the Polish By Features slider until no facets are visible.

The last step to create our smoothed edges is to press Mask By Feature with “Groups” activated in the Masking section. This creates a mask at the polygroup borders. We grow this mask with GrowMask until it reaches the desired size. We can also use SharpenMask and BlurMask to give it the desired hardness.

When the mask covers the desired area, press Ctrl+Alt+Left Click in any empty area of the viewport to invert the mask and, after that, use the Polish slider in the Deformation section until you achieve the desired softness on the edges.

As in the 3ds Max subdivision workflow, we can use the model we brought into ZBrush as a starting point for the low poly, cleaning up the unnecessary geometry.

I created the gas bags of the wall module in Marvelous Designer, making a base model in 3ds Max first to run the simulation on.

UV Unwrapping

To unwrap the UVs with the 3ds Max editor I usually work as follows: I set the Smoothing Groups of the model correctly, always keeping in mind that if there is an angle change greater than 45 degrees there must be a change of Smoothing Groups.

Each change of Smoothing Groups implies a cut in the UVs, that is, a separate UV island, although additional cuts that don’t correspond to a Smoothing Group change can be added to relax the UV islands.
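
As a minimal sketch of that 45-degree rule (the vectors and threshold below are illustrative), the decision can be expressed as the angle between adjacent face normals:

```python
# Minimal sketch of the 45-degree rule: if the angle between two adjacent face
# normals exceeds the threshold, the shared edge gets a smoothing group change
# and therefore a UV cut. Normals are plain tuples for illustration.
import math

def angle_between(n1, n2):
    """Angle in degrees between two unit normals."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n1, n2))))
    return math.degrees(math.acos(dot))

def needs_smoothing_split(n1, n2, threshold=45.0):
    return angle_between(n1, n2) > threshold

# A 90-degree corner clearly needs a split / UV cut; a ~15-degree bend does not.
print(needs_smoothing_split((0.0, 0.0, 1.0), (1.0, 0.0, 0.0)))    # True
print(needs_smoothing_split((0.0, 0.0, 1.0), (0.0, 0.26, 0.97)))  # False
```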

To correctly unwrap the UV islands, I initially used the "Flatten by Smoothing Groups" option with all the islands selected, followed by "Quick Peel". If we have set the Smoothing Groups correctly, this gives us a very good basis for the unwrap, since the islands will already be separated correctly. If any island has too much tension that could deform our UVs, we introduce additional cuts. If any islands are not unwrapped correctly with Quick Peel, I usually use the Pelt Map option to get somewhat more precise control over the unwrap.

When all our islands are correctly unwrapped, it is time to put all the islands as straight as possible. For this, I usually start by selecting an edge that I want to position completely horizontally and pressing the "Align To Edge" option. Then I select the rest of the edges that are almost horizontal and press "Align Horizontally in Place", and "Align Vertically in Place" for the rest of the edges that are practically vertical to align them perfectly.

Putting all possible edges completely straight will prevent us from obtaining the jagged effect that is generated in our textures when there are inclined edges, and which is usually especially annoying in the Normal Map.

With all the islands unwrapped and as straight as possible, it is time to pack them. My method in 3ds Max's UV editor is to select all the UV islands and use the “Rescale Elements” option: this resizes all the UV islands relative to one another so that the texture pixel density is the same for every island.

Then I use the “Pack Normalize” option, adjusting the Padding value until I get the spacing I need.

This gives me automatically packed UVs that serve as a base; I then manually relocate the islands I see fit so as not to leave empty space in the UVs. If there is an empty space that can't be filled with any other element of the scene, I usually slightly increase the size of the UV islands that will be most visible in the scene, just as I usually shrink the UV islands that will barely be seen when I need more space on the UV map.

Since this was classwork meant to practice unique bakes, I was required to build the corridor without trim sheets or custom normals.

I decided to use a 2048x2048 map for every 200 x 200 cm of surface, resulting in a texel density of 2048 px / 200 cm = 10.24 px/cm. To manage texel density in 3ds Max, I usually use the texel tools of the TexTools plugin: I simply create a plane with the desired texel density, copy that texel density, and paste it onto another mesh. This scales the mesh's UVs to the right size for that texel density.
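
The arithmetic above, written out as a tiny helper (the numbers are the ones from the text):

```python
# The texel-density arithmetic from the text: 2048 px / 200 cm = 10.24 px/cm.
def texel_density(texture_px, surface_cm):
    return texture_px / surface_cm

print(f"{texel_density(2048, 200):.2f} px/cm")   # 10.24

# Given that density, the fraction of a 2048 map a face of a given width should occupy:
def uv_span(width_cm, texture_px=2048, px_per_cm=10.24):
    return width_cm * px_per_cm / texture_px

print(uv_span(100))   # a 1 m wide face spans 0.5 of the UV space
```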

Baking Phase 

To save time when importing into Marmoset, it is very important to use good nomenclature. I usually use the Marmoset Quick Loader: with this tool I simply name each part of the low poly model with the suffix “_low” and give every part of the corresponding high poly the same name with the suffix “_high”, optionally followed by a suffix such as “_part1” that indicates which part of the high poly it is.

E.g.: object_low, object_high, object_high_variation1, object_high_variation2

This way we can export from 3ds Max all the models that we are going to bake in the same texture map and, provided we use this nomenclature when importing it, Marmoset Quick Loader will automatically group all the bake groups in the scene correctly.
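
Purely as string logic (independent of Marmoset itself), grouping exported mesh names by that convention looks roughly like this; the names are the example ones above:

```python
# Sketch: group exported mesh names into bake groups using the naming convention above.
from collections import defaultdict

names = ["object_low", "object_high", "object_high_variation1",
         "object_high_variation2", "panel_low", "panel_high"]

groups = defaultdict(lambda: {"low": [], "high": []})
for name in names:
    if "_high" in name:
        groups[name.split("_high")[0]]["high"].append(name)
    elif name.endswith("_low"):
        groups[name[: -len("_low")]]["low"].append(name)

for base, parts in groups.items():
    print(base, parts)
```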

Another advantage of Marmoset is how easily the bake cage can be adjusted with the “Max Offset” slider. We can also control the direction in which the object's normals are baked with the Paint Skew function. This is very useful because details on flat surfaces should be painted with a completely black value to force the bake direction to be perpendicular to the surface; otherwise, unwanted distortions may appear in the bake.

Trying to save as much time as possible, my procedure is to initially bake only the normal map with low-quality parameters and a very low resolution. Then I adjust the cage and the skew perfectly, and I fine-tune the parameters using a resolution usually of double the target resolution that I’m going to use in the engine:

Let's talk about the maps I bake in Marmoset:

Normal map: In my case, I flip the “Y” channel because my maps are destined for Unreal Engine, which by default works with DirectX-style normal maps. Conversely, for Unity we should not flip this channel, because it works with OpenGL-style maps by default.
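
If a map ends up baked with the wrong convention, the fix is simply inverting the green (Y) channel; a hedged Pillow sketch (file names are examples):

```python
# Sketch: convert an OpenGL-style normal map to DirectX-style (or back) by
# inverting the green (Y) channel. File names are examples.
from PIL import Image, ImageOps

img = Image.open("T_Wall_Normal_OpenGL.png").convert("RGB")
r, g, b = img.split()
Image.merge("RGB", (r, ImageOps.invert(g), b)).save("T_Wall_NRM.png")
```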

Ambient Occlusion: When baking the Ambient Occlusion, I only wanted to generate each object's occlusion on itself (the general occlusion will be added in the engine), so it is important to deactivate the “Floor Occlusion” and “Ignore Groups” parameters in this map's configuration.

Thickness: This map is necessary when texturing in Substance Painter, it is used by many of the generators that I will use later.

Material ID: This map is very useful for creating masks and texturing different materials on parts that only exist in the high poly and are flat on the low poly, where they are only represented in the normal map.

In 3ds Max, a different material is assigned to each material that we want to create later in Substance Painter (one for steel, one for orange paint, one for white paint…) before exporting the models for baking with the “Embed Media” option activated in the export settings.

If we don’t want to lose the color that we have assigned in 3ds Max, we can obtain this map by baking it like the Albedo map in the Marmoset Toolbag. If we simply bake Material ID, Marmoset will assign random colors.

Curvature: This map is very useful when we are going to make the textures to mask the edges of our objects. The result of the Marmoset map is usually a fairly thin border, so if we want a mask with thicker edges we can also bake the map that will give us this result in Substance Painter. I usually bake this map in both software packages to choose which one of the two curvature maps gives me a better result depending on what type of mask I want when texturing in Substance Painter.

Height: This map is only used in case we want to use tessellation or parallax occlusion inside our engine. I used parallax occlusion on a couple of elements.

World Space Normal & Position: These two maps are used internally in Substance Painter when using generators or smart masks and I bake them in Substance Painter without the need to add a high poly.

Texturing Phase

To start texturing, it was very important to decide from the beginning which color palette I was going to use. So, I did several tests, adding base colors from different palettes to some of my modules and seeing which one worked best. In the end, I opted for this palette.

All along, it is a good idea to take the model into the engine (Unreal in this case) to check our textures on the target platform as we advance, since there are always subtle differences in the way each piece of software interprets textures.

Next, I will explain step by step how I worked on one of my textures:

Base material definition: This part, although it may not seem so, is one of the most important in the process. A good definition of the base material gives us the desired credibility. In this base material, I usually adjust the color, roughness, and metallic behavior, and then build the wear on top of it.

When generating the wear of our materials it is important to be clear about what has happened to them, that is, wear has to tell a story. I decided that my spaceship had previously stopped at some earth-like wild planet, therefore there are dust and sand traces left by the passengers walking down the corridor. It is also important to analyze how materials work in reality. For this, it is very useful to look for real references, and although the hall is a sci-fi ship, we can always find materials and objects that would have similar wear in reality. I chose to look for references in industrial machinery to see how surfaces wear out and how dirt is distributed.

We begin to build the wear of our material:

Scratches and peeling on painted surfaces: To achieve this level of deterioration, I created the metallic material that would sit under the paint and masked it with the Metal Edge Wear generator. For more manual control, I finally added a paint layer in subtract mode and painted white values with an appropriate brush (dirt brushes, for example) to erase by hand the metal areas that didn't convince me.

Welding: To create the welding effect, I created a fill layer of the base material with the desired base color, a fairly high roughness, and a minimally increased height parameter to put some information into the normal map. I set the metallic value to 1 since it is a metal. In the areas where I wanted to simulate a weld under the paint rather than directly on the metal, I simply used a layer with a higher height value without changing any other parameter. To create the weld itself, I used the Substance Painter welding tool, which with some care and a correct base material can give very good results.

Rust: To create the rust effect, I used a base material with the desired base color and, again, quite high roughness. This time the metallic value is set to 0, since metal loses its characteristic metallic reflectance when it oxidizes, and making it metallic would be a misreading of PBR. To create the mask, I used the curvature map generated in Substance Painter with a grunge map in linear dodge (add) mode to add rust in areas the curvature map did not cover. Then I used a couple more grunge maps in subtract mode until I got approximately the desired amount. Finally, as usual, I added a paint layer in subtract mode to remove the rust from the areas where I didn't want it.

Dust: This time I used the dust mainly to create variation and contrast on the roughness map. I decided to make two layers, one softer general dust and one with more dust accumulation in specific areas such as joints and recesses where it would mostly accumulate.

In the softer dust layer, I used a fairly dark base color so that it was not very visible and a medium-high roughness value, and I set the metallic value to 0. To generate the mask, I used a dirt map from the grunges section, adjusted it with a Levels filter to add a bit more contrast, and finally added one of the BnW Spots maps in multiply mode to break up the homogeneity of the mask.

In the specific dust layer, I used a lighter colored base than the previous one to make it more visible, a very high roughness value, and also brought the metallic to 0.

To create the mask, I started with an imperfections map downloaded from Quixel, where you can find very good maps to use as the basis for masks. On top of it, I added a Levels adjustment to increase the contrast, and above that, the Mask Editor generator with some tweaked parameters in multiply mode, so that white values only remain in areas with ambient occlusion (which is where dirt mostly accumulates). Over this result, I added a paint layer in normal mode so I could add or remove dust by hand with a dirt brush where needed.

Grease: Adding grease to my materials was very useful; it let me add quite low roughness values and get the most out of the available range on the roughness map. I set an almost black Base Color and a roughness quite close to pure black without reaching it, and made the material non-metallic. On this occasion, I only wanted it in very specific areas, so I used a paint layer inside the black mask to reveal the grease by hand where it interested me, and then added a very soft blur to soften the mask.

After my textures are finished, I always add a slight sharpen to the Base Color, either in Substance Painter or Photoshop; this way I get the false impression of slightly higher-resolution textures.

It is also a good habit to add the PBR Validate filter on top of all the layers we have created. It helps us check whether the textures use correct PBR values, showing the incorrect areas in red.

To optimize time, I created smart materials from the paint and metal materials. This way, it was only necessary to change the base color of the paint and repaint the layers I had painted previously by hand in each model.

PREPARATION OF TEXTURES FOR USE IN UNREAL ENGINE 

An excellent way to reduce the engine cost of the textures is to pack them together, greatly reducing their number. We can pack textures that work in grayscale (Roughness, Metallic, Ambient Occlusion, Height, Emissive, Opacity...) by placing each of them in one of the Red, Green, or Blue channels, or even the Alpha channel if necessary. For this project I packed my textures so that the roughness map goes into the red channel, the metallic into the green channel, and the AO into the blue channel (RMA), and used the alpha channel when necessary for the height map (for the parallax occlusion) or the emissive map in the materials that needed it.

This process can be done by creating an export profile in Substance Painter or manually mixing them in Photoshop. Here is how to do it in Substance Painter:

It is useful to name the normal map with the suffix “_NRM”; this way, when importing it into Unreal, it will be recognized as a normal map and all the necessary adjustments for this type of texture will be applied.
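
For the "manually in Photoshop" route mentioned above, the same merge can be sketched with Pillow; file names and the optional alpha map are examples:

```python
# Sketch of the RMA channel packing described above, done outside Substance Painter.
from PIL import Image

size = (2048, 2048)
rough = Image.open("T_Module_Roughness.png").convert("L").resize(size)
metal = Image.open("T_Module_Metallic.png").convert("L").resize(size)
ao    = Image.open("T_Module_AO.png").convert("L").resize(size)

packed = Image.merge("RGB", (rough, metal, ao))   # R = roughness, G = metallic, B = AO

# Optionally pack a height or emissive map into the alpha channel:
# height = Image.open("T_Module_Height.png").convert("L").resize(size)
# packed = Image.merge("RGBA", (rough, metal, ao, height))

packed.save("T_Module_RMA.tga")
```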

Importing Assets to Unreal Engine 

When importing meshes into Unreal, it is important to add the prefix "SM_" so that Unreal recognizes our meshes as static and configures them appropriately. I have used the following configuration:

In addition, I generated some very simple collision meshes in 3ds Max for each module, which must be convex. With the appropriate nomenclature, "UCX_[RenderMeshName]_##", and the import collision mesh parameter activated, the collision is added to our model automatically on import, so we don't have to generate it within Unreal and the triangle count stays more optimized. It is also important to think about the pivot point of our objects and modules from the beginning of the modeling: it must be in a logical place so the object is comfortable to place in Unreal. In the case of a module, for example, we can snap it to a lower corner as early as the blocking phase. This way, we can re-export our models and, as long as the pivot point is still in the right place, the mesh will refresh itself in the scene without losing its position.
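
A small sanity check of that collision naming can be done with plain string matching; the mesh names below are examples (the last one is deliberately misnamed):

```python
# Sketch: validate the UCX_ collision naming convention before exporting.
import re

scene_meshes = ["SM_Wall_A", "UCX_SM_Wall_A_01", "UCX_SM_Wall_A_02",
                "SM_Floor_A", "UCX_Floor_A_01"]   # the last one won't match

render_meshes = [m for m in scene_meshes if m.startswith("SM_")]

for col in (m for m in scene_meshes if m.startswith("UCX_")):
    ok = any(re.fullmatch(rf"UCX_{re.escape(r)}_\d+", col) for r in render_meshes)
    print(col, "->", "matches a render mesh" if ok else "no matching render mesh")
```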

Importing Textures to Unreal Engine

When importing the textures, I only had to worry about changing the compression settings to “Masks (no sRGB)” on the RMA map so that each channel would be treated as a separate grayscale map. For all the textures imported without anything in the alpha channel, I activated the “Compress Without Alpha” parameter. Since the normal maps are named with the "_NRM" suffix, the correct settings are applied on import and we don't need to change anything.
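
The same settings can also be applied in bulk with the editor's Python API; a hedged sketch, assuming the Python Editor Script Plugin is enabled and using an example asset path:

```python
# Hedged sketch (Unreal Editor Python API): set the packed RMA map to the
# "Masks (no sRGB)" compression and turn sRGB off. The asset path is an example.
import unreal

tex = unreal.EditorAssetLibrary.load_asset("/Game/Textures/T_Module_RMA")
tex.set_editor_property("compression_settings", unreal.TextureCompressionSettings.TC_MASKS)
tex.set_editor_property("srgb", False)
unreal.EditorAssetLibrary.save_loaded_asset(tex)
```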

Hall Assembly in the Engine

To place each module by hand precisely, the module dimensions must fit exactly, as I explained in the blocking phase, and the grid snap in Unreal must be set to a value that suits us (in my case, I set it to 50 cm). Ideally, the assembly should be carried out from the start, that is, from the blocking, so we can simply re-import the models as we go and get a sense of how the scene will look from the beginning. In addition, this makes sure we don't run into problems with module sizes and avoids bad surprises later.

I made variations for some of the modules such as the wall with the hydraulic arms (I placed the arms in different positions) and the pillars (in some of them, I opened the door where the medkits are kept).

We can check the complexity of our shaders with the Shader Complexity view mode in the Unreal viewport; it is a good way to see whether any of our elements should be optimized. Objects with transparency and particle systems will always be the most expensive items for the engine.

Creating Useful Materials and Tools

CREATION OF THE MASTER MATERIAL

I chose to create a master material that would serve most of my modules, exposing some parameters so they can be adjusted in each material instance.

CREATION OF BLUEPRINT TO GENERATE CABLES 

When thinking about how to solve the wiring, I realized that making unique models for each and every cable I would need was not feasible, so I decided to make a blueprint to generate the wiring within Unreal. This way, I only needed to import a cable segment with enough geometry to bend, plus a few small textures, since the blueprint deforms the segment along splines inside Unreal, automatically creating instances of the cable segment. We simply drag the blueprint into the viewport and tell it which mesh to use and along which axis it is oriented.
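
The blueprint itself is a node graph, but the core idea (one bent instance of the segment mesh per stretch of the spline) can be sketched roughly as follows; the numbers are illustrative and this is not the author's actual graph:

```python
# Illustrative sketch of what the cable blueprint automates: split a spline's
# length into sections and place one bent segment-mesh instance per section.
# In Unreal this would be a construction script using SplineMeshComponents.
import math

def cable_sections(spline_length_cm, segment_length_cm=100.0):
    count = max(1, math.ceil(spline_length_cm / segment_length_cm))
    step = spline_length_cm / count
    # each tuple is the (start, end) distance along the spline for one instance
    return [(i * step, (i + 1) * step) for i in range(count)]

for start, end in cable_sections(430.0):
    print(f"segment from {start:.0f} cm to {end:.0f} cm along the spline")
```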

Scene Lighting

To start lighting the scene, it is very important to consider the following steps:

We must create a Post Processing Volume that affects the entire scene and deactivate the auto exposure parameter so that it does not alter the scene and we can see our real lighting. It is also quite useful to initially turn off Screen Space Reflections and Screen Space Ambient Occlusion, and we must disable the Vignette effect and the Bloom. For static lighting to affect our dynamic objects, such as doors that open, we must adjust the Volumetric Lightmap Detail Cell Size in the Lightmass settings: a smaller cell size gives a greater number of lighting sample points and therefore more lighting detail.

We add a Lightmass Importance Volume that covers the entire environment. To get reflections in our scene, we add a few Sphere Reflection Captures scattered around it.

This way, we will be able to see our lighting without variations due to the post-processing effects that may vary our perception of the final result.

It is also important to adjust the lightmap size so that it is homogeneous throughout the scene. We can do it from within the properties of each mesh, increasing or decreasing the resolution of the map, and using the lightmap density viewport that will show us in red the parts with excess density, and in blue the parts with lack of density. Ideally, all lightmaps must be as green as possible.

The first points of light that I added were those that were emitted from logical points, that is to say from luminaires. In my case, I had some circular ones located in the central part of the corridor ceiling and another rectangular type on the side of the ceiling module.

The circular luminaires are Spot Lights with a slightly warmer light color, and the rectangular ones are Rect Lights with a colder light color; all of them are static lights with the following configuration:

Later, I adjusted the emissive of the modules so that they affected the static lighting, except for the doors that weren’t going to be static (adjusting the emissive boost for each case) as follows:

Finally, I noticed that the lower part of the lateral areas of the corridor was dark. This is not necessarily bad; there is no need to be afraid of modules that are not fully lit if it artistically produces the desired effect. But in my case, I decided to add a few Rect Lights pointing towards the modules to give those parts more light and partially break up the penumbra.

Lastly, I decided to make some flickering lights to create a creepier atmosphere and, incidentally, give the scene more dynamism. I did this by creating a light function that blinks the lights, and a material that also blinks the emissive of the luminaires where these flickering lights are placed.

This is the light function material, in which I exposed some blink-speed parameters so I could create instances and modify the speed, making the lights blink out of sync.

This is the master material in which I added the emissive blink function where I also exposed the same parameters to have the same control as in the light function.
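
The actual blink lives in the light function and emissive materials, but the underlying on/off curve can be sketched as a simple function of time; the speed and threshold mirror the exposed parameters, and the noise below is just a stand-in for the material graph:

```python
# Illustrative sketch of a flicker curve: a 0/1 on-off state derived from time.
# Speed and threshold stand in for the exposed material parameters.
import math

def flicker(time_s, speed=7.0, threshold=0.35, seed=0.0):
    """Return 1.0 (on) or 0.0 (off) for a given time."""
    t = time_s * speed + seed
    # cheap pseudo-noise: fractional part of a scaled sine, a common shader trick
    noise = math.sin(t * 12.9898) * 43758.5453
    noise -= math.floor(noise)
    return 1.0 if noise > threshold else 0.0

for frame in range(10):
    print(frame, flicker(frame / 30.0, seed=1.7))  # a different seed per instance desynchronizes lights
```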

These are the settings that I used in the Lightmass for my final build of lights since, after performing a few tests, they were the ones that gave me the best results without endless waiting times.

I’m especially grateful to my classmate and friend Iñigo Cebollero Orella for his help in the lighting phase.

Using Real-Time Ray Tracing

To use the Ray Tracing features, we have to change the Default RHI to DirectX 12 in the Platforms section of the Project Settings and tick the Ray Tracing box in the Rendering section. This is the configuration I used for Ray Tracing in the Post Process Volume.

Below I'll show the comparison of the following results: Ray-Tracing Reflections vs. Screen Space Reflections, and Standard Ambient Occlusion vs. Ray Tracing Ambient Occlusion.

Exponential Height Fog and Particle Configuration

To make the scene more interesting I decided to also add a very subtle ExponentialHeightFog volume with a bluish Inscattering color.

The Exponential Height Fog is a very useful volume inside Unreal and, if we do not abuse it, it can help make our scene more interesting, since it generates variation by interacting with the lights.

For our lights to affect the ExponentialHeightFog we must activate the Cast Volumetric Shadow box in the settings for each light.

I also added a few native Unreal particles of dust and smoke that come out between the floor and the walls to make the scene more dynamic.

Taking Screenshots from The Scene

To take screenshots, I decided to use the Nvidia Ansel plugin, which can capture very high-resolution shots without Unreal crashing. They can later be downscaled to 4K, which avoids the jagged edges we would get by capturing directly at 4K. In addition, this plugin allows us to create 360º images. To use it, we must activate it in the Plugins window, play the scene in Standalone Game mode, and press Alt+F2; the Ansel window will then open with all its options ready to use.

Alberto Catalan Gallach, 3D Artist

Interview conducted by Arti Sergeev
