Fredrik Maribo prepared an incredible breakdown of his Sci-Fi Environment made with Unreal Engine 4 and shared useful resources. Among other tools, he used Maya, ZBrush, Marvelous Designer, World Machine, Substance Painter, and Photoshop.
Hi, my name is Fredrik Maribo and I recently finished my final year of a bachelor’s in computer games art at Teesside University, UK. Since my aim is to become an environment artist in the games industry, I decided to create a real-time environment for my final project. The software I used throughout the project included Maya, ZBrush, Marvelous Designer, World Machine, Substance Painter, Photoshop and Unreal Engine 4.
Aims of the Project
The technical aim of this project was to create a semi-realistic game environment using up-to-date industry practices: modular workflows, smart materials, and detail maps. The visual inspiration derived from games like Wolfenstein II, Star Citizen, Doom, and Call of Duty: Infinite Warfare.
My personal aim was to create a game environment that communicated a world rather than just a scene. I wanted my visuals to provoke curiosity about the rest of that world, maybe even help viewers spark some fantasies of their own to fill in the blanks. This approach eventually had me scoping pretty big, and it became quite a large environment.
The reason behind my sci-fi interior choice was my lack of experience in hard surface modeling. I also wanted to challenge myself with a more complex composition using fixed architectural metrics, which is why I eventually chose the circular nature of this environment.
With the sci-fi theme in place, it was time to find a location and develop a solid visual direction. To give me a decent update on the latest visual trends, I created Pinterest boards to collect images from current gen sci-fi games like Wolfenstein II: The New Colossus and Star Citizen.
The seemingly limitless access to reference can easily become overwhelming. I soon found myself devouring images to help grow my visual library and knowledge about everything surrounding the theme. I had collected 622 images sorted into 17 sections, including cables & hoses, robots, terminals, panels, pipes, fusion reactors, etc. Studying these images not only expanded my visual library, but it also helped me understand how details' roles and functionality affect their visuals and how I could exploit that for my own designs.
Due to its popularity amongst environment artists, I decided to use Marvelous Designer for creating insulation, tarpaulin, and other cloth assets for the scene. To make sure learning the program would be as fast and straightforward as possible, I decided to invest in a course called ‘An Introduction to Marvelous Designer’ (Schaika, 2017). The 22-video class goes through all the fundamentals and best practices of creating high-quality cloth simulations, including how to optimize them for games. To my surprise, the program resembles real-life tailoring in a very practical fashion, which allowed me to easily apply some of my own tailoring experience to the cloth simulation and pattern cutting.
For the hard surface part, I decided to investigate the new ZBrush 4R8 update and its live Boolean features. This would change my regular low to high poly workflow to a reverse, high to low. My goal would be a faster hard surface asset production while maintaining high-quality bakes. A tutorial series by Michael Pavlovich called ‘Sci-fi Weapon Process’ (Pavlovich, 2017) caught my attention as it highlights 4R8’s new features under the lens of hard surface modeling. The course introduced tips and tricks that fundamentally changed my workflow.
At the blockout stage, all I cared about was primary shapes and overall composition: speaking my mind with shapes as fast as possible. This had to happen rapidly, much like sketching early 2D concept art. When spending 12 weeks on an environment, I want to make sure my foundation can speak for itself from the very beginning.
The top left image above shows a blockout of a cryochamber concept. The top right image is from an airship design, and the one on the bottom left is a room mimicking traits from fusion reactors. These are 3 of the 7 concepts I made before picking the power chamber shown on the bottom right.
The power chamber idea had an interesting centerpiece and room for open panels with exposed insulation and tarps, and it had enough verticality to play around with hanging cable clusters and messy pipe installations. I could even knock down a wall and open it up for an exterior vista if time allowed.
Below is a collection of images showcasing the early evolution of my greybox blockout. The last image was my playful attempt to visualize an avalanche of snow invading the scene; I later abandoned that approach.
I later blocked out two new concepts for the centerpiece. I had sketched these designs on paper, so the next step would be to test the ideas in 3D space to make them more approachable for feedback. Below you can see that the concept on the right also has a paintover. Working fluidly between modeling and drawing allows me to add detail to my concept and rapidly play around with different ideas with the perks of both practices. My final centerpiece ended up as a hybrid between these two designs.
Since the environment was going to run in real time inside UE4, modularity would become an important tool for flexible set dressing and for saving time and memory.
Before producing assets, I had to plan out the spaces and break them down into modular asset kits. Since my scene contained a mix of circular and angled architecture, I had to make asset kits that fit both patterns, or one for each, and make sure they had intersection pieces to account for the trouble areas where floor or wall models designed for circular use would collide with straight walls or floors. I started experimenting with different wall modules and pillars for trim pieces (picture below). The walls would be straight, but with help from the pillars in between, it would be possible to place them at an angle with the pillars covering the gaps.
I created the larger circular shapes like the suspended roundabout (picture above), pipes, fences, and the supporting metal girders with tiling UVs in mind. All my circular assets started off as a straight, 2-4m sample depending on texel density. After the sample was unwrapped and the UVs tested for seams, I duplicated the piece to make it long enough to wrap around a circular spline the size of my target, in this case the roundabout. To prevent my mesh from rescaling under the bend deformation, I had to know how long my duplicated strip would have to be. To answer this, I needed to find the distance around my circle, or its circumference, which is found by multiplying the circle's diameter by Pi (C = πD).
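As a quick sanity check, that length math can be sketched in a few lines. The helper below is hypothetical (not part of the original pipeline) and only illustrates the C = πD step:

```python
import math

def strip_copies(diameter_m, sample_length_m):
    """How many copies of a straight sample strip are needed to wrap
    a circle of the given diameter without rescaling under the bend."""
    circumference = math.pi * diameter_m               # C = pi * D
    copies = math.ceil(circumference / sample_length_m)
    return circumference, copies

# e.g. a 10 m wide roundabout built from 2 m samples
c, n = strip_copies(10.0, 2.0)
print(f"{c:.2f} m of strip, {n} copies")  # 31.42 m of strip, 16 copies
```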
For set dressing pieces, I had to make an asset list to constrain myself from creating redundant assets. I broke down my environment into 3 levels of detail: primary shapes (architecture), secondary shapes (medium props), and tertiary shapes (clutter, decals & small set dressing assets). I started on the largest shapes and worked my way down to constantly remind myself of the importance of the larger shapes, since they were to cover most of the screen.
My pipe kit (Picture above) was a very influential part of my secondary detail pass, as it could be used to cover ugly intersections, break up boring repetition and decorate naked areas. For this to work it had to be a very versatile modular kit.
Another kit of secondary detail assets was the support girder kit (picture below). This kit had two texture sets: one rusty texture for my initial gritty and deteriorated look, and another clean texture, which I ended up using in the end. I also used these models for my suspended roundabout by deforming rows of girders to form a circle supporting the walkway.
For modeling the more organic shapes of my environment, I wanted to utilize my newly adopted high-to-low-poly workflow with ZBrush. An example of this would be the pipe kit's end cap model (picture below). Even though this mesh could arguably be made quickly the traditional way, I am glad I put it through this pipeline for the sake of research, since it turned out looking good, and in a very short amount of time.
On this model, I started with a cylinder in ZBrush and toggled radial symmetry to shape it like ceramic on a lathe. I used ClipCircle while holding Alt to extrude the bolt chambers, then inserted the bolts with an Insert MultiMesh tool with radial symmetry set to 8. For the retopo, I used Decimation Master without ZRemeshing the object first; this way, the crisp edges of the mesh wouldn't go to waste. The modeling process took under 4 minutes in this case. The topology ends up looking ugly, but as long as the model doesn't deform and the bakes are clean, it shouldn't cause a problem. This workflow seems to be a popular topic amongst 3D artists lately, as Houdini, ZBrush, and other modeling software solutions are introducing ways to automatically generate low-poly versions from high-poly sculpts.
Two artists at Double Negative, Henning Sanden and Morten Jaeger, made a good video called “Is Good Topology Overrated?” (FlippedNormals, 2018) explaining how non-deforming geometry benefits from generated retopology within ZBrush.
Michael Pavlovich has also introduced the Houdini Game Dev Toolset (Pavlovich, 2018), demonstrating a workflow that lets you generate a low-poly model, LODs, UVs, and much more.
Texel density became an important factor given the scale of the environment. I initially decided on a base texel density of 1024, i.e. a 1024×1024 texture per square meter. From there, I adjusted my texture resolution according to the amount of detail and how close the camera would be to the asset. An example of this would be the centerpiece, where the part closest to the floor has the largest texture space. The lower parts of the centerpiece are also the parts with the most detail, giving another reason to increase texel density. The principles I used for controlling texel density are also described in Leonardo Lezzi's article ‘Texel Density/Pixel Ratio’ (Lezzi, 2016).
Maya has a simple feature to control texel density (image below) that allows the user to adjust, copy and paste texel density between UV Islands. I found this very helpful when controlling texel density throughout my environment.
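To make the texel-density principle concrete, here is a small hypothetical helper (not a tool from the project) that picks the nearest power-of-two texture size for an asset's footprint at a base density of 1024 pixels per meter:

```python
def texture_resolution(asset_size_m, texels_per_m=1024, max_res=4096):
    """Pick the smallest power-of-two texture size that covers the
    asset's largest dimension at the target texel density, capped
    at a maximum resolution."""
    needed = asset_size_m * texels_per_m
    res = 64
    while res < needed and res < max_res:
        res *= 2
    return res

print(texture_resolution(1.0))  # 1024 for a 1 m asset
print(texture_resolution(3.0))  # 4096 for a 3 m asset (capped)
```

The same per-asset adjustment the text describes (more resolution for hero pieces near the camera) is just a matter of raising `texels_per_m` for those assets.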
I used trim sheets for seamless repeating geometry such as cables, tubes, floors, and pipes. With the trim sheets in place, I could model hanging cable clusters without worrying about texturing as I could easily straighten the UVs to a long strip, set the correct texel density and then adjust the UV coordinates to match the cable texture UV space.
Another way I made cable clusters was by deforming a duplicated strip of cables along a curve, then I removed the geometry outside that curve. I also removed redundant geometry such as edge loops that didn’t create any prominent angles. I can also apply several UV channels on the cable meshes for easy texture swaps.
When texturing in Substance Painter, I always want to make sure I can reuse my materials throughout my project. It can be tempting to use paint layers for texture variety instead of baked masks, or even smart masks, and I see this too often with beginners. They deviate from the standards before even learning the basics. Substance Painter is designed for a smart material workflow for very good reasons: it's faster, it gives a more natural look when using generated and baked masks, it's great for continuity, and over time you will build your very own material library.
A good example of a simple yet effective smart material is the tempered steel material shown below. To create it, I made a layer group containing 6 fill layers, each representing a benchmark from the various colors steel produces between 400 and 730 degrees Fahrenheit. I then created a smart mask utilizing thickness, curvature, and ambient occlusion to paint discoloring where heat would be most prominent. I gradually reduced the global balance in each mask the warmer the layer would be. This way, each discoloring would have its own shade of grey to represent, giving place to the whole spectrum regardless of mesh.
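The spacing of those layers can be sketched as a pair of linear ramps: evenly spaced temperature benchmarks, with the mask balance decreasing the hotter the layer. The function and the balance endpoints below are illustrative assumptions, not the actual Substance Painter values from the project:

```python
def layer_balances(n_layers=6, t_min=400, t_max=730, b_cold=0.8, b_hot=0.2):
    """Return (temperature_F, mask_balance) per fill layer: temperatures
    spaced evenly across the tempering range, balance falling linearly
    so each hotter layer claims a narrower band of the baked mask."""
    layers = []
    for i in range(n_layers):
        t = t_min + (t_max - t_min) * i / (n_layers - 1)
        balance = b_cold + (b_hot - b_cold) * i / (n_layers - 1)
        layers.append((round(t), round(balance, 2)))
    return layers

for temp_f, balance in layer_balances():
    print(temp_f, balance)
```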
Since this smart material lacks a base color to fill the background, I can combine it with any metal material I choose. In the picture below, you can see I even used custom versions of my tempered discoloring material to melt the battery plastics, discolor the copper and even discolor the paint.
I used World Machine (WM) to create the mountains since the program can easily generate useful texture masks for height, slope, sediment flow, and erosion. For those unfamiliar with WM, it's a node-based terrain creation tool with a workflow similar to Substance Designer. When texturing the mountains, I masked out erosion, slope, and sand flow information from the erosion node to export as grayscale masks, later compiled into one RGB texture image used inside Unreal Engine 4. I then used those masks to blend between 3 tiling mountain material functions (1 sand and 2 rock). This gave me the ability to control how much sand to leave between erosions and adjust the maximum angle at which sand can appear, in a non-destructive way. I could have used the height map as a seed for a landscape for all its perks, but this time I kept it modular by exporting the mesh from WM and decimating it in ZBrush.
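The blend itself boils down to chained lerps driven by the packed mask channels. The toy function below is a simplified scalar stand-in for the UE4 material graph (the channel assignments are assumptions for illustration): the first channel blends the two rock materials, the second pulls toward sand.

```python
def lerp(a, b, t):
    """Linear interpolation between a and b by factor t in [0, 1]."""
    return a + (b - a) * t

def blend_materials(mask, rock_a, rock_b, sand):
    """Blend three material values using a packed RGB mask texel:
    R (erosion) lerps between the two rocks, then G (sand flow)
    lerps the result toward sand."""
    erosion, sand_flow, _slope = mask
    rock = lerp(rock_a, rock_b, erosion)
    return lerp(rock, sand, sand_flow)

# half-eroded rock with a quarter layer of sand on top
print(blend_materials((0.5, 0.25, 0.0), 0.0, 1.0, 2.0))  # 0.875
```

In the real material, the slope channel would additionally clamp where sand may appear, which is what makes the maximum sand angle adjustable non-destructively.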
For the skybox, I wanted the planets' illuminated sides to face the sun correctly no matter where I placed them. This could easily be done with an opaque planet sphere, each lit by a spotlight directed from the correct perspective. But an opaque planet would block the visuals of the atmosphere, the exponential height fog, the clouds, and the sky, revealing that the planet was actually positioned inside the atmosphere (picture below).
To fix this, I created a translucent material (picture below) that used a VertexNormalWS (world-space vertex normal) node as a mask to control the opacity direction in world space. I also mixed it with a Fresnel to add a broader eclipse. Multiplying the VertexNormalWS with a color allowed me to change the direction of the opacity over a 180-degree range; since a color can't hold negative values, I added a OneMinus switch to toggle the opposite side if needed. This translucency created the illusion that the atmosphere was in front of the planets and not the other way around.
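The masking math can be sketched in scalar form. This is a simplified stand-in for the node graph, not the actual UE4 material: the normal is dotted with a chosen direction, clamped to [0, 1] as a color would be, and a OneMinus-style flip selects the opposite hemisphere.

```python
def normal_opacity(normal_ws, direction, flip=False):
    """Opacity from a world-space normal: dot it with a direction
    vector, clamp to [0, 1] (a color can't go negative), and
    optionally apply a OneMinus flip for the opposite side."""
    d = sum(n * v for n, v in zip(normal_ws, direction))
    opacity = max(0.0, min(1.0, d))
    return 1.0 - opacity if flip else opacity

print(normal_opacity((0, 0, 1), (0, 0, 1)))         # facing: 1.0
print(normal_opacity((0, 0, -1), (0, 0, 1)))        # away: 0.0
print(normal_opacity((0, 0, -1), (0, 0, 1), True))  # flipped: 1.0
```

Mixing this with a Fresnel term, as described above, is what softens the hard hemisphere boundary into a broader eclipse.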
The emissive street lights of the city-planet were created by using a roadmap of Moscow as a base mask, then I multiplied that layer with a paint layer to start adding circular highlights.
A master material (picture below) was created to provide cheap and fluid customization of all standard materials in the environment. I created material functions for tiling detail maps inside the master material. An AO-masked Grime function gave the textures in the scene a more uniform look and helped push the dry climate look on the planet a little further. A Grunge material function was also added for the roughness channel; it allowed me to add tiling detail maps to roughness, like footprints and fingerprints, dust layers, wiping patterns on the floor, and wipe marks on windows, glass, and screens.
When creating modular kits, I often imported the unfinished versions into Unreal Engine for testing to ensure that the kit worked as it should. During the production of the wall & panel kit, I imported the block-out into Unreal Engine and tested the unfinished kit by building a basic hallway. This hallway worked well, so I left it there for the final scene (picture above).
Spline Blueprints were a big part of set dressing, as they allowed me to quickly install tubes and wires without leaving the engine.
Since the factories in the backdrop of my scene were so far away, they would appear very small and have most of their detail washed away by the height fog. This allowed me to play with the silhouettes without worrying about low texel density, seams, or awkward intersections, so I allowed myself to use untextured block-out models and smaller props to shape the factories (picture below). My ruthless kitbashing let me experiment with a vast variety of shapes and styles, and I ended up creating 3 carefully placed factories around the exterior.
The assets used above are alternative fusion cores and some untextured low-poly pipes and connection pieces from the main pipe kit. The red lights come from a simple emissive material applied to polygon triangles facing the environment. These assets were later texture-swapped for optimization.
The following images show the different lighting moods I tried throughout the course of the project. The first 4 images show a lighting pass that was very theatrical and dramatic compared to the more realistic approach in the end result.
The last 4 images (below) display a lighting scheme inspired by Doom 2016. The images also contain the two early concepts of the centerpiece. I’m sure you can tell I had a lot of fun playing with colors, moods, and lighting on this project.
A movable directional light was used as sunlight to create sharp shadows across the interior of the scene. I placed light bounce cards (check the video below) outside the windows to create a softer ambient light. This had a remarkable effect in places where the directional light couldn't reach.
For post-processing, I used a post-process material to sharpen the image. I used IES profiles to get a more realistic look on my spotlights, and the contrast between warm and cold lights was toned down and referenced against the Kelvin color temperature scale. Since the textures were a bit too saturated, a global desaturation in the post-processing gave the scene a more uniform and realistic look. It also helped create a dry and dusty feel.
To match the dry and dusty atmosphere of the exterior environment, I made sure the skysphere had horizon and cloud colors matching the sand from the mountains. This would make the air look contaminated with unsettled dust. I decided to go for a dark brown zenith color to give the impression of a thin atmosphere with inhospitable air.
To keep the scene optimized, I distributed my lightmap density sparingly and only increased it in places where shadow artifacts could be seen. This, along with keeping a fierce eye on lighting complexity, can be key to maintaining good performance with an optimized lighting cost. I keep a general rule of not letting too many movable lights affect the same surface area, especially if that surface has an expensive material applied.
To be honest, I'm surprised I managed to pull this off. I realized early on that this would be a hard project for me to scope, and the feeling that I would never see this project finished was my main source of motivation for a long time. But looking back, there are many things I wish I had finished, changed, or done differently. My material setup has become quite messy, several assets are still in the block-out phase, and I never got the time to utilize my detail maps to the extent I hoped for. The environment could really use a VFX pass, and some of the assets, like the focal piece and insulation pads, could be more optimized in terms of topology.
The things I’ve learned during this project have pivoted me towards a wholly new way of creating 3D art in general. In the end, I am actually glad I overscoped. If I had settled for a simple corridor or decided to do fan art, I would never have pushed myself to where I am today. I might not want to right now, but I have a feeling that I will want to overscope my next project the same way I did this one.
Fredrik Maribo, 3D Artist
Interview conducted by Kirill Tokarev