Creating a Slaughterhouse in UE4: Part 2

Jay Cummings wrote a detailed article about the production of his recent project Slaughterhouse made in UE4. Part 1 covered the pre-production phase, software used, blockout, asset modeling, materials, and foliage. In Part 2, Jay discusses his approach to photogrammetry, lighting, post-processing, effects and additional materials included in the appendix.

I used Reality Capture and a Canon EOS 1300D to capture and process the photogrammetry elements within my scene. Reality Capture works by calculating the angles and locations of the camera in 3D space, then using high contrast areas of the images it’s given to create a dense point cloud. This cloud can then be further refined into a high poly mesh, even storing vertex color information that can later be extracted to form an accurate albedo map. It’s highly regarded as the fastest and most accurate software for photogrammetry, which is why I chose it for this project.

One area I used photogrammetry for is the creation of ground clumps – a process seen in games like Far Cry 5 and Hunt: Showdown. To start, I capture around 100 images from different angles and distances in RAW format, batch process them in Adobe Lightroom, and then feed them into Reality Capture to generate a dense point cloud, followed by a high poly mesh with embedded vertex colors.

From here, I use 3ds Max’s ProOptimize modifier to produce a mesh more appropriate for real-time use, going from 6 million polygons down to 900. I then use an FFD 3x3 modifier to round off the silhouette, making the clump easier to intersect with the terrain without leaving geometry floating above its surface. Texture information is then baked down and refined in Substance Designer using AO cancellation, sharpening, and color adjustments. Typically it’s best to be wary of ProOptimize, as the topology can require clean-up afterwards and still ends up triangulated. Normally I’d be more careful with this; here, however, I can justify it because the mesh doesn’t deform and shades correctly in-engine.

Implementing this in-engine required the creation of an optimized intersection blend shader, so I followed some resources online to create a Material Function that can be re-used between shaders. This technique relies on Mesh Distance Fields to determine when meshes are intersecting with each other. This then generates a parametrized mask that is blended alongside a noise texture sample. Then a pixel depth shift with dithering is applied to fade out the intersecting areas. This functionality is easily added to any master shader by plugging the function into the Pixel Depth Offset input.
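As a rough illustration of the math this kind of blend performs – not the actual material graph, and with function and parameter names of my own invention – here is a Python sketch: a linear mask from the distance-field sample is broken up by a noise sample, then compared against a dither threshold to decide which pixels fade out via depth offset.

```python
def intersection_blend_mask(distance_to_surface, blend_distance, noise, dither):
    """Approximate the mask a distance-field intersection blend produces.

    distance_to_surface: distance sampled from the Mesh Distance Field at the
        pixel's world position (0 at the intersecting surface).
    blend_distance: artist-tunable fade width (hypothetical parameter name).
    noise: a 0-1 noise texture sample that breaks up the hard gradient.
    dither: a 0-1 screen-space dither value.
    Returns 0.0 where the pixel should be faded out entirely.
    """
    # Linear mask: 0 at the surface, 1 once we're blend_distance away.
    mask = min(max(distance_to_surface / blend_distance, 0.0), 1.0)
    # Mix the gradient with noise so the seam isn't a straight line.
    mask = mask * (1.0 - noise) + noise * mask * mask
    # Dithered comparison: pixels below the threshold get pushed back
    # via pixel depth offset and disappear into the ground.
    return 0.0 if mask < dither else mask
```

In the engine this logic lives in a Material Function, so the per-material cost is just the distance-field sample and the noise lookup fed into Pixel Depth Offset.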

Additional uses of photogrammetry for assets and tileable materials can be found in the Appendix.

Lighting & Post Processing

The initial plan of having a warm scene juxtaposed with the gritty art direction fell through towards the beginning of the project. I switched to a much darker approach, as I felt it fit the art direction of Hunt: Showdown more closely, gave me more control over what the viewer sees, and let me highlight areas of interest and the potential paths the ‘player’ can maneuver through.

The first step was to ensure the World Settings are adjusted to allow for additional light bounces, higher indirect lighting quality, Distance Field Ambient Occlusion and more. These settings are lowered somewhat to ensure build times don’t take too long, but it’s worth keeping in mind for the final pass. I’ve also turned on ‘Generate Mesh Distance Fields’ in Project Settings to allow the use of DFAO and Dither-based Mesh Blending on the intersection.

Following this, I set up my Post Process Volume (set to Unbound) and locked the Auto Exposure min/max to 1.0. You can also un-check Eye Adaptation in Project Settings to disable the effect permanently.

As I’m handling an exterior scene, I need to designate a Lightmass Importance Volume to ensure I’m not baking large portions of the landscape that won’t be seen. Following this, Lightmass Portals are used in the larger opening to the building to boost the amount of Global Illumination on the interior.

I’m using Volumetric Fog within the Exponential Height Fog component to create a much more atmospheric, gloomy appearance. This affects the behavior of the lighting in the scene quite drastically, so blocking this in early was very useful.

To get more control over the lighting, I take advantage of the Inverse Square Falloff options, tweaking settings such as Min Roughness for how a light interacts with specific materials, or the Light Falloff Exponent for a harsher falloff. These options aren’t physically accurate, but they are useful for unlocking the full potential of individual lights.
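To make the trade-off concrete, here is a hedged Python sketch of the two falloff styles as I understand them – the windowed inverse-square curve UE4 uses by default, and the legacy exponent-based curve exposed when inverse square falloff is disabled. The exact formulas in the engine may differ slightly; treat these as illustrative.

```python
def saturate(x):
    """Clamp to the 0-1 range, as in HLSL."""
    return min(max(x, 0.0), 1.0)

def inverse_square_falloff(distance, radius):
    # Physically-based falloff, windowed so the light's contribution
    # reaches exactly zero at its attenuation radius.
    window = saturate(1.0 - (distance / radius) ** 4) ** 2
    return window / (distance * distance + 1.0)

def exponent_falloff(distance, radius, falloff_exponent):
    # Legacy, non-physical falloff: the Light Falloff Exponent gives
    # direct artistic control over how harshly the light dies off.
    return saturate(1.0 - distance / radius) ** falloff_exponent
```

With the exponent curve, raising the exponent pulls the light in tight around its source – the “harsher falloff” mentioned above – at the cost of physical accuracy.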

Reflection captures are used so that reflective surfaces render correctly. A planar reflection has also been placed on the puddles that litter the ground to achieve a high-quality reflection of the slaughterhouse, adding another point of interest across the various camera angles.


Wrapping the project up, I’ve reflected on all aspects of this environment’s creation, and there are definitely things I’ve learned along the way. Initially, I felt less strong in areas like ZBrush, Marvelous Designer, and tree/foliage creation, so I intended for this environment to push me to become more comfortable with the processes involved. The overall scope of the project was too large at the beginning, so I’m happy that I chose to focus on the exterior and only flesh out the interior if it proved feasible.

With ZBrush, I’ve become much faster in the sculpting stages, using all available brushes in my library to quickly draft out forms that I wouldn’t have attempted before. The boar is a prime example of this, so getting it finished to a decent quality is definitely an achievement that has come from this project.

Marvelous Designer was another skill that came from this project. Experimenting with this software was invaluable, as it delivers fast yet impressive results and supports importing custom avatars (meshes) for interaction during the simulation. That said, I feel like there is still a lot to explore with this program, so I will likely continue to learn the more advanced areas with my next personal project.

I owe a lot to my university supervisors for convincing me to author my own foliage for this project. Originally, my intent was to utilize Megascans and have immediate access to their scan library, however, I now have a practical understanding of the creation process and the various workflows involved, and will definitely use this knowledge with later projects.

There were some additional areas that I wanted to look into if more time were available, mainly mesh decals for optimal detailing and more unique decal work. Furthermore, I wanted to further push the visual storytelling of the piece with more unique assets that tie into potential characters within the scene.


Research Appendix

To further aid with lighting my environment, I looked at the popular Color & Light: A Guide for the Realist Painter by James Gurney. Despite its intended audience being painters, it contains full breakdowns of color properties, atmospheric effects, and lighting conditions. This gave me a solid understanding of where to start with lighting my structure and of how heavy fog diffuses light, and later informed how I re-lit the scene from scratch to create a moonlit look.

Finally, I spent a lot of time watching 51Daedalus’ Unreal 4 Lighting Academy videos on YouTube. He assesses existing environments and breaks down the good and bad areas in the lighting, then restructures it from the ground up while explaining his process. This was very valuable in understanding how lighting and color aid certain compositions and visual styles.

Given the number of corpses and organic pieces that make up this scene, I needed to ensure I had enough resources to work with to ensure correct anatomy and form. My supervisor directed me to a really informative Sketchfab page from Idaho Virtualization Laboratory, which contains scan data of skeletons/bones from hundreds of animals. Additionally, a blog from Jean Lafitte’s Swamp Tours that documents the animals of the swamps of Southern Louisiana helped in narrowing down what potential creatures could be created, whilst still maintaining a level of accuracy for the environment.

I utilized PureRef throughout this project to create reference boards for specific development areas. It is an extremely useful tool that lets you build a large canvas of high-resolution images, all whilst panning around, making transform/cropping adjustments and more, with little performance impact.

Furthermore, I looked to the artists involved in the development of Hunt: Showdown for inspiration. Throughout the project, I referred to the quality of their work in personal renders and in-game screenshots. Artists like Sebastian Stolaczyk, Ron Frölich, Lars Sowig, Matthias Wagner, Marcel Schaika, Ivan Tantsiura, Seid Tursic, Alvaro Canizares, Alexander Asmus, and Maren Gerbach were all included in my reference sheets, which helped me to better replicate and understand the art direction of the game. Other artists like Harley Wilson with his Hunt: Showdown fanart were also useful points of reference.

To prepare for the practical workload, I narrowed down the ZBrush brushes I’d be using for this scene, as a lot of organic sculpting was necessary.

These are the main brushes used, all serving their own purpose in the sculpting process. The only downloaded brushes are JRO Tools’ Wood Damages and Fredo Gutierrez’s free Wood & Bark brush pack.

To speed up my 3ds Max workflow, I worked with a few scripts that increase functionality and streamline certain processes that I use frequently. The biggest one is IFW Normals, an expanded face weighted normals script that has the inclusion of setting a coplanar threshold to achieve more accurate results. I also use a personal Drag and Drop Reference script that automatically sets up a chosen reference image in my viewport with correct dimensions, with an adjustable view angle.

For the creation of my trees, I looked into other attempts at the same tree due to the unique forms of the bald cypress. I stumbled upon Agnieszka Nogalska’s swamp environment which heavily influenced my project and the workflows I utilized. This was especially the case with her Bald Cypress breakdown post on Artstation.

Creation Appendix

I used ZBrush to create all of the organic elements of my scene, namely the corpses and gruesome details. I’ve gained a lot of knowledge with this program by tackling an exterior scene, so I gradually became more and more comfortable relying on ZBrush to add additional details to my assets. The boar was the biggest challenge as it is used frequently within the scene to convey the darker mood, as well as to adhere to Hunt: Showdown’s art direction. I also needed to make sure there were variations that could be re-used without clear repetition.

With a reference board to support the boar’s creation, I feel like I made it sufficiently disgusting for the purpose it needed to fulfill. These pieces were scattered around, formed into piles as well as left hanging as a focal point in the final environment.

I also used ZBrush for some elements of the landscape, namely ground clump assets to be blended into the ground alongside my photogrammetry assets. These were a fun organic sculpting challenge as they needed to somewhat match the quality of the scan blendables in order to look consistent.


In order to create a littered world, I figured the best method would be to have assets that can adhere to my trim sheet. Thankfully, this world consists of a strong mixture of metal and wood, so this was easily done. I referenced a lot of Hunt: Showdown screenshots in order to choose the assets, and created the props that could be re-used the most.

Due to the trim being fairly simplistic, I was able to really stretch the re-use factor. Working with a trim sheet to this extent is incredibly efficient and is often a standard workflow within the industry to save on texture memory. I further broke up the repetitive nature of this workflow by incorporating packed grunge mask overlays, lichen, rust and wetness to further ground the props in the environment.

With Marvelous Designer, I went through multiple iterations of where I wanted fabric placed within the scene. This was an interesting learning curve as I didn’t want to throw in fabric for the sake of it – I wanted it to feel somewhat believable, whilst also maintaining a good balance of visual noise in the composition.

Initial experimentation was conducted into photogrammetry, with my first attempt being an older tree bark scan that I hadn’t processed. I used Reality Capture to generate my dense point cloud, followed by my high poly mesh with embedded vertex colors, then followed a tutorial from Grzegorz Baran to produce a tileable material from my scan data.

This process involved using a PLY plugin to import the mesh into ZBrush with vertex color retained. From there, I used the Topology brush to create my low poly bake plane on the surface, then UV Master to unwrap and flatten it as close to a 1:1 ratio as possible. I then baked the exported high/low meshes within Substance Designer to get my Height, Bent Normals, Normals, Albedo, and Ambient Occlusion. The bent normals bake is important because the height map can be degraded by this pipeline, so a makeshift height map can be salvaged from the bent normals if required.

From here, I imported the baked maps into Substance Painter, adding a paint mask to my fill layer and setting all individual channels to Passthrough. This allows me to use the Clone Tool to transfer detail and paint out irregularities, ensuring it all stays tileable. This method of making a tiling material gives the user more control and makes it possible to affect all textures at the same time.

With a tiling material finished, I feed all the textures back through Substance Designer for final delighting and refinement.

For delighting, I use Color Equalization, AO Cancellation, Sharpening, Height Correction and Roughness creation using Curvature Sobel and noises. To finish, I use RGBA Merge to pack my grayscale textures into channels for optimization.
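The channel-packing step at the end is simple enough to sketch. This is not Designer’s actual node, just an illustrative Python function (the map names and 0-255 value range are my assumptions) showing how three or four grayscale maps collapse into one RGBA texture.

```python
def rgba_merge(roughness, ao, height, opacity=None):
    """Pack up to four grayscale maps (flat lists of 0-255 values)
    into one list of RGBA tuples, mirroring the idea behind
    Designer's RGBA Merge node."""
    if opacity is None:
        # No fourth map supplied: default the alpha channel to opaque.
        opacity = [255] * len(roughness)
    assert len(roughness) == len(ao) == len(height) == len(opacity)
    # One texture fetch in-engine now yields all four grayscale maps.
    return list(zip(roughness, ao, height, opacity))
```

In-engine, the packed texture is sampled once and each channel is routed to the matching material input, which is where the memory and bandwidth savings come from.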

This process proved to be perfect for learning this photogrammetry pipeline, as well as getting used to the general workflow of processing scan data. I learned a lot from creating this bark, despite it never being used within the scene. This is still a viable workflow for scan material creation, so I’ll definitely return to this for later materials.

For the creation of the cypress tree bark that was used, I followed a great tutorial from Peter Sekula that shows the full creation process within Substance Designer. I made use of the custom nodes he offers, which condense mask creation into smaller functions, namely ‘Get Slope’ and ‘Height Selector’. These nodes alone are incredibly useful for isolating sections of the heightmap to create masks for albedo and roughness work. If more time were available, I’d like to re-visit my cypress bark material, as I feel a lot of the major forms are missing, resulting in the light not interacting with the trees as nicely as I’d like. Regardless, I learned a lot from the tutorial.
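To show the kind of operation a slope-selection node performs – this is my own simplified reconstruction, not Peter Sekula’s node – here is a Python sketch that flags steep regions of a heightmap via finite differences, producing the sort of binary mask you would then feed into albedo or roughness blending.

```python
import math

def slope_mask(height, width, threshold):
    """Build a 0/1 mask flagging steep areas of a grayscale heightmap
    (a flat, row-major list of values), similar in spirit to a
    'Get Slope' node."""
    h = len(height) // width
    mask = []
    for y in range(h):
        for x in range(width):
            # Central differences, clamped at the map borders.
            x0, x1 = max(x - 1, 0), min(x + 1, width - 1)
            y0, y1 = max(y - 1, 0), min(y + 1, h - 1)
            dx = height[y * width + x1] - height[y * width + x0]
            dy = height[y1 * width + x] - height[y0 * width + x]
            # Gradient magnitude approximates the local slope.
            slope = math.hypot(dx, dy)
            mask.append(1.0 if slope > threshold else 0.0)
    return mask
```

A ‘Height Selector’ would be the complementary operation: thresholding the raw height values themselves rather than their gradient.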

Substance Designer was also used to automate the majority of my decals within the scene. I used a great tool from Bruno Afonseca called Atlaser. It takes multiple image inputs, allowing you to create a texture atlas for use as a flipbook in-engine. Using this, I created an atlas for my grime, blood spills, and maggots, saving a good amount of texture memory in the process.

I combined this in-engine with a toggleable randomized selection based on the actor’s world position. This let me scrub through the atlas by moving my deferred decal actor, saving me from having to randomize the value manually. I also built a switch into the shader to enable UV distortion, mainly for the moving maggots placed on corpses.
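The position-driven selection boils down to two small pieces of math, sketched below in Python. This is my own approximation of the idea (the hash constants and function names are invented, and UE4’s sub-UV node may index rows differently): derive a stable frame index from the actor’s world position, then remap the decal’s 0-1 UVs into that frame’s cell of the atlas.

```python
def atlas_frame_from_position(world_pos, rows, cols):
    """Pick a flipbook frame deterministically from an actor's world
    position, so nudging the decal actor 'scrubs' through the atlas."""
    x, y, z = world_pos
    # Cheap hash of the position; any stable scrambling works here.
    seed = int(x * 73856093) ^ int(y * 19349663) ^ int(z * 83492791)
    return abs(seed) % (rows * cols)

def sub_uv(frame, rows, cols, u, v):
    """Remap 0-1 UVs into the cell for the chosen atlas frame."""
    row, col = divmod(frame, cols)
    return ((col + u) / cols, (row + v) / rows)
```

Because the index is a pure function of position, every decal keeps its chosen frame between sessions – no per-actor parameter to randomize by hand.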

Real-Time Elements

As my deliverables required real-time renders of the scene, I wanted to push the dynamic elements within the environment to help it feel more alive. This, alongside my cinematic flythrough, should help sell the idea that this is a game level the player can explore.

I started initially with movement in the foliage, adding a subtle sway using panning, UV distorted noise and world position offset. Foliage is a large part of the scene, so making these elements move makes a large difference, especially with the trees in the background.
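The sway is driven by World Position Offset in the material; as a hedged illustration (names and constants are mine, and in the real material the phase comes from a panning, UV-distorted noise texture rather than a plain sine), the per-vertex math looks roughly like this:

```python
import math

def foliage_sway_offset(world_pos, time, amplitude=2.0, speed=1.5):
    """Per-vertex world position offset for a subtle wind sway.

    The vertex's world position acts as a phase, so neighbouring
    instances don't move in lockstep.
    """
    x, y, z = world_pos
    phase = (x + y) * 0.01            # vary the sway per location
    sway = math.sin(time * speed + phase) * amplitude
    # Offset only horizontally so the roots stay planted.
    return (sway, sway * 0.5, 0.0)
```

Masking the amplitude by vertex height (or painted vertex color) keeps trunks still while the canopy moves, which is what sells the effect on the background trees.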

The generator is a hero asset within the scene, powering the electrical light sources within the building. Thankfully, as the original generator concept comes from Hunt: Showdown, I was able to hop in-game and record footage of the machine functioning, so that I had perfect animation reference. Implementing animation was new territory for me, but after some struggles with bone importing and animation baking, I managed to get a satisfactory result. I’d like to look further into this process to get smoother results in the future, as there are some issues with the baked animation on the piston, which uses a LookAt constraint within 3ds Max.

Clouds passing over the moon were another effect I wanted, which was difficult given that I’m faking the moon with an image plane rather than a dedicated skybox. To get around this, I use a panning cloud noise that multiplies my emissive and albedo values, matched to the speed of the cloud movement.
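Sketched in Python (again an illustration of the material math, with invented names and a hypothetical pan speed), the trick is just a wrapping UV scroll feeding a noise sample that multiplicatively dims the card:

```python
def panning_uv(u, v, time, pan_speed=(0.02, 0.0)):
    """Scroll UVs over time, wrapping so the noise texture tiles."""
    return ((u + time * pan_speed[0]) % 1.0,
            (v + time * pan_speed[1]) % 1.0)

def moon_emissive(base_emissive, cloud_noise_sample):
    """Dim the moon card's emissive (and albedo, identically) by the
    panning cloud noise, so clouds appear to drift in front of it."""
    return base_emissive * cloud_noise_sample
```

Because the same noise pans at the same speed as the visible cloud layer, the dimming of the moon stays in sync with the clouds crossing it.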

I also utilized real-time physics simulation for all of my fabric and hanging chains/noise traps in the scene. These were tied to a directional wind actor, so when the project is simulated, everything reacts dynamically, adding more movement to the composition. Whilst this technique is quite expensive, I found it justifiable for a portfolio piece, and faster to implement than baked-down looping animations driven by bones.

I also played around with particle effects in the form of moths flickering around the lantern light. This was done through mesh-based particles with an animated shader driven through sine and vertex colors for the flickering wings. For the particle system itself, I used a combination of a sphere spawn, point attractor and orbit to create a randomized orbit path for each sprite instance.
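The wing flicker reduces to a small piece of vertex-shader math, sketched here in Python as a rough approximation (function name, default frequency, and the red-channel convention are my assumptions, not the exact shader):

```python
import math

def wing_flap_offset(vertex_color_r, time, frequency=40.0, amplitude=1.0):
    """Vertex offset for a flapping wing, driven by a sine wave.

    vertex_color_r masks the effect: wing-tip vertices are painted 1
    in the red channel, the body 0, so only the wings move. The sine's
    sign flips the wings up and down around the body each cycle.
    """
    return math.sin(time * frequency) * vertex_color_r * amplitude
```

The particle system then only has to move whole moths around; the sphere spawn, point attractor, and orbit modules handle each sprite’s randomized path while the shader handles the wings.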

Stretch goals for this included rattling sheet metal, the centerpiece boar swinging, doors swinging and even lantern lights moving through the back forest. These are areas of polish that I would’ve liked to implement in hindsight, however, were cut due to time restrictions.

VFX was one of the final additions to the scene. It is the one area where external assets were used, as it is the furthest from my specialism and unlikely to crop up as a requirement for an environment artist. I chose to use assets from Soul: Cave, a project available to download for free from the Unreal Marketplace. I primarily used the mist VFX to create a soft rolling fog emitting from the swamps, which was the final touch in this animation pass.

Jay Cummings, Environment Artist

Read about the pre-production phase, software used, blockout, asset modeling, materials and foliage in Part 1!
