Studying Set Dressing of Bloodborne with UE4

Gnomon graduate Thom May talked about the production of his amazing Bloodborne-inspired environment.

Thom May from the Gnomon School of Visual Effects discussed his incredible scene, which was created as a term project. He was kind enough to break down the production of the environment, the choice of lighting, and the creation of materials and assets. It’s a super detailed breakdown that can definitely help you learn a few new tricks.

Introduction

My name’s Thom May, and I’m in my last term at the Gnomon School of Visual Effects, here in Los Angeles. I got my bachelor’s at UMass Amherst and a certificate in Multimedia from PCC (Portland). In the past, I’ve worked as an animation intern and as a 2D game artist.

Scene production

Before starting this project, I was playing a lot of Bloodborne, and I thought the way they approached set dressing was really smart – I wanted to play around with some of the same ideas, and the project mostly just spun out of that. For instance, they do a lot of mixing tileables with simple inserts (floating meshes made to match the underlying tiling textures), which is a central component of the cobblestone look here.

Illusion of a Bigger Environment

It’s funny: there’s actually a huge landscape in this scene that the camera never sees. When I started the project, I was learning a lot about Unreal Engine’s landscape tools, World Machine, and SpeedTree. But the focus was always meant to be on the bridge, and after a while I just ran out of time to give the landscape the treatment it would’ve needed to be at the same level. I went through a handful of pretty awful lighting iterations and eventually ended up on this more desaturated, harsh lighting, and a lot of the background details just get obscured in shadow and super thick N64 fog.

Production of Meshes

My typical asset workflow would be something like this: first, block things out in Maya and get them into the engine right away. Check the general proportions and how it actually looks in Unreal’s camera against the other elements – there’s no point in spending time on a fancy sculpt if the asset doesn’t fit. I would say that at this stage, I’m also trying to think (very broadly) about texturing: roughly what size the texture is probably going to be, how I’m going to break it up if it’s an especially huge asset, which parts might have repeated textures. For this project, it’s a big area and the camera was way out, so I was actually working at a texel density of only 256 pixels/meter, which meant I could get away with a lot.
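
To make the texel-density point concrete, here’s a tiny back-of-the-envelope sketch (not anything from the actual project files) of the arithmetic behind matching a texture size to a target density; the function name and asset sizes are just made up for illustration.

```python
# Quick sketch: picking a texture size from a target texel density.
# The 256 px/m figure matches the density mentioned above; the asset
# dimensions are made-up examples.

def texture_size_for(target_density_px_per_m, asset_size_m):
    """Smallest power-of-two texture edge that meets the target texel density."""
    needed_px = target_density_px_per_m * asset_size_m
    size = 1
    while size < needed_px:
        size *= 2
    return size

if __name__ == "__main__":
    # A 3 m wide prop at 256 px/m only needs a 1K map...
    print(texture_size_for(256, 3.0))    # -> 1024
    # ...while the same prop at 1024 px/m would want a 4K map.
    print(texture_size_for(1024, 3.0))   # -> 4096
```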

Next, I’d bring the blockout into ZBrush. With my ID map in mind, I keep the high-res sculpt broken up into basically as many pieces as possible. I don’t get too granular in the sculpt, because most of the fine detail is better handled during the material application.

Once the high-res is done, the low-res can either be derived from the blockout (if you’re lucky/smart), retopologized from scratch, or even derived from a cleaned-up Decimation Master mesh. Actually, the latter is probably what I do most, lately. So, in general, I’ll make a version of my tool where a lot of the subtools get collapsed together (any pieces with a lot of contiguous geometry), DynaMesh those pieces together, decimate that down into the thousands of polygons, then spend a while in Maya cleaning it up.

Cleaning up decimations is easily the biggest pain point in my workflow right now: Decimation Master is great at efficiently optimizing around the peaks and valleys of your sculpt, but the price you pay is all of these crazy-thin triangles and weirdly folding geometry. So I usually just take a while and hunt around the mesh, flipping edges or manually redrawing topology where I need to. In general, I find that decimation works better for organic assets and less so for geometric assets, or anything with super directional surface features.

Building the Materials

I’m a huge fan of the Allegorithmic toolset. I used Substance Designer for most of the larger tileables and basic materials in the scene, and Substance Painter for the unique assets. Once you establish a good workflow for breaking down the components of your pieces as you go along, you’d be amazed how quickly the process can go: you just have to be smart about how you set things up ahead of time.

When I’m building an asset, I think a lot about how I’m going to isolate pieces during texturing. It’s kind of like a set of organizational tiers that let you quickly mask things at different levels: at the top, you’ve got the mesh; then its texture sets, if there’s more than one; then the different shells that make up the low-res; and finally the ID map baked from the high-res, which isolates areas at the finest level of detail.

So the workflow is typically something like: block out, sculpt, retopo, then jump back to the sculpt and polypaint-fill the subtools (for the ID map), then finally collapse the high-res sculpt into pieces that correspond with the low-res shells you created during the retopo. Substance’s baker has a great feature that lets you name-match the low and high meshes, so you don’t have to explode anything for the bake: this is my favorite thing. It’s a huge help when iterating on an asset, because it makes it 100 times faster to make changes to the low or the high and then just hit “bake” again, without any special setup first.
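
As a small aside, a purely hypothetical helper like the one below can sanity-check that pairing before a bake; it assumes the common “_low” / “_high” suffix convention on mesh names and isn’t part of Substance itself.

```python
# Hypothetical helper (not part of Substance): before a name-matched bake,
# check that every low-poly piece has a corresponding high-poly piece.
# Assumes the common "_low" / "_high" suffix convention on mesh names.

def unmatched_pieces(low_names, high_names):
    """Return low-poly names that have no matching high-poly counterpart."""
    high_bases = {n[:-len("_high")] for n in high_names if n.endswith("_high")}
    return [n for n in low_names
            if n.endswith("_low") and n[:-len("_low")] not in high_bases]

if __name__ == "__main__":
    low = ["door_frame_low", "door_slats_low", "hinge_low"]
    high = ["door_frame_high", "door_slats_high"]
    print(unmatched_pieces(low, high))  # -> ['hinge_low'] would bake with no match
```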

Now, in Painter, it’s usually a matter of setting up a folder structure for the basic material blockout, masking out the folder groups appropriately, and building up gradually from there. I do a lot of little things, but as a for-instance, one trick I’m fond of lately is using the position map to drive subtle gradients over the entire mesh. You can bake out a Y-axis position map, then use it to drive a veeeery subtle overlay or desaturation grading up from the base – perhaps combined with a small contribution from the world-space normal map for an interesting effect. Adding up a lot of little touches like this helps break up a paint job and lead the viewer’s eye where you want it to go.
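
The actual setup lives in Painter’s layer stack rather than in code, but the underlying math is simple. Below is a rough, hypothetical Python sketch of the idea: a baked Y-axis position value, plus a small world-space-normal contribution, drives a subtle desaturation gradient up the mesh. The function names and weights are invented examples.

```python
# Rough, hypothetical sketch (the real setup is a Painter layer stack, not
# code): a baked Y-axis position value (0 at the base, 1 at the top) plus a
# small world-space-normal contribution drives a subtle desaturation that
# grades up the mesh. All names and weights here are invented examples.

def lerp(a, b, t):
    return a + (b - a) * t

def desaturate(rgb, amount):
    # Simple luminance-based desaturation.
    luma = 0.299 * rgb[0] + 0.587 * rgb[1] + 0.114 * rgb[2]
    return tuple(lerp(c, luma, amount) for c in rgb)

def graded_color(base_rgb, y_position, world_normal_y,
                 max_desaturation=0.15, normal_weight=0.25):
    # Mostly the height gradient, with a small push from upward-facing surfaces.
    mask = lerp(y_position, max(world_normal_y, 0.0), normal_weight)
    return desaturate(base_rgb, mask * max_desaturation)

if __name__ == "__main__":
    brick = (0.45, 0.30, 0.25)
    print(graded_color(brick, y_position=0.0, world_normal_y=0.0))  # untouched base
    print(graded_color(brick, y_position=1.0, world_normal_y=1.0))  # subtly faded top
```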

Combining Tiling Textures and Vert-painting with Mesh Inserts

So, when it comes to ground dressing, it’s fun to play around with exotic ways to drive parallax – tessellation, or some sort of parallax occlusion mapping – but we’re always told that these methods can be pretty expensive, and that polygons, conversely, are relatively cheap these days. Given that, using vert painting in conjunction with inserts makes a lot of sense to me. It has the added benefit of being a universal solution that will work across all engines – plus it gives you a lot of control, too, which is nice. Here you can see some examples of how vert painting is used on the main path.
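
In UE4 this kind of blend is built in the material graph (vertex color feeding a HeightLerp-style mask), but the core operation is only a few instructions per pixel. Here’s a toy Python version of the idea; the parameter names and contrast value are illustrative assumptions, not the actual material setup.

```python
# Toy per-pixel version of a vert-painted height blend (illustrative only).
# The painted vertex-color value decides how much of the second layer shows,
# and the base layer's height biases the transition so dirt settles into the
# grout before it climbs over the stones.

def saturate(x):
    return max(0.0, min(1.0, x))

def height_lerp_mask(paint, height, contrast=4.0):
    """paint: vertex-color channel (0-1); height: base layer height (0-1)."""
    return saturate((paint - height) * contrast + paint)

def blend(base_color, layer_color, mask):
    return tuple(b + (l - b) * mask for b, l in zip(base_color, layer_color))

if __name__ == "__main__":
    cobble = (0.40, 0.38, 0.36)
    dirt = (0.25, 0.20, 0.15)
    # The same light paint barely shows on a tall stone...
    print(blend(cobble, dirt, height_lerp_mask(paint=0.3, height=0.9)))
    # ...but flips almost fully to dirt down in the low grout.
    print(blend(cobble, dirt, height_lerp_mask(paint=0.3, height=0.1)))
```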

Then, inserts are fitted in around the tiling cobblestones so they match.

One tricky thing that came up with this particular piece was that materials like the grass and puddle layers are set up to work on a height-lerp bias. This uses the bricks’ height to make it look like puddles are filling in the cracks first – which works great, except where there’s only grout and no bricks: there, we don’t want it to use the brick bias; we just want it to spill on a random noise bias (otherwise, it would look like invisible bricks are occluding the puddle). The solution is to set up a lerp that switches between the two biases for that layer. This lerp “switch” is driven by the cobblestone/grout layer beneath it. Essentially, this tells the puddle or grass layers whether or not there are cobblestones where you’re painting, and if there aren’t, they use the alternate bias instead.
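
As a rough illustration of that switch (again, invented names rather than the real material graph), the only change from the basic height-lerp sketch above is that the bias itself is lerped between the brick height and a noise value, driven by the cobblestone layer’s paint.

```python
# Hypothetical sketch of the bias "switch" described above: the
# cobblestone/grout layer's paint value decides whether the puddle layer
# spreads on the brick height (filling cracks first) or on a plain noise
# bias (so invisible bricks never hold the water back).

def lerp(a, b, t):
    return a + (b - a) * t

def saturate(x):
    return max(0.0, min(1.0, x))

def puddle_bias(brick_height, noise, cobble_paint):
    # cobble_paint: 1.0 where cobblestones are painted in, 0.0 on bare grout.
    return lerp(noise, brick_height, cobble_paint)

def puddle_mask(puddle_paint, bias, contrast=4.0):
    # Same height-lerp idea as the cobblestone blend above.
    return saturate((puddle_paint - bias) * contrast + puddle_paint)

if __name__ == "__main__":
    # Over painted cobbles, the water obeys the brick height map...
    print(puddle_mask(0.4, puddle_bias(brick_height=0.8, noise=0.3, cobble_paint=1.0)))  # -> 0.0
    # ...over bare grout, the same amount of paint spreads on the noise instead.
    print(puddle_mask(0.4, puddle_bias(brick_height=0.8, noise=0.3, cobble_paint=0.0)))  # -> 0.8
```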

When it comes to tileables, one of the great things about vert painting is that I can re-use the same material functions all over the place without having to redo them. For instance, there are grass and dirt material layers that show up on pretty much everything. I also like to throw a quick layer of grime on things: you don’t need to make a whole new material function for this, just multiply down your underlying base color a little bit (and maybe tweak the roughness as well), and lerp it on a noise bias with low contrast. I use this all the time: it takes 5 seconds to set up, and you’d be amazed how much mileage you can get by just tossing a few grimy splotches here and there.
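
Expressed as a quick, hypothetical per-pixel sketch (the real version is just a few material nodes), the grime layer boils down to something like this; the darkening amount, roughness boost, and contrast values are arbitrary examples.

```python
# Hypothetical per-pixel sketch of the quick grime layer: darken the
# underlying base color a bit, nudge the roughness, and lerp it on with a
# low-contrast noise mask. Values are arbitrary examples.

def lerp(a, b, t):
    return a + (b - a) * t

def saturate(x):
    return max(0.0, min(1.0, x))

def grime(base_color, base_roughness, noise,
          darken=0.75, roughness_boost=0.15, mask_contrast=0.5):
    # Low-contrast noise mask: soft splotches rather than hard-edged blobs.
    mask = saturate((noise - 0.5) * mask_contrast + 0.5)
    color = tuple(lerp(c, c * darken, mask) for c in base_color)
    roughness = saturate(lerp(base_roughness, base_roughness + roughness_boost, mask))
    return color, roughness

if __name__ == "__main__":
    stone = (0.5, 0.45, 0.4)
    print(grime(stone, base_roughness=0.6, noise=0.9))  # inside a grimy splotch
    print(grime(stone, base_roughness=0.6, noise=0.1))  # much lighter grime
```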

Lighting

Nothing too crazy in the lighting: there’s a bit of post-process going on, and the fog also helps a lot. Originally, as I said, it was a much bigger open landscape scene, so the lighting was all dynamic. But as I started adding more and more crap to the bridge, I began to get a little bit of shadow popping, so I ended up baking it all down in the end. We have a lot of brilliant instructors at Gnomon, and Kyle Mulqueen helped me a ton with the lighting – he also gave me a great tip on how to handle the lamp bloom. Emissive bloom in Unreal Engine is super cool, but it can be a little unwieldy to dial in just right: most of the bloom effect you see on each of the lamps here is actually just a single static particle holding an additive card. Being a particle, it’s always camera-facing, and it’s set up with a depth fade bias so you don’t see it clip into the lamp mesh. Simple, but effective – and pretty clever!
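
For reference, the depth-fade trick is conceptually just a comparison between the scene depth behind the card and the card’s own pixel depth, scaled by a fade distance. Here’s a toy sketch of that idea; the names and numbers are examples, not the scene’s actual values.

```python
# Toy version of the depth-fade idea that keeps the additive bloom card from
# clipping into the lamp mesh: the card's opacity fades toward zero as the
# opaque geometry behind a pixel gets close to the card itself. This mirrors
# what UE4's DepthFade node does conceptually; names and distances are examples.

def saturate(x):
    return max(0.0, min(1.0, x))

def depth_fade(scene_depth, pixel_depth, fade_distance=25.0):
    """Opacity multiplier: ~0 where the card meets geometry, 1 when it sits well in front."""
    return saturate((scene_depth - pixel_depth) / fade_distance)

if __name__ == "__main__":
    # Card nearly touching the lamp mesh: fades out, so no hard intersection line.
    print(depth_fade(scene_depth=502.0, pixel_depth=500.0))  # -> 0.08
    # Nothing close behind the card: full contribution.
    print(depth_fade(scene_depth=900.0, pixel_depth=500.0))  # -> 1.0
```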

Challenges

There was a ton of weirdness in this scene. The door, for instance, was a little tricky, just because it’s massive and it pushed my texel resolution a bit. I ended up making the metal portion a unique texture set, and the wood slats in the middle are a vert-painted tileable. The lamppost was also an interesting exercise: sculpting swoopy filigree is really difficult. As with clay, when you start pushing and pulling these pieces around, it can be really hard to maintain a clean arc. A great trick for this is to just go find some filigree patterns online, then clean them up and make alphas out of them. Bring the alphas into ZBrush and use the Make 3D functionality in the Alpha menu. Boom. You’ll usually want to DynaMesh these pieces into each other and clean them up, or sculpt dings into them and whatever – but it’s a great (and fast!) starting point.

With all of these things, I think it’s about balancing what works, what’s going to look good, and what’s efficient (in the sense of both the scene’s frame rate and the workday hours). One approach might make sense for a certain asset and not for another. Anyway, it was a fun project! Thanks for taking the time to check it out.
