
Creating Photorealistic Geralt of Rivia in Houdini & Substance 3D

Vishwesh Taskar has shared an extensive breakdown of his Geralt of Rivia project, explaining how the character's face and outfit were modeled and giving some tips on how to achieve realistic-looking skin.

The Geralt of Rivia Project

The Geralt of Rivia project started off as practice, like most of my others, but really took off as it progressed. My personal projects are mostly about experimenting with techniques outside the usual workflow I use daily at work, so as to add to my arsenal of approaches.

This breakdown was fun to make and I hope it helps you learn a few tricks and push your art to the next level. 

Let's start with the modeling.

Mr. Cavill

I did not have any photogrammetry data for the project, so I started off by watching The Witcher to mentally grasp the facial forms from different angles, gathering a ton of references along the way.

Here's a collection of the references I mostly relied on:

Once I felt I was prepped, I hopped into ZBrush, took a bust base mesh, and started sculpting the basic forms of his face, just eyeballing the references at first and then adding more details and matching their placement using a matched camera lineup created in Maya. This is pretty much the standard workflow that most VFX modeling artists use.

Here are some BPRs to give you an idea about how my sculpting progressed from start to finish. 

The Eyes

I used a basic eye setup with a sphere for the cornea and an iris geo inside which I borrowed from the free Louise scan. I did some texture-based displacement over the iris and then tested it out with a transparency map on the cornea. For this project, I did not dive too deep into the eye details so I just went with what displacement I could extract out of the color map and processed it a bit using Substance 3D Painter.

The Marvelous Outfit

I had wanted to give Marvelous Designer a try for a very long time, so this proved to be the perfect chance. The cloak and his shirt were fun to make. I didn't strictly need to build his entire shirt since most of it wouldn't be visible, but I decided to weave the whole thing anyway as I wasn't ready to let go just yet.

For the cloak, I was able to get a decent shape after a few tries using Marvelous Designer and Houdini – this helped me create a nonlinear workflow to polish it up.

I brought in the single-sided cloak geo and extruded it using Poly Extrude, isolating the side group (the thin strip created between the two sides of the thickened cloth) to use as a surface for scattering the open weave ends of the cloak. The scattering itself was a simple setup using the Scatter and Copy to Points nodes.
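Stripped of the node UI, the scatter-and-copy logic can be sketched in plain Python; the segment data, density value, and template geometry here are purely illustrative stand-ins for the actual Houdini geometry:

```python
import random

def scatter_on_strip(segments, density, seed=0):
    """Mimics Houdini's Scatter node: place points along the extruded
    side-group strip, with a count proportional to segment length.
    segments: list of ((x, y, z), (x, y, z)) pairs (illustrative input)."""
    rng = random.Random(seed)
    points = []
    for a, b in segments:
        length = sum((b[i] - a[i]) ** 2 for i in range(3)) ** 0.5
        count = max(1, round(length * density))
        for _ in range(count):
            t = rng.random()  # uniform parameter along the segment
            points.append(tuple(a[i] + t * (b[i] - a[i]) for i in range(3)))
    return points

def copy_to_points(template, points):
    """Mimics Copy to Points: instance the weave-end template at each point."""
    return [[tuple(v[i] + p[i] for i in range(3)) for v in template]
            for p in points]

# Illustrative strip along the cloak edge, plus a tiny two-point template
strip = [((0, 0, 0), (2, 0, 0)), ((2, 0, 0), (4, 0.2, 0))]
pts = scatter_on_strip(strip, density=5)
copies = copy_to_points([(0, 0, 0), (0, 0.1, 0)], pts)
```

In the real setup the Scatter node handles the area-weighted sampling and Copy to Points handles orientation attributes as well; this only shows the core placement idea.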

Let it Snow

For the snow geos, I created another scattering setup driven by a top-axis density mask generated using Mask by Feature. Four different snow geos were created from spheres layered with Mountain deformers and scattered across the cloak using a setup similar to the one for the open ends.
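The top-axis mask boils down to a dot product between the surface normal and the up axis, which is roughly what Mask by Feature's direction mode computes; a minimal sketch in Python (the falloff exponent is illustrative):

```python
def top_axis_density(normal, up=(0.0, 1.0, 0.0), falloff=2.0):
    """Density mask in [0, 1]: 1 on upward-facing surfaces, 0 underneath,
    so snow collects on the tops of the cloak folds and leaves the
    vertical sides mostly bare."""
    d = sum(n * u for n, u in zip(normal, up))  # cosine of angle to 'up'
    return max(0.0, d) ** falloff               # clamp, then tighten falloff
```

This per-point density is what the Scatter node consumes to place more snow on upward-facing areas.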

I placed Switch nodes to help me test-render multiple sets of values and see which ones worked best.

Ah! What would I do without nodes?

UV and Textures

Since I had different plans for Henry initially, I had gone with non-butterfly UVs and stuck with them. Generating the UVs for the outfit was a no-brainer thanks to Marvelous Designer.

For the face textures, I started off by using one of the Multi-Channel Faces from Texturing.xyz and sticking it on the face using R3DS Wrap and later refined it in Mari. There are tutorials on YouTube that can help you understand this method in detail.

The Mari-thon

Once I had the wrapped textures transferred onto the main head geo from Wrap, I started the texture cleanup: filling in the problematic projection areas and replicating the texture details of the actor's face as closely as possible in the color map.

Later on, for the displacement, I created a few merge nodes in Mari, shuffling the different displacement channels packed into the RGB channels of the XYZ map onto separate layers for better control over the displacement levels, and mixed the result with some other skin pore maps. Tweaking and mixing the intensity levels of the multiple maps was made easy by the real-time shaders available in Mari.

I used Mari to work on the color and base skin displacement and here are some snaps to help you understand how I went about it.

Substance-al Updates

After I had a decent color and displacement map extracted from Mari, I took them both into Substance 3D Painter to create more secondary maps like roughness and region isolation masks and also add more fine detail to better match the micro details on the actor’s face. The powerful viewport helped me minimize the amount of iteration renders before arriving at a look that worked for me. Here is a GIF to show the layers of information added to the existing map.

For the cloak, I created seamless texture maps and lint masks using Substance 3D Designer so no UDIMs for this one. I did use a few supporting textures to minimize the work there so it's not 100% procedural. The Tile Sampler node helped me quickly replicate the bubbly weaving pattern of Geralt's cloak.

For his shirt, I did not dive too much into the details since it was mostly gonna be covered.

Here are the texture AOVs so you can have a look at what the textures look like along with some custom object ids for compositing.

Houdini x RenderMan

For the project, I used ACES (Rec. 2020). I decided to stick with the pxrSurface shader for the most part, as I also wanted to test out XPU with this project, although I did not use it for my final renders.

The Skin Shader


I used the non-exponential path-traced mode to get the best possible scattering for the skin.

One of the most crucial isolation masks I create for skin lookdev is an RGB separation/isolation matte covering all the areas that are not heavily blocked by the skull, such as the nose tip, ears, eye pockets, and lips. Usually, these areas need distinctly different SSS values compared to the rest of the face. This approach lets me gain individual control over each section and freely tweak the SSS.

I use a pxrLayerBlend to create a layer stack with all the masks connected so that I can assign different DMFP values to each isolated section. This approach can be used to gain more control over the other properties of the shader as well. I spent most of my time during lookdev playing around with these values, trying to get the skin to look as realistic as possible.
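Conceptually, the layer stack resolves to a weighted mix of DMFP values driven by the RGB matte. Here is a plain-Python sketch of that mixing, not the shader's actual API; all DMFP numbers are illustrative:

```python
def blend_dmfp(rgb_matte, base, per_channel):
    """Blend per-region DMFP (diffuse mean free path) values using an RGB
    isolation matte. rgb_matte: (r, g, b) weights for the three isolated
    regions; base: DMFP for the rest of the face."""
    r, g, b = rgb_matte
    w0 = max(0.0, 1.0 - (r + g + b))  # weight left for the base skin
    return tuple(w0 * base[i]
                 + r * per_channel[0][i]
                 + g * per_channel[1][i]
                 + b * per_channel[2][i]
                 for i in range(3))

base_dmfp = (0.8, 0.4, 0.3)   # illustrative DMFP for the bulk of the face
regions = [(1.5, 0.6, 0.4),   # R matte: nose tip / ears, deeper scattering
           (1.0, 0.5, 0.4),   # G matte: eye pockets
           (1.2, 0.5, 0.5)]   # B matte: lips
```

Each matte channel simply pushes its region toward its own scattering depth, which is exactly the per-section control the layer stack provides.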


I used the primary and clear coat layers to have separate control over the underlying specularity of the skin and the oil/sweat layer on top. Again, I'm using the isolation masks here too to gain some more control over the spec levels and roughness in different sections of the face. 


I used a layer setup to control and blend the two levels of displacements that I have. One is a sculpted Displacement from ZBrush and the other is the skin pore displacement made using Mari and Substance.
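The two displacement levels can be blended as a simple weighted sum once both maps are remapped around their neutral value; a sketch in Python, assuming 0.5-neutral maps (an assumption, since the actual range depends on export settings):

```python
def combine_displacement(sculpt, pore, pore_gain=0.3, midpoint=0.5):
    """Blend the ZBrush sculpt displacement with the Mari/Substance pore
    detail for one texel. Both inputs are assumed to be in 0-1 with 0.5
    as the neutral (zero-displacement) value; pore_gain is illustrative."""
    signed_sculpt = sculpt - midpoint            # broad sculpted forms
    signed_pore = (pore - midpoint) * pore_gain  # fine pores, dialed down
    return signed_sculpt + signed_pore           # signed result for displace
```

The layer setup in the shader does the same thing per pixel, with the gain sliders exposed for lookdev.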

A few tips to get realistic-looking skin:

  • Getting the SSS right is crucial, so make sure you have photo references that show how light penetrates the skin in different areas. They don't necessarily have to be references of the actor if it's a likeness project. The Portrait Lighting Compendium by Rachel Bradley is a great reference pack for studying skin.
  • Try to get a light penetration level that is just right: push it too far and you end up with something that looks more like wax, while too low a value can make the skin look hard.
  • Watch the penetration color (DMFP color): if you make it too red, it's just going to look gummy.

The Yellow Eyes

As I mentioned before, I have a cornea/sclera geo with a transparency map and an iris geo. The important thing to look out for is that the SSS should be active only in the non-refractive areas of the cornea. I had a pretty basic displacement and bump on the iris, extracted from the color map, to give it some depth.

One important thing to note while working with eyes is making sure you have the correct settings for the caustics. If you are using the pxrPathTracer, make sure the Allow Caustics option is checked on, and if you are using pxrUnified, be sure to use the manifold walk to get good caustics on the iris.

I was able to use thin shadows together with caustics in the pxrPathTracer to achieve an iris that behaves a bit like a cat's iris when light falls directly on it. For this, just keep the Allow Caustics option in the path tracer on and use the Thin Shadows option on all the lights.

Normally, for human eyes, I would recommend using the thin shadows option only if you are not using caustics and if the caustics are on, then keep the thin shadows turned off.

The Fuzzy Cloak

This was indeed a blast. The texture values were pretty much accurate directly coming in from Substance 3D Designer but the true challenge was getting it to look thick and fuzzy. Although most of the fuzzy look came from the groom, I added an extra fuzz layer using the generated lint masks in the fabric shader itself which really helped in selling the look of the thick fabric when combined with the groom. I also used the pxrFacingRatio to alter the albedo around the silhouettes to add to the fuzzy edges. 
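The pxrFacingRatio trick amounts to mixing a fuzz color over the base albedo based on how directly the surface faces the camera; a sketch of that mix in Python (the exponent and colors are illustrative, and vectors are assumed normalized):

```python
def fuzzy_albedo(base, fuzz, normal, view, exponent=3.0):
    """Brighten the albedo toward the silhouette, the way a facing-ratio-
    driven mix does: 0 effect head-on, full effect at grazing angles."""
    facing = abs(sum(n * v for n, v in zip(normal, view)))  # |N . V|
    edge = (1.0 - facing) ** exponent  # ramps up near the silhouette
    return tuple(b + (f - b) * edge for b, f in zip(base, fuzz))
```

Combined with the groom, this edge brightening is what sells the halo of loose fibers around the cloak's silhouette.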

Fuzz Is Not Enough

I used a standard guide and hair-gen setup with some bend and frizz modifiers to cover the fabric with fuzz.

Upon closely inspecting some references, I saw that I was missing the thread blobs present all over the fabric surface. Instead of using a scatter setup, I reused the same guides from the fuzz and created a hair-gen with a lower density to get the placement of the blobs. For the roundish shapes, I applied a thickness ramp to make the hair geo look blobby.
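The thickness ramp can be pictured as a width profile over the strand parameter: zero at the root and tip, widest in the middle, so each sparse strand reads as a round blob. A minimal sketch (max_width is illustrative):

```python
import math

def blob_width(u, max_width=0.02):
    """Strand width as a function of the curve parameter u in [0, 1].
    A half-sine profile pinches both ends and bulges the middle,
    turning a short strand into a roundish thread blob."""
    u = max(0.0, min(1.0, u))          # clamp outside the strand
    return max_width * math.sin(math.pi * u)
```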

So in all, I had three layers contributing to the fuzz, the shader fuzz, the frizzy groom, and the thread blob groom. 

I was incredibly happy to have a back-and-forth workflow for the entire setup, all the way from the single-sided geometry in Marvelous Designer to the groom, so I went back and tweaked the forms multiple times before arriving at something that worked for me.

The Snow

For the snow scattered on the cloak and hair, I put a pxrVolume shader with a higher density value to get very soft-looking snow chunks. 

Wherever I felt the need for more control either in lookdev or comp, I made sure to create more isolation masks and matteIDs. I can never get enough! 

The Medallion

If it wasn't for Lama, this might not have looked as good. The Tail attribute on the specular lobe pushed the level of realism of the micro-scratched surface of the medallion.

A few things that helped me with lookdev in Houdini:

  • Creating multiple ROPs with different sample settings and isolated objects helped me focus and debug each aspect of the character with ease. I had set up separate ROPs for the face, cloak, and medallion each with different sample settings to help me speed up my lookdev process.
  • After spending some time setting up the cameras and ROPs I did not lose time trying to hide or isolate specific elements which made the process hassle-free.
  • Forcing object mattes also helped me render out specific objects separately if needed for comp or if I only wanted to re-render a specific element rather than having to render the entire frame again.
  • Taking time to set up basic switches to quickly change the shaders, geometries, etc. to speed up look development of multiple aspects of the scene.
  • Splitting up a complex scene is incredibly helpful for artists with lower-spec workstations, so try creating different render ROPs for parts of your scene with relative reference links so that parameters like samples stay connected.


Let's start off with the easy bits.

The eyebrow and eyelash curves were generated manually in Maya using the matched cameras and then put into a standard guide and hair-gen setup in Houdini. 

Fuzz Right-Here

Never have peach fuzz and stubble been more fun to work with. The guide advect method mentioned in the "Hick to Hipster" tutorial is perfect for creating short hair/fur with a smooth flow and getting a good result quickly.

Hair Groom

I used the same advection technique to generate the hair on his head which is pulled back.

For the curly dangling bits in the front, I created straight guides with a uniform length using a mask on the head geo. I then simulated them using a guide simulate node to get a quick basic form of how the hair should fall. Then I dived into the guide groom and started adding guide modifiers like clumping along with curls to get the wavy forms and that was it. Later, I manually adjusted the guides to get the form that I wanted. If the network gets too complicated, I just cache the guides out in alembic for optimization.

Once the guides were ready I had to work on the hair-gen quite a bit to achieve the overall look and feel of the wavy clumped hair.

Adding a Bit of Motion with a Twist (Bend)

In many of my renders, I wanted to create a sense of wind in Geralt's hair to add more atmosphere. I did not go all the way and simulate the hair to achieve this. Instead, I went with a simpler approach: I just needed a bit of motion around the ends of the hanging hair clumps, so I added a bend hair modifier in the hair-gen and bent the hair in a constant direction. Then I keyed the blend attribute on the modifier to get some motion in the hair strands. Admittedly a lazy solution, but a smart one that saves time and computation power.
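The keyed blend attribute can be pictured as a smooth 0-to-1 oscillation over the frame range; a hypothetical sketch of such a curve (the period is illustrative, and in practice the keys were set by hand):

```python
import math

def bend_blend(frame, period=48.0):
    """Blend value for the bend modifier at a given frame: eases between
    0 (unbent) and 1 (fully bent) over the period, so the hanging
    strands sway gently without a simulation."""
    return 0.5 * (1.0 - math.cos(2.0 * math.pi * frame / period))
```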

Thanks to the node-based workflow, I was able to keep sculpting my model and checking it against the groom by transferring the guides onto the new mesh using the guide transfer method. You could also use the Guide Deform node to do something similar.

For the hair shader, I used pxrHairColor with pxrMarschnerHair to get some quick and easy variations working across the hair strands. It's a pretty straightforward setup without any maps. I used a color-correct node to gain more artistic control over the hair color.

Snow Is in the Air

My first thought was to use a particle simulation, but I came across a lighter method of creating falling snow using an Attribute VOP while watching "Realistic CG Dust – Free Houdini Tutorial" on YouTube. The basic idea is to alter the position and orientation of snowflakes scattered in the air to create falling snow.

The setup involves creating a bounding geometry and generating points inside it. Once we have random points in space, we randomize their scale and rotation to create variation. The points are then fed into an Attribute VOP that animates their position over time.

Attribute VOP for snow particle animation:
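The core of that VOP boils down to: subtract a fall distance from Y, wrap it inside the bounding box so the fall loops forever, and add a little sine/cosine drift. A plain-Python sketch per point (all speeds and sizes are illustrative):

```python
import math

def snow_position(p0, time, fall_speed=0.5, drift=0.1,
                  y_min=0.0, y_max=4.0, seed=0.0):
    """Animated position of one scattered snowflake. The flake falls at a
    constant speed, wraps back to the top of the bounding box for an
    endless loop, and drifts sideways on offset sine/cosine waves."""
    height = y_max - y_min
    y = y_min + ((p0[1] - fall_speed * time - y_min) % height)  # wrap fall
    x = p0[0] + drift * math.sin(time + seed)        # gentle sideways drift
    z = p0[2] + drift * math.cos(0.7 * time + seed)  # offset frequency
    return (x, y, z)

# Animate one point across a few frames of "time"
positions = [snow_position((1.0, 3.0, 1.0), t * 0.25) for t in range(10)]
```

In Houdini, `time` would come from the Time input of the VOP and `seed` from a per-point random attribute, so every flake falls on its own phase.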

Light It Up

Once the asset lookdev was tested with a bunch of HDRIs and test lights, it was time to have some fun with lights and create different moods.

By this time, I had a ton of awesome lighting references from the series. So I narrowed it down to a few lighting scenarios that looked the best and tried to replicate them. 

I would study the lighting in the reference image and try to break it down in my mind. 

My process would start with finding an ambient HDRI that resembles the background in the reference. Poly Haven is an awesome website for getting a variety of HDRIs. I then gradually added more lights to try and capture the overall look and feel of the reference.

I did not have to do anything too fancy with the lights to achieve the looks I needed.

For the example below, adding a dome light for the environment and a rect light for the fire could have been sufficient, but to get a bit more character separation and show off the fuzz on the cloak, I added two rim lights. Adding lights to a scene needs to be handled carefully; here, the additional rim lights needed to feel like part of the scene as well, so I went with a setup that reads as mildly diffused moonlight.

While tweaking the lights, be sure to use a lower render resolution combined with RenderMan's denoiser to speed up iterations and save time. I created a separate lighting ROP for this purpose which used 1/5th of the camera resolution and only the most necessary objects.

Even for the final renders, optimization was necessary. I rendered the objects in the scene in parts, mainly to have separate control over the character, hair, and eyes. Having multiple integrators and ROP sets helped me render different objects with sample settings matched to the complexity of their shaders. I only had to push the samples higher on the objects that needed them, the best example being the white hair.

Here are some snaps of the different sample settings I was using in various ROPs.

Without motion blur:

With motion blur:

For eyes:

For hair:

It is advisable to use a low render resolution like 540p and RenderMan’s denoiser while blocking the lighting to save time.

Depth of field was added later in Nuke using a depth pass. I did render with motion blur to create a sense of subtle wind movement in the hair.

Compositing to the Finish

This indeed is one of the most crucial stages to making the frame look cinematic and polished.

Here is where I fine-tuned multiple elements in Nuke as needed for each frame, such as the specularity of the skin, the brightness of the eyes, etc. I added depth of field with the ZDefocus node using the depth pass. For most of the frames, I used the lighting HDRIs as backgrounds and just softened them with a Defocus node. The final touches were post-processing: cinematic color correction based on the lighting references, plus film grain and chromatic aberration to make the frame feel like a snapshot of a movie.


For the people wondering how I worked with ACES for this project, here are some snaps of the settings used in multiple software.


Color Channels (Albedo):

Scalar Channels (Roughness, Displacement, etc.):

RenderMan "IT" viewer:


That’s all folks! I hope this breakdown gave you an insight into my thought process and workflow. I thank you all for taking the time out to read this elaborate article. Please feel free to drop in a message on my ArtStation page or Instagram if you have more questions.


Vishwesh Taskar, Lookdev Artist
