Şefki Ibrahim shared his workflow for producing digi-doubles of real people using ZBrush, Maya, Photoshop, Mari, Arnold, and Nuke.
Hi there. My name is Şefki Ibrahim. I’m a freelance Character Artist from London and I’ve been working professionally since January of this year. Before that, I was studying for a Master’s in Computer Animation in London. I stumbled upon the world of 3D by chance: a family friend gave me a bunch of software to try out at the end of 2016 simply because I liked to draw, and I haven’t looked back since!
I’ve drawn portraits for as long as I can remember, and I’ve always been obsessed with how realistic I could make things look. So, when I started learning 3D, my only focus was working on photorealistic 3D characters.
It’s only when you begin to work on digital humans that you realize just how complex the face is: you have to understand everything from facial anatomy through to the science of how light interacts with human skin, and how to emulate that in a 3D renderer. The hardest thing with digital humans is getting them to feel relatable. There’s a fine line: if something feels a little off, you dip into the uncanny valley, which can distract from your hard work. Usually, that ‘thing’ is the eyes.
Start of Work
I always start with a base mesh, usually one that I’ve modeled myself with animation-ready topology. The most important thing when starting any sculpt is to collect references. I use PureRef to collect a bunch of reference images, normally sourced online (these images also accumulate as I work). I gather 100+ reference images comprising close-ups of the eyes, skin, hair, different lighting scenarios, portrait compositions, etc.
In ZBrush, I always append various scanned models in my scene as I work, allowing me to discover nuances in the face that are not usually present in 2D images and photographs.
The way I work with XGen is by using the ‘placing and shaping guides’ option. I usually create a few descriptions that define different areas of the hair: the main body of the hair (which acts as a base); extra clumps and strands, which break up the uniformity of the hair; and fine hair strands, such as strays and flyaway hairs.
I then use XGen’s modifiers – clump, noise, and cut – to further break up the uniformity. The clump modifier is particularly powerful since real hair does present itself in clumps, so it’s important to recognize this when grooming realistic hair.
Furthermore, I always paint masks for various attributes; masks allow you to control exactly how much hair you want and where you want it to grow from. I’ll link a great resource on how to do this, as well as on applying expressions in XGen for more control over how the hair looks.
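For a sense of what those expressions look like: when a map is painted on an XGen attribute such as Density, XGen wires it up through a short SeExpr expression. The snippet below follows that general pattern as a sketch – the map path and multiplier here are illustrative placeholders, not taken from the article:

```
# SeExpr expression on an XGen description's Density attribute.
# Reads the painted mask from the description's paintmaps folder;
# the trailing comment is XGen's hint for the paint tool.
$density = map('${DESC}/paintmaps/density'); #3dpaint,5.0

# Scale the painted value to raise or lower overall hair density.
$density * 1.0
```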
Working on the Eyes
My method for building eyes uses separate geometries for the sclera and iris. I usually use the geometries and textures provided by Wikihuman’s Emily Project for my personal work. I learned how to shade eyes using Arvid Schneider’s tutorial on YouTube, which I’ll link below.
The more I’ve studied eyes, the more complex they seem to me. One thing I’ve noticed that people get wrong with eyes (as I also did) is the blend between the iris and the sclera: it’s either too blurred or too sharp, usually the latter. Studying references of eyes shows that up close the transition is really smooth, while from further away it naturally appears sharper. If the edge of the iris is sharp up close, then the eyes will most likely feel cartoony and unrealistic from a normal viewing distance.
Additionally, getting the iris size wrong is a common mistake I’ve seen – it makes a huge difference to your character’s appearance and can tip the scales into the uncanny valley. Measure the proportion of the eye that is taken up by the iris in your reference images and replicate that in your texture maps. The more you observe and study the real world, the more it will reflect in the quality of your work.
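To make that measurement concrete, here is a minimal sketch of turning reference measurements into a texture scale factor. The pixel values are made-up placeholders for illustration, not data from the article:

```python
# Hypothetical measurements taken off a reference photo, in pixels.
iris_diameter_px = 230.0
eye_opening_width_px = 560.0  # corner-to-corner width of the visible eye

reference_ratio = iris_diameter_px / eye_opening_width_px
print(f"iris takes up {reference_ratio:.0%} of the visible eye")

# Suppose the iris in the current texture covers a different fraction
# of the same span; this is the uniform scale that brings it in line.
current_texture_ratio = 0.35
scale_factor = reference_ratio / current_texture_ratio
print(f"scale the iris in the texture by {scale_factor:.2f}x")
```

The same arithmetic works for any facial proportion you want to carry over from reference into texture or model space.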
Building the Skin
My methods for creating realistic skin have evolved over time, but one thing has stayed pretty constant: I’ve always used Texturing XYZ’s scanned data maps. My most recent approach uses their multi-channel pack with a channel-packing method. This involved adding a displacement, specular, and haemoglobin map into the Red, Green, and Blue channels of a single file to produce a weird, green-looking map that would then be projected onto the model in Mari. This was then separated back out and applied accordingly. You can discover this workflow in depth on the Texturing XYZ site.
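As a rough sketch of the channel-packing idea outside Mari, using NumPy arrays as stand-ins for the three texture maps (the function names are my own, not part of the Texturing XYZ workflow):

```python
import numpy as np

def pack_maps(displacement, specular, hemoglobin):
    """Pack three grayscale maps into the R, G, and B channels of one
    RGB image. Inputs are 2-D float arrays of identical resolution."""
    assert displacement.shape == specular.shape == hemoglobin.shape
    return np.stack([displacement, specular, hemoglobin], axis=-1)

def unpack_maps(packed):
    """Separate a packed RGB image back into its three grayscale maps."""
    return packed[..., 0], packed[..., 1], packed[..., 2]

# Round trip: pack the maps, then unpack and recover the originals.
disp = np.random.rand(8, 8)
spec = np.random.rand(8, 8)
hemo = np.random.rand(8, 8)
packed = pack_maps(disp, spec, hemo)
d, s, h = unpack_maps(packed)
assert np.array_equal(d, disp) and np.array_equal(s, spec) and np.array_equal(h, hemo)
```

Packing like this means a single projection in Mari carries all three maps at once, which is why the intermediate file looks green and strange: each channel is an unrelated grayscale signal.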
Furthermore, shading skin in Arnold can be a tough feat: it involves painting additional texture maps such as subsurface weight, specular roughness, and even coat weight and roughness maps for representing the oilier zones of the face. Look-dev is a process that requires a lot of patience and experimentation. I like to adopt an efficient Photoshop-to-Maya workflow, whereby I can make a small change to a texture map in Photoshop, save it, and refresh the Arnold renderer to see the updated result.
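That save-and-refresh loop can be sketched as a simple file watcher. This is a generic Python sketch, not part of the article's setup: in practice the callback would trigger whatever refresh your environment uses (for example, restarting an Arnold IPR session in Maya):

```python
import os
import time

def watch_texture(path, on_change, poll_seconds=1.0, max_polls=None):
    """Poll a texture file's modification time and invoke on_change(path)
    each time the file is re-saved. max_polls bounds the loop for
    demonstration; a real session would run until cancelled."""
    last_mtime = os.path.getmtime(path)
    polls = 0
    while max_polls is None or polls < max_polls:
        time.sleep(poll_seconds)
        polls += 1
        mtime = os.path.getmtime(path)
        if mtime != last_mtime:
            last_mtime = mtime
            on_change(path)
```

Polling the modification time is the simplest portable approach; OS-level file notifications would react faster but are platform-specific.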
To build the final render, I usually use a lighting scenario as a reference, such as a screenshot from a film. For the Eli render on my ArtStation, I used a Star Trek poster as a reference and tried to recreate the light set-up using area lights and HDRI lights in Arnold. Lighting is definitely one of the most important aspects of any portrait or scene since it can either make or break your work. I’m always watching demonstrations on YouTube on how photographers set up lights to light a subject and how different rigs create different moods. Panocapture has some great HDRIs that can instantly situate your character in a real-world environment – this is also a great way to test your materials. As soon as I started doing this, I really began to push skin shading to the next level, since it forces you to fine-tune values and repaint maps, so the light reacts with the subject’s skin more accurately.
Additionally, I always composite my final renders in Nuke, grading and color-correcting them along with the AOV passes, such as specular or subsurface scattering.
My advice to all aspiring character artists is to practice, be patient, and persevere. Also, observe how other artists create their pieces – the techniques they use, etc. – and learn from this. Experiment with your own techniques too; the fun comes from the learning.
Here’s an image of where I was 2 years ago and where my work stands now. This is nothing more than hard work. If I can do it, anyone can.