Coss Mousikides shared the details of the Synthetic project made with Maya, XGen, and Arnold: character production, hair, skin, clothes, lighting.
I originally come from Athens, Greece, and I have been a Character Artist in the game industry since 2013. I took my first career steps after graduating from Staffordshire University, UK, where I studied Digital Film and 3D Animation Technology.
81st North Startup
We have recently launched 81st North together with my long-term colleague Madina Chionidi and a number of other very talented up-and-coming artists. As a fairly fresh startup, we aim for specialized character work and have so far collaborated with a variety of industries, including video games, illustration, and even fashion.
We provide high- and low-poly 3D character modeling, character rigging, 3D illustration, and realistic cloth simulation, and we will be expanding into concept art soon.
I would love to discuss our current project, as we as a team are stepping into rather uncharted territory involving cloth simulation and AR integration. Unfortunately, it is difficult to go into detail due to the non-disclosure agreements we have signed with our clients.
At the moment, we are working hard to launch our official website which will showcase our work.
It’s quite common nowadays to find pieces that demonstrate the technical abilities of artists, and the bar is undeniably very high. However, I find there is a lack of storytelling in simple T-posed meshes or portrait close-ups, and I wanted to see what I could do to counter that trend. The project was intended as a promotion for 81st North, and the goal was to explore how a technically complex piece can be viewed in the context of an emotional cinematic scene.
I was interested to present not just the characters themselves but also the situation they find themselves in. Poses, expressions, compositing, and camera angles are great storytelling vehicles and I find these elements are often overlooked by 3D Artists.
Some technical parts which I personally wanted to test, such as hair creation and liquid/cloth simulation, were especially challenging, as these fields are unique in their own right and require a lot of trial and error to work convincingly.
RealFlow was used to simulate liquid on one of the characters:
Once the proportions and basic forms were satisfactory, I proceeded to sculpt the facial expressions accordingly and pose the characters.
As there were no scans involved, I relied heavily on photos of similar real-life situations (grief for the loss of loved ones, intense sadness, etc.). This is particularly important for the sculpting of the expressions, as we are hardwired to recognize nuances of these emotions on human faces. Small adjustments, such as how far up the chin muscles are strained or the angle of the eyebrows, can make or break the believability.
All the hair was created in XGen (Maya) which is a fantastic tool.
I personally prefer to place individual guides by hand instead of grooming splines because it allows for more nuanced control. The woman’s entire hair system alone has 5 subsystems (called descriptions in XGen).
Each description has its own stack of modifiers to clump, randomize length and add random low and high-frequency noise to the strands. Almost every mid to long hair description has these modifiers but with different setting variations.
I often found myself relying on black-and-white clump masks to control both the amount and the location of the clumping. This is where noise expressions plugged into the clump parameter do a great job.
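XGen's clump masks are driven by expressions inside Maya, which can't run standalone, but the underlying idea is simple. Here is a rough Python sketch of it (all function names, frequencies, and thresholds are my own hypothetical stand-ins, not the author's actual expressions): a thresholded noise produces a black-and-white mask that switches clumping on and off across the scalp.

```python
import math

def value_noise_1d(x, seed=0):
    """Cheap 1D value noise: hash integer lattice points, smoothly interpolate."""
    def hash01(i):
        # Deterministic pseudo-random value in [0, 1] for integer i.
        n = (i * 374761393 + seed * 668265263) & 0xFFFFFFFF
        n = ((n ^ (n >> 13)) * 1274126177) & 0xFFFFFFFF
        return (n & 0xFFFF) / 0xFFFF
    i = math.floor(x)
    t = x - i
    t = t * t * (3 - 2 * t)  # smoothstep fade between lattice values
    return hash01(i) * (1 - t) + hash01(i + 1) * t

def clump_mask(u, frequency=8.0, threshold=0.5):
    """Black-and-white mask: 1.0 where clumping applies, 0.0 where it doesn't."""
    return 1.0 if value_noise_1d(u * frequency) > threshold else 0.0

# Sample the mask along a scalp parameter, like an expression evaluated per strand.
samples = [clump_mask(u / 100.0) for u in range(100)]
print(sum(samples), "of", len(samples), "strands get full clump weight")
```

Raising `frequency` breaks the mask into smaller on/off patches; moving `threshold` shifts the overall proportion of clumped strands.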
A trick I learned the hard way, but which might save other XGen users some time: to control hair parting more efficiently, it is better to put each parting direction on a separate description instead of trying to paint region masks. I find that region masks do not provide enough fine control in such cases.
For the eyebrows, I first textured the head and figured out where they should be positioned to accommodate the expression. I then used XGen to place guides one by one, using the previously projected eye texture as a reference. To have each eyebrow hair strand appear exactly at the placed location, the Generate Primitives setting in XGen’s Primitives tab was set to “at guide locations”.
For the shading of the hair of the child, I used XGen’s default hairPhysicalShader with slight color tweaks.
To achieve a more diverse look for the female character’s hair, I used Anders Langlands’ alHair shader and controlled the hair melanin variations through ramps. There is a fantastic tutorial by Arvid Schneider on how to achieve this effect (see below).
The skin setup is relatively basic: it primarily uses one color map and a displacement map exported from ZBrush. Anders Langlands’ alSurface shader is used on all the skin parts, and alRemapColor and multiplyDivide nodes are applied to the color texture to produce the epidermal, dermal, and subcutaneous layers. In this sequence, they are fed into the first, second, and third depth levels of the SSS section of the shader. A remapValue node was used to tweak the contrast of the roughness map so that it works better with the lighting. To break up the skin specularity a bit further, a tileable skin normal map was added with a “Repeat UV” setting of 23.
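The interview doesn't give the actual node settings, but the idea of deriving three SSS layer colors from one albedo map by remapping and multiplying can be sketched in plain Python. The gain/gamma values and the sample color below are made-up placeholders, not the author's, and serve only to show the direction of the adjustment:

```python
def remap_color(rgb, gain=(1, 1, 1), gamma=(1, 1, 1)):
    """Rough stand-in for a remap/multiply node: per-channel gamma, then gain."""
    return tuple(min(1.0, (c ** g) * k) for c, g, k in zip(rgb, gamma, gain))

base = (0.78, 0.57, 0.45)  # hypothetical albedo sample, linear RGB

# The shallow layer keeps the map mostly as-is; deeper layers shift toward red,
# mimicking how light reddens as it scatters deeper through flesh.
epidermal    = remap_color(base, gain=(1.00, 0.95, 0.90))
dermal       = remap_color(base, gain=(1.00, 0.75, 0.60), gamma=(1.0, 1.2, 1.2))
subcutaneous = remap_color(base, gain=(1.00, 0.45, 0.35), gamma=(1.0, 1.5, 1.5))

print(epidermal, dermal, subcutaneous)
```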
SSS tends to wash out a lot of the small color variations of the pores and freckles. In order to combat this, the initial color map was desaturated, high passed and blended onto the colormap using the soft light blending mode. This was done in Photoshop.
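The high-pass and soft-light steps are standard image math, so the same operation can be sketched outside Photoshop. Below is a minimal Python version on a single greyscale scanline, using the commonly documented (W3C compositing) soft-light formula; the blur radius and sample data are arbitrary illustrations:

```python
def box_blur(vals, radius=2):
    """Simple box blur with edge clamping."""
    n = len(vals)
    out = []
    for i in range(n):
        window = [vals[max(0, min(n - 1, j))] for j in range(i - radius, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out

def high_pass(vals, radius=2):
    """High pass = original minus blur, recentered at mid-grey."""
    blurred = box_blur(vals, radius)
    return [max(0.0, min(1.0, v - b + 0.5)) for v, b in zip(vals, blurred)]

def soft_light(base, blend):
    """Soft-light blend per the W3C compositing spec."""
    out = []
    for a, b in zip(base, blend):
        if b <= 0.5:
            out.append(a - (1 - 2 * b) * a * (1 - a))
        else:
            d = ((16 * a - 12) * a + 4) * a if a <= 0.25 else a ** 0.5
            out.append(a + (2 * b - 1) * (d - a))
    return out

# One scanline of a color map: pore-scale detail riding on a smooth gradient.
scanline = [0.5 + 0.002 * i + (0.08 if i % 7 == 0 else 0.0) for i in range(64)]
detail   = high_pass(scanline)            # desaturation is a no-op on greyscale
result   = soft_light(scanline, detail)   # detail re-applied on top of the base
```

Mid-grey areas of the high-pass layer leave the base untouched, so only the pore and freckle detail is pushed back into the washed-out color map.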
To strengthen the translucency effect of the lips and ears, I used a variation of the final render, which can be produced in Photoshop by taking the final 32-bit render and converting it to 16-bit with the method set to “Local Adaptation” and the settings left at default.
This produces an overly contrasty image with vibrant colors, which can then be selectively masked onto the final render to intensify the translucency effect. Incidentally, the same trick can be used to make the skin details “pop” even more.
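Photoshop's Local Adaptation algorithm is not public, so the sketch below substitutes a simple Reinhard-style compression plus a gamma lift as a stand-in for producing the "overly contrasty" variant, then masks it onto the base render. All values and the mask are illustrative, not from the actual renders:

```python
def tone_map_contrasty(hdr, exposure=1.0, gamma=0.7):
    """Stand-in for an aggressive HDR->LDR conversion: Reinhard-style
    highlight compression followed by a gamma that lifts the mids."""
    out = []
    for v in hdr:
        v *= exposure
        v = v / (1.0 + v)          # compress highlights into [0, 1)
        out.append(v ** gamma)     # lift mids for a punchier look
    return out

def masked_blend(base, punchy, mask):
    """Selectively mask the contrasty variant onto the base render."""
    return [b * (1 - m) + p * m for b, p, m in zip(base, punchy, mask)]

hdr_row  = [0.1, 0.4, 1.5, 6.0]            # linear HDR luminance samples
base_row = [min(1.0, v) for v in hdr_row]  # naively clipped display version
punchy   = tone_map_contrasty(hdr_row)
mask     = [0.0, 0.0, 1.0, 1.0]            # affect only the lips/ears region
final    = masked_blend(base_row, punchy, mask)
```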
All the clothing bases were created in Marvelous Designer, then thickened in Maya and detailed in ZBrush.
Every piece of clothing was initially constructed on each character’s T-posed mesh. The final posed mesh was then imported into MD so the cloth could be simulated into its final position. This can be done via File -> Import -> OBJ -> Load as morph target.
I have the somewhat unorthodox habit of initially testing the lighting in the Maya viewport to get a feel for the scene.
With AA turned on and shadows enabled, the viewport quality is good enough to gauge which type of lighting works for specific camera angles.
I used Arnold (mtoA 1.3) to render the final shots. The lighting setup is a blend of HDRI, Area lights and aiPhotometric lights. In addition, a text mesh was used with Arnold’s mesh_light turned on in order to simulate the red neon light that is seen next to the female character.
The HDRI was used both to provide a nighttime lighting base for the rest of the setup and to serve as a backdrop. Small area lights were used to fake the backdrop’s color reflections on the pavement and to illuminate the wall. The most fun part was trying different .ies profiles to achieve a believable streetlight effect that would serve as the primary illumination of the characters.
The .ies profiles are files used by Arnold’s aiPhotometric lights to simulate real-life light distribution. The photometric data is accurate because it is based on the lamp manufacturer’s actual specifications, and it is an easy way to get realistic scattering and falloff effects.
There are plenty of .ies databases available online but I got mine from here.
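To see what is actually inside such a profile, here is a minimal Python reader for the IESNA LM-63 layout that aiPhotometric lights consume (TILT=NONE case only; the sample data below is invented for illustration, not from a real lamp):

```python
def parse_ies(text):
    """Minimal reader for an IESNA LM-63 photometric file (TILT=NONE only).
    Returns vertical angles, horizontal angles, and the candela table."""
    lines = text.strip().splitlines()
    # Skip keyword/header lines until the TILT line.
    i = next(k for k, ln in enumerate(lines) if ln.startswith("TILT"))
    if lines[i].strip() != "TILT=NONE":
        raise ValueError("only TILT=NONE files are handled in this sketch")
    nums = [float(t) for ln in lines[i + 1:] for t in ln.split()]
    n_vert, n_horiz = int(nums[3]), int(nums[4])
    # Tokens 0-9: lamp line; 10-12: ballast line; angles and candela follow.
    vert    = nums[13:13 + n_vert]
    horiz   = nums[13 + n_vert:13 + n_vert + n_horiz]
    candela = nums[13 + n_vert + n_horiz:]
    assert len(candela) == n_vert * n_horiz, "truncated candela table"
    return vert, horiz, candela

SAMPLE = """IESNA:LM-63-2002
[MANUFAC] example street lamp
TILT=NONE
1 1000 1 3 1 1 2 0.0 0.0 0.0
1.0 1.0 100
0 45 90
0
200 150 50
"""
vert, horiz, candela = parse_ies(SAMPLE)
print(vert, horiz, candela)
```

The candela grid (intensity per vertical/horizontal angle pair) is what gives these lights their characteristic scatter and falloff without any manual ramp work.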
The official MtoA documentation provides some examples of .ies profiles and how they can be used in a scene:
Rendering in Arnold
The main reason for choosing Arnold over other renderers for this project was that, at the time of production, its hair rendering seemed more realistic and its native integration with Maya was stable.
At the end of the day, I am of the opinion that it comes down to visual output preference and ease of use. These factors can vary widely among artists and depend on the style that each artist adheres to.