Alexandre Collonge gave a detailed talk on how he created and rendered his 3D Hellboy.
Hey there, my name is Alexandre Collonge, I’m 35 and I’m working as a 3D teacher & pedagogy coordinator at «Ecoles Aries» in Lyon, a school of graphic design, 3D design, 3D animation and motion design in France (we have openings for foreign students and/or exchanges).
I’m mostly interested in character art & design, digital sculpting, modeling, shading, lighting and rigging, aiming to bring life to personal projects or to give my own version of an already existing universe (like in this particular case).
At first, all of my models and artworks are created to improve my skills, develop new (or test existing) techniques and then share the results with my students, as I usually use those to explain my workflow and techniques during my courses. Of course, I won’t lie to you, I love sharing my work on the internet and, like every artist in the world, I always hope someone will put a « like » on it. It’s a small thing, but every artist has a big ego, so we are always happy when someone likes what we do.
About my interest in 3D character art… It came naturally, once I started studying art. Being a huge fan of comics, animation, movies, etc., I was always focusing on the characters: their backgrounds, shapes, designs, accessories. So I kept working on this specialization to improve more and more, hoping someday it would pay off! But to be honest, the comic « Invader Zim » by Jhonen Vasquez was the real starting point.
You can follow me on social media here:
There are a lot of things to take into account. It’s a bit like a shopping list: you start with an analysis of the overall style, then shapes, colors, lighting and finally this little thing that works in 2D and that is extremely difficult to recreate in 3D… This is the part where you’ll be spending the most time, mostly in the early stages of blocking and posing.
There are a few major rules to respect:
1 – Take a lot of time to create boards of references for the subject you’re working on and analyse them.
2 – Focus on the posing and shapes of the original artwork and work fast with primitives and blocking.
3 – Do not be afraid to start again (early) and to criticize your own work and workflow.
Left and middle pictures: Hellboy – Mike Mignola / Dark Horse Comics /
Right picture: Right hand of doom from Brashsculptor
Combining Zbrush and Maya
One of the most important things here is to use the best software for each situation. To be honest, since I’m a teacher, I don’t have that much time to work on personal stuff, so I have to be fast and efficient in what I do… So I tend to teach quick and comprehensive workflows between software, based on my own experience.
The second thing is to create shortcuts and a personal UI to get faster, reduce the number of clicks and quickly access your favorite tools. For 90% of my models, I start directly in Zbrush through blocking, allowing me to get the major shapes without having to bother about topology or anything technical.
Below you’ll see some video examples of an upcoming artwork, the first one being about the head: I start with a sphere, transform it into a Dynamesh and start cutting and sculpting a global shape. After a while, I’ll start adding generic face details to get « something-kind-of-human »… You have to keep in mind that your model will be kind of ugly for a while, it’s normal. It then goes on until I clean up the face and get everything the way I want it. I finish with the hair, based on a roughly sculpted extract from the head on which I’ll place hair curves (I use the Sakaki Hair brush a lot).
For the rest of the body, I don’t really use Zspheres, I’m not a huge fan of those. Instead, I work with base models composed of primitives I placed and sculpted roughly (male and female).
Most of the time, I’ll be using Zbrush for organic and more detail-demanding objects. On the other hand, I’ll be using Maya for assets, objects, cloth simulations, etc. Keep in mind that I’m using the software I’m comfortable with, which is the most important part of a workflow: it will allow you to work faster.
When working with Maya and Zbrush, I try to anticipate what I’ll need to do later (retopology or not, UVs, projections, baking, etc.), so I work a lot with «Polygroups». These allow me to create easy posing, retopology and base UV unwrapping that I’ll improve directly in Maya. You can see an example video below for the posing part.
After the posing, base topology and UVs, I can start refining the sculpt.
A word about Topology:
Retopology is time-consuming by default, so I only do manual topology if I need a really clean mesh on specific parts OR if it’s destined to be animated. Otherwise, the solution is the ZRemesher function of Zbrush, which gives various results with the help of guides, polygroups and the « Alt » button.
Keep in mind that ZRemesher’s most common flaw is its inability to create clean loops where you really need them. It will often give you spiraling loops, which doesn’t help UV unwrapping.
About the cartoonish look in my artworks and my painting process: this is something people ask me about a lot, mostly regarding the « Substance » workflow I’m using to get cartoon or «hand-painted» textures. To be really honest, I almost don’t paint anything.
The first thing is to get the maximum detail in Zbrush on a really highly subdivided model. For Hellboy, in addition to traditional sculpting techniques, I also created my own alpha maps to project « cuts » onto my mesh. The easiest way is to use Zbrush to sculpt whatever you want on a 3D plane using Polygroups, render it inside Zbrush and extract depth and/or occlusion maps to use as alphas on your mesh. You can see the process below:
Final sculpt clay Render.
With the sculpt finalized, the idea is to project the detail information onto a mesh with a clean topology and unwrapped UVs, either in Zbrush itself or in Substance Painter, xNormal, etc. To speed things up and avoid crashes in your favorite software, you’d better decimate the mesh (with the Decimation Master plugin) to keep the details but reduce the number of points and polygons. Be careful not to decimate the model too much, otherwise you’ll get visible triangles in your projections.
High-detail mesh on the left in Dynamesh, decimated version on the right for an easier projection.
Retopology was done in TopoGun, but I usually do this phase directly in Maya.
The next step of my technique is based on Photoshop, using a set of maps I get from the previous 3D software. The base maps are a world-space normal map, a displacement map and a color map. You can add occlusion, curvature or any map you like to the recipe, it will also work!
The point is to combine those in Photoshop to obtain a «pre-lighted» texture by isolating the green channel of the world-space normal map, which by default is the Y-axis (so it makes sense as top-down lighting), using the «screen» or «overlay» («incrustation» in French Photoshop) layer modes. You can then do the same with the other maps by playing with the layer modes and opacity.
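If you like to see the math behind it, the «screen» blend can be sketched in a few lines. This is a toy example with made-up values, not the exact Photoshop pipeline:

```python
import numpy as np

# Photoshop "screen" blend: invert, multiply, invert back,
# so the result is always at least as bright as either input.
def screen(base, layer):
    return 1.0 - (1.0 - base) * (1.0 - layer)

# Hypothetical 2x2 grayscale maps, values in 0..1
color_map = np.array([[0.5, 0.2],
                      [0.8, 0.1]])
# Green channel of a world-space normal map:
# close to 1 where the surface faces up (+Y), close to 0 facing down
normal_green = np.array([[1.0, 0.5],
                         [0.2, 0.9]])

prelit = screen(color_map, normal_green)  # upward-facing areas get brightened
```

Upward-facing pixels (green close to 1) are pushed toward white, which is exactly the «light from above» look described here.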
This is an example of the technique, extracted from my «Fruit collector» model.
Full render of the textures with some tweaks done in Substance Painter (mainly roughness).
This is a basic technique I always teach to my students so they can get a really cool-looking cartoon map really easily. So yes, Photoshop is still a central piece of software for me!
Those maps are also my starting point in Substance Painter, as I use them in a «fill layer» and start working over those. This leads me to your next question, where I’ll be able to explain more of my process.
About Substance Painter: it is a really «cool trap» for students to get «crazy-PBR-looking models»… looking like every other physically based model coming out of this software, if you’re not wise enough. It’s not a bad thing, those models look really cool most of the time, but you can use this software in other ways.
- The first rule is to use the provided shaders (substances or smart materials) only as a « base » or with a « deconstructive logic »: analyze them and recreate them your own way… never as a « drop-on ». Every pro will be able to spot a «pre-made» shader at a glance. Well, you can always use them directly on a smaller object or a small texture part to save time, but that’s all.
- The second rule is to use your own textures, prepared in Photoshop (like my technique) or through a Substance Designer workflow to get procedural shaders directly.
- The third rule is, from my point of view: never use paint layers, always fill layers. Paint layers « glue » every channel of information at the moment you paint, and you can’t change them afterwards. A fill layer is just a color/texture-filled layer… You can hide it with a black mask and then paint on the mask to reveal what you want. The point is that you will be able to change any value in the fill layer, add a new channel, or anything you want, anytime. This makes a big difference and is a big trap for a student or a beginner.
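The fill-layer-plus-mask idea boils down to a simple masked blend, which is why it stays non-destructive. A toy sketch with hypothetical values:

```python
import numpy as np

# A fill layer under a black mask: the fill shows only where the mask
# is painted white, and the fill value itself stays editable afterwards.
def apply_fill_layer(base, fill_value, mask):
    return base * (1.0 - mask) + fill_value * mask

base = np.full((2, 2), 0.3)              # whatever sits below the fill layer
mask = np.array([[0.0, 1.0],
                 [0.5, 0.0]])            # black mask, partially painted white
out = apply_fill_layer(base, 0.9, mask)  # change 0.9 anytime: only the mask
                                         # is "painted", never the fill itself
```

Since only the mask holds your brush strokes, swapping the fill value (or adding a channel to it) never destroys the painting, which is exactly the point of the rule above.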
- The final rule is to be creative and consider this software as a tool, not a magical solution. As of right now, Substance Painter is not able to create real «hand-painted» textures like 3D-Coat, but with the help of the previous texture technique I talked about and the use of curvature masks, you’ll get really cool results that will «look like» cartoon hand-painted techniques… almost without painting anything.
This part leads me to your next question.
Graphic Book Look
For Hellboy, I didn’t use the baked-maps technique because of the original art refs. Instead, I started with a red fill layer with some roughness information from fractal noise. After that, I used a pile of layers with a darker gradient color, lava effects, a white highlight effect and finally the «dark shadow effect», created using the «MG light mask» generator on a black fill layer. I just had to orient the light the way I wanted and corrected what was wrong with a paint modifier.
Keep in mind that I had to adjust this effect for every texture set of the model, and sometimes double the effect in different directions and erase the overlapping parts. That’s what I was talking about with the «creative rule» and considering the software as a tool…
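The principle behind a directional light mask like this can be sketched as a dot product between surface normals and a light direction. This is a toy illustration of the idea (a simple Lambert-style term), not Substance Painter’s actual generator:

```python
import numpy as np

# Directional light mask: 1 where a surface normal faces the light,
# falling to 0 where it faces away.
def light_mask(normals, light_dir):
    d = np.asarray(light_dir, dtype=float)
    d = d / np.linalg.norm(d)           # normalize the light direction
    return np.clip(normals @ d, 0.0, 1.0)

# Two hypothetical world-space normals: one facing up, one facing down
normals = np.array([[0.0,  1.0, 0.0],
                    [0.0, -1.0, 0.0]])
mask = light_mask(normals, [0.0, 1.0, 0.0])  # light coming from above
```

Reorienting the light means changing `light_dir`, and «doubling the effect in different directions» amounts to combining two such masks.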
For every artwork, I work differently, with the same «base technique» but with variations and adaptations to the original art. This is the most important part when you are adapting a 2D visual into 3D, and it can be applied at every creation step of the character. That’s why I try to teach my students not to use Substance Painter only to make «cool shiny PBR-looking art».
With Sketchfab, as a real-time renderer (on the web), you need to optimize a few things like the number of vertices, not having too many separate objects, the texture size and the number of texture sets. If you have a complicated model with a lot of objects, don’t put one texture on each object: it will be slow as hell, you will have to work on each object one at a time, and you’ll end up with something like 300 textures. This rule also applies to precomputed renderers, because the more polygons, objects and textures you have, the slower everything will render.
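To see why hundreds of per-object textures hurt, here is a rough back-of-the-envelope estimate of uncompressed texture memory. The numbers and the mipmap overhead are illustrative assumptions (8 bits per channel, no GPU compression), not Sketchfab’s actual accounting:

```python
# Rough uncompressed VRAM cost of one texture in MB; a full mipmap
# chain adds about one third on top of the base level.
def texture_vram_mb(width, height, channels=4, mipmaps=True):
    size = width * height * channels      # bytes at 8 bits per channel
    if mipmaps:
        size *= 4 / 3
    return size / (1024 * 1024)

one_atlas = texture_vram_mb(4096, 4096)       # one packed 4K texture set
many_small = 300 * texture_vram_mb(512, 512)  # one small map per object
# 300 separate 512px maps cost several times the memory of a single shared
# 4K atlas, on top of the extra draw calls from all the separate materials.
```

This is why packing many parts into shared UV sets, as shown in the image below, pays off so much in real time.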
In the end, you need an organized workflow where you think about every step of production cleanly, by asking yourself if your model will be animated or not, if you’ll use a real-time or precomputed renderer, etc. Sometimes decimated models with textures can work really well (if you’re lucky), but most of the time you need to be a bit clean in how you work.
Example of UV packing with different objects and parts of the character.
It doesn’t mean you need to retopologize everything or have clean UVs everywhere, because time is also a factor to take into account (most of the time I have to «mess up» some parts of my workflow because of time, since I’m doing all of these models in my free time…). Just don’t start out without any workflow planned or you’ll crash along the way.
Be aware that video games, which are an extension of real-time renderers like Sketchfab, are even more demanding in terms of optimization.
For renderers, I’m mainly using RenderMan because it’s user-friendly, really powerful and free for personal use, students and teachers (which is really cool). Lighting is easy and you quickly get cool renders. The fact that there is only one main shader is also a plus, and I like the idea that almost every optimization setting is in the same place.
Light rig for my Hellboy and shading tree.
You quickly get good results, even if you still need a lot of power and time to render everything. A lot of the work is also done in post-production (which I do in After Effects for the time being, but I’m moving to Nuke now). After that, I send everything to Photoshop for micro-corrections, and voilà!
A set of AOVs (render passes) I’m using for my compositing or presentations. They are not all here, but it gives you an idea!
It will be a bit depressing, but you can read as much as you want, you’ll just get a few tips here and there to add to your own recipes (sadly, that will also be the case with this interview).
I have tons of books about artists, artbooks from movies, games and animation, books about Zbrush or Maya workflows… I don’t use them that much. I do like them and read them from time to time, but it’s not where I got my techniques or experience.
The only really useful books are those about anatomy (and not all of them). 3D Total sells some great books about that, as well as really cool anatomical figures. The two best books on the subject are « Anatomy for the 3D Artist » and « Anatomy for Sculptors ».
To be honest, the thing I use the most these days is Pinterest: it allows you to build folders with every image you’ve seen that could help or inspire you, and to access them anywhere.
Anyway, the truth is far simpler: work, try, fail, start again, train, do it all over again and again. You’ll progress bit by bit, step by step. Books and tutorials are a good band-aid, but only you, or a teacher or instructor, in real life or online, can be a real help.
Never forget one thing: if you produce, you’ll improve. Don’t work only on the school productions your teacher asks for; you need to work on your own projects, it’s vital to improve your skills.