Ben Bickle detailed the modeling workflow, which is guided by TurboSquid’s StemCell requirements. In the end, you get two models: one for show and one for games. The design is fully functional and holds a lot of surprises.
I’m an Associate Producer at TurboSquid, and one of my roles is serving as a resource for artists looking to branch into Real-Time. I never appreciated how different making models for Film and Television (referred to as a DCC workflow) is from making models for Real-Time applications like games. Traditionally, artists learn and master one type of workflow, and the nuances involved in switching workflows can mean relearning and rethinking how to do even basic techniques.
Recently, the company introduced a new modeling standard called StemCell that aims to satisfy both audiences. As an artist trying to relay workflow concepts to other artists, I feel that actually putting myself through the paces of what I’m explaining to others is critical to doing my job well. Although the standard is a straightforward extension of good modeling and texturing practices most people will be familiar with, you can’t get a full appreciation for a spec like this until you actually follow it yourself.
Coming from a gamedev modeling background, these are the bullet-point changes in workflow from a normal model:
The final model must turbosmooth. This is the biggest challenge, both in change of thought process and in a literal sense. Leaving in control loops and keeping poly flow intact to hold its shape with turbosmooth means that the polycount is going to be high for the final shape.
The model needs spec/gloss and PBR Metalness workflow textures. Right now, this means texturing it twice. For this project, I used the Quixel Suite to author my metalness maps and manually converted to spec/gloss.
(For now) the model can’t have any rigging or skinning. As long as I keep things descriptively named and in a hierarchy, this isn’t that much of a problem.
A full list of line-by-line specifications can be found on the TurboSquid training site here.
So with that out of the way, let’s dig in!
When it comes to the 3D work I do in my free time, I’m more on the side of making things that don’t exist, since so many artists seem to have that whole ‘recreating things that do exist’ side covered and mastered. With the broad ‘make a thing that doesn’t exist’ mandate, I dug in.
Have you played Robo Recall? It’s an awesome game. It has awesome art. The guns in particular feel solid both in 3D and VR, and have a nice but not overstated Sci-Fi feel to them. I really want to make art at that level, so this project is a bit of a moonshot with those as a visual target.
The NAC-22, as a design, actually has roots in a much different game. Due Process (the game I’m doing environment art on) is going to have a wide assortment of excellent grounded cyberpunk firearms. In testing, one of my favorites is the NAC-11, a play on the MAC-10/11 line, which is a full auto recoil monster of a machine pistol. When the random crate drops generate one of those babies, I make sure to grab it.
One of my coworkers did a couple dozen thumbnail silhouettes that were eventually whittled down to the NAC-11. With permission, I used a mishmash of a few to make its slightly more sci-fi big brother, the NAC-22.
I did a silhouette and a really nasty photo composite using elements of a few SMGs and machine pistols, plus a few design elements from the Robo Recall guns.
With that in hand, I had enough to start modeling!
Modeling – Project Scope
Regardless of whether the model exists in the real world or not, I do try to make it mechanically plausible. Guns in particular fall under deep scrutiny as 3D models, and there is a wealth of reference to turn to in order to understand their function. I’m a fan of the Forgotten Weapons channel on YouTube; gun disassembly is fascinating to watch. Regardless of your stance on their use, firearms are precision mechanical marvels that have seen countless takes and variants from being iterated on for centuries and across cultures.
My last major weapons project at least had a basic receiver that lined up with the magazine and barrel, and enough interior for functional third-person animations – a situation where it’s open for a few frames of a fire animation, but nothing more. My gun before that was a pistol built out around the .40SW bullet. One peeve from my .40SW pistol was that when the receiver was open, you got a nice view of some back faces and mesh clipping.
Although this project wasn’t intended to be at the level of those insanely accurate Escape from Tarkov firearm models, I at least wanted it to make some physical sense. I also found some interesting assembly diagrams and parts pictures of the MAC-10.
I liked the challenge of the semi-modular designs of the guns in Fallout 4, and Robo Recall implements that to some extent with an unlockable weapon attachment system. With that in mind, I wanted to model major components of the interior and make sure the attachments could be removed, but stop short of every spring and guide rod a normal weapon has. I’m not a weaponsmith, after all!
Modeling – Blockout to Done & StemCell Considerations
With concept in hand, I started modeling the weapon, starting with the 9mm round, blockout, and building detail out.
Even at early blockout, I found that the gun without the compensator attached looked quite compelling, and later I found that the inner compensator block had a better silhouette than the outer block. Given the modular nature of the design, I didn’t really lose work with this shift in focus. I modeled and textured for all three variants, and did most of the promos using the inner block.
The first major workflow change for StemCell came at this stage. In order to keep the model subdivision-ready, I needed to build control loops into the final mesh, something that never makes it to production for a real-time model. In game development, the general rule of thumb for using subdiv & control loops is that the high poly only exists as a source to bake from – it can generally have some pretty nasty geometry or even be sculpted, so long as it bakes well to a low poly. This focus on cleanliness meant that I essentially created a modern ‘midpoly’ model, in that all smoothing is controlled by a single smoothing group, with custom edge weighting on the base geometry to retain the smoothing you’d expect from a high poly. This technique was used extensively for the hard surface models in Alien: Isolation, and, with the change in hardware requirements, is finding its way into more and more workflows, albeit not at this level of detail.
The model as a whole was done traditionally in 3DS Max, but I did use ZBrush to tweak and clean up the topology of the handle. Being locked into a workflow that required subdivision surfaces, I didn’t want to lock my high poly into a sculpt and bake it down.
Modeling – Unwrapping UVs
Once the mesh was done, it was time to unwrap the model. I still like to rely on old-school unwrapping methods, preferring to manually create and place shells. I knew from the onset that I wouldn’t be placing every object in one UV space, so I decided to split it first by removable attachments and second by visibility. For example, the main body components that would never be toggled are on two UV texture spaces, but the second texture is mapped entirely to interior components. This means that for most use cases, the interior texture can be downscaled or completely omitted without impacting the available angles for first- or third-person animations.
For the most part, unwrapping was straightforward; it just involved more faces. Because I usually work with very low poly models, I’ve developed a workflow where I split all edges and individually relax faces to eliminate stretching. I then edge stitch them back together face by face, creating complex islands with no (or only intentional) stretching that are optimized for post work & decaling in Photoshop. Here, I had to be a bit more careful with what I split up, or it would have taken me ages to stitch everything back together.
I also made sure that I split my islands based off of theoretical smoothing groups – if I had a low poly and were baking normals, these splits would ensure that there would be no visible edge artifacting caused by baked normals.
The render scale of each UV was decided by the relative pixel density, with the exception of interior components. Despite being spread out over 5 textures, I was quite happy with the cohesive density I ended up with – 4k main texture, 2k compensator, 2k magazine, 2k interior, and 1k wood grip handle.
Unwrapping for global UV density
Modeling – Balancing Details
Another balancing act came with the fine detail. Although the larger shapes obviously turbosmoothed well and would provide those desired soft edges for offline rendering, I still leveraged floating geo to bake down finer details like screws and holes I wasn’t planning to model in. With normals and parallax for real-time and texture displacement for DCC, I could still retain the micro-detail in the end renderer.
Even on the high poly, these details are only expressed in texture, which means that not all of them have to be modeled. Admittedly, I’m not fast enough at ZBrush to throw the model in that app and stamp alphas in, so for the more complex shapes and things that needed to be geometrically accurate (matching screws and holes), I modeled some objects in Max and moved them into place. I also modeled the detail on the bolt grip and magazine release, and added threading for the barrel and upper receiver using a tightly coiled 3DS Max ‘spring’ path converted to geometry.
For smaller details, I went the oldest of old-fashioned routes – shapes, paths, and text in Photoshop, placed after I completed the UV & bake. The bottom line is that, for the most part, I chose whichever technique offered the path of least resistance for each detail.
Another decision I made at this point was to create a true ‘low-poly’ model to try and hit the cost budgets I’d normally have if I were purely making this for real-time. The challenge was optimizing the mesh without damaging the UVs, which can be a bit of a nightmare. It feels very similar to a game of Jenga, where you’re slowly removing blocks while hoping the structure won’t fall. Most of the geo I removed was straightforward; holding edges could simply be selected and deleted. Things got trickier where I’d end up with non-silhouette edges that could be collapsed into other edges. Normally, if this were pre-UV, I’d target or group weld vertices, delete partial loops, and create cuts to split up n-gons, but with UVs in place, welding (or even translating) vertices may or may not break your UVs, regardless of whether or not you’ve told the modifier to preserve them. Most of the edge removal came down to creating a cut to the nearest silhouette edge and deleting the now-unneeded edges. It was a slow and tedious process, but at the end of the day, I had two versions of the mesh that shared textures. Better yet, I already had the UV islands split by smoothing groups on the low poly version. That said, decimation tools like Simplygon make this more of a theoretical problem than a real issue.
Texturing – Baking and Prep
A project with this many separate subobjects can get to be very difficult to organize for a bake. My last project had about this many subobjects, and that took a solid weekend – about 12 hours – to explode, bake, splice, and clean. That was before Marmoset Toolbag 3’s texture baking and bake groups became a thing, though. This is the first ‘real big’ bake I’ve done since I adopted Marmoset’s baker. Instead of ‘bake day’ being a mindless roadblock requiring a ballet of synchronized subobject movement that ended with fingers crossed and potentially hours wasted on mistakes, I was able to throw the two versions of the entire model into one file, split my bake groups out in-app, and let it run. Even working across 5 textures, everything took a few hours at most, and generated consistent results that I was able to iterate on in real time when there were minor issues.
My initial bakes didn’t have any floating geo details either. I did have to redo the set for those objects that were affected, but even that took minutes since all of my settings were saved from the previous bakes.
With my baking done, I finished out my pre-material base textures with details made in Photoshop. This is an older process that still proves very effective – make a shape or text expressed as a heightmap, throw the custom heightmap into a normal generator, and merge the generated normals.
I’ve also refined this process since the last time I used it to get more accurate results in my texture generator suite. For textured details to look convincing as geometry, they need to show up in four maps:
The Height map: This is where I start – it’s just a greyscale map of details that I can collapse and feed to CrazyBump. I also use it + the cleaned heightmap from the highpoly + floating geo bakes for a final heightmap that can be used for parallax or displacement.
The Normal map: Once I have a heightmap, I feed it to CrazyBump (although, if the heightmap is well made, you can use xNormal, nDo, the NVIDIA Photoshop plugin, or a bevy of other options). I prefer CrazyBump because I can tweak the curvature and the broader effect it has on the surface in a way that I know quite well. I did find out in this project that CrazyBump caps its output at 2k and will just upscale to 4k if you export at that resolution, so for the main texture, I was limited to using xNormal’s plugin. Once I get the normal how I like it, I bring it back into Photoshop and tweak the blue channel for a hard light mix.
Tip: if you ever want to combine two normal maps, you need to do a bit more than just selecting one and doing an overlay or hard light on the other – the blue channel of the overlaying map needs to be darkened by half to make the neutral color true grey (#808080). Otherwise, you’ll lose the blue channel data, which some 3D packages rely on for additional tangent offset data.
The AO map: With the height and normals out of the way, I also bake the shapes into the AO map. You could take your generated normal map and use a normal -> AO converter to generate the AO, but I prefer using drop and inner shadow controls in Photoshop since I already have the shapes as clean layers. It gives me an extra degree of control in defining the look of the shapes.
The Curvature map: This last map is incredibly important to all modern texture generators, as it essentially tells the tool where the hard and soft edges and cavities are. These are used for things like dirt buildup and edge scratches. For this project, because I was using CrazyBump, I switched to its displacement tab, cranked ‘highlight edges’ to max, and adjusted the brightness until I got near true grey. From there, I brought it into Photoshop, equalized the neutral grey to #808080, and did a hard light mix on my baked curvature from Marmoset to get my final curvature.
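To make two of these steps concrete, here’s a minimal numpy sketch – my own illustration, not part of the original Photoshop/CrazyBump pipeline – of deriving a normal map from a greyscale heightmap and of the halved-blue hard light combine from the tip above:

```python
import numpy as np

def height_to_normal(height, strength=1.0):
    """Finite-difference height -> tangent-space normal, the basic idea
    CrazyBump/xNormal-style generators build on. `height` is an HxW float
    array in 0..1; returns HxWx3 in 0..1 (neutral = 0.5, 0.5, 1.0).
    The green channel's sign convention varies by engine."""
    dy, dx = np.gradient(height)          # slopes along rows and columns
    n = np.stack([-dx * strength, -dy * strength, np.ones_like(height)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    return n * 0.5 + 0.5                  # remap -1..1 to 0..1

def hard_light(base, blend):
    """Photoshop-style Hard Light mix, driven by the blend layer's value."""
    return np.where(blend < 0.5,
                    2.0 * base * blend,
                    1.0 - 2.0 * (1.0 - base) * (1.0 - blend))

def combine_normals(base, detail):
    """Hard-light two normal maps after darkening the detail map's blue
    channel by half, so its neutral color becomes true grey (#808080)
    and the base map's blue channel data survives the mix."""
    detail = detail.copy()
    detail[..., 2] *= 0.5
    return np.clip(hard_light(base, detail), 0.0, 1.0)
```

A quick sanity check on the blend: a flat heightmap comes out as the neutral (128, 128, 255) normal, and combining any base map with a neutral detail map leaves the base unchanged.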
By taking the time to add textured detail to all of my baked maps, generating ‘modeled’ details in Photoshop is as effective as actually modeling them into the highpoly mesh, and done in a fraction of the time. Plus, if I want to isolate these shapes on my ID map, they’re there and easily editable.
For example, the grip pattern was effortlessly generated with a series of rounded rectangle shapes in Photoshop. Instead of carving them out in a sculpting tool, I was able to edit path control points to fine-tune the length of each line, allowing me to precisely follow the grip’s shape based on lines generated from the UV map.
I’m not going to say that this workflow is definitively better or worse than modeling these details on an ultra-high-resolution mesh in a sculpting app, but it definitely allows me to leverage the experience I have with this particular workflow, even if it’s not the newest set of tricks in the book.
At this point, I also made a logo for the gun and company. Below is a sheet of concepts, mostly to hammer out what typeface I wanted to use for the stamps and decals on the weapon itself.
Texturing – Working with Quixel
With my bakes done and augmented with textured details, I was finally able to throw the model into the texture generator of my choice. I’m still a sucker for Quixel; I know it well enough to avoid its pitfalls and play to its strengths.
I did an initial material definition pass in dDo, getting the basic reflective values from their libraries. Although my final product looked much different, starting physically accurate kept my materials grounded while working my way to the subtle stylization needed to make the textures look good.
I used a variety of techniques to generate the scrapes, scratches, and grime on the gun. Starting with the results vanilla dDo generated, I cleaned up seam mismatches and signs of obvious computer generation. For the global scraping, I have a set of wide surface-scrape brushes in Photoshop that I applied over the texture map in 2D and imported into dDo as a custom mask. Other shapes that required a more human touch (like scraping from the back handle on the circular end of the gun) were drawn in manually with a thin Photoshop brush and a bit of patience.
The finish I chose is known as bluing. This can produce near-black to very blue results, and can be completely smooth or very rough depending on the technique, chemicals, and type of cloth or pad used to remove them. Bluing can also wear off during normal use, leaving unfinished metal along the sharper edges of the firearm.
Components, both interior and exterior, were aged and worn in consideration of the type of contact they would receive with normal use. The magazine and receiver both have pronounced scrape marks from constant or forceful metal-on-metal sliding. Interior components have an oil pass the exterior does not. Even if the effect is minimal, it’s important to tell a story with the textures.
Texturing – StemCell Considerations
One major push StemCell has made is toward expanding metalness into a greyscale map. It is a common misconception that the metalness map must be black and white. The resources page on texturing goes in depth on the research behind it. After working with StemCell-spec models and materials and seeing proof of expanding metalness out from the poles, I’m a believer. At the very least, metalness at 100 or 0 percent can cause slight imperfections going from package to package, and 95-5 can be more reliable across the board. It starts to fall off and go into weird non-surface territory past the 75-25 range. That said, that’s a wide gap of valid metalness values that aren’t pure white or black that really should be explored and tested.
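As a trivial illustration of pulling a map’s poles in toward that safer band – my own sketch, with the 5/95 numbers taken from the discussion above rather than from any official StemCell figure – the remap is a one-liner:

```python
import numpy as np

def soften_metalness_poles(metalness, lo=0.05, hi=0.95):
    """Linearly remap a 0..1 metalness map so pure black/white become
    5%/95%, leaving mid-grey values proportionally in place. The lo/hi
    band is an assumption based on the discussion above, not a spec value."""
    return lo + np.asarray(metalness, dtype=float) * (hi - lo)
```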
Knowing ahead of time that I would be converting from one material format to another (Metalness <-> Spec/Gloss), I made a conscious decision to start with the metalness workflow; there’s a few reasons to choose this authoring path.
There is no metalness analogue in the spec/gloss workflow. In order to convert from one to the other manually, you need to know beforehand which surfaces will end up metallic and which will be non-metallic. You can get close if you manually inspect the specular map (and as the artist, you know what the surface should be), but going from Metal -> Spec/Gloss, you can automate this process with something as simple as a Photoshop action.
Spec/Gloss is technically more powerful than metal/rough. There are surfaces you can make in a spec/gloss workflow that cannot be easily converted to metalness maps. There’s an argument to be made that this isn’t a ‘good thing’, as those surfaces don’t often have real world counterparts, but the most realistic example I can think of is in complex metallic surfaces where there may be layers of dirt or grime that will launch you into the literal grey zone on a metalness map. The closer you get to true grey on a metalness map, the less likely engine-to-engine parity is to hold. Working the other way around ensures that the surface you authored can be converted reliably, as it essentially carries less data.
Converting RT maps to DCC was easy; it’s just a few automated steps in Photoshop.
Starting out, you invert the Roughness map and save it as a Gloss map.
Next step is to generate the Colored Specular. You’ll need your BaseColor and Metalness maps:
Create a fill layer of #282828 (RGB 40/40/40)
Mask this fill layer on top of your BaseColor with the inverse of your metalness map.
Duplicate this masked fill layer and change the blend mode from ‘Normal’ to ‘Saturation’. This eliminates colored specular on dielectric surfaces that might have a slight mix.
Save out your Specular texture. If you’re a Layer Comps guy like myself, save this as a layer comp.
Using your previous working document, it’s time to make your Diffuse.
Duplicate the masked fill layer. Invert your metalness mask again (metal surfaces should be white on the mask). Change the fill color to black (#000000).
Hide the other layers you created for the Specular Map. You should see dielectric surfaces like wood and paint match the base color, and metallic surfaces like bronze and steel be black or near black. If you’re using the previously discussed ‘metalness as greyscale map’ technique, you’ll still have some minor detail and color in the metallic areas, which is much more indicative of a traditional Diffuse.
Save your Diffuse texture.
Save your BaseColor.
That’s pretty much it! Normals are universal (provided you don’t mind flipping green for some applications), as are height/displacement maps and AO. You may want to tweak your converted maps with levels, and possibly add some detail to the dielectric surfaces, but the above can easily be saved out as a Photoshop action and will give you pretty much the same visual result in engines that support both workflows, like Unity and Marmoset.
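The same conversion can be expressed outside Photoshop as well. Here’s a rough numpy sketch of the math behind the steps above – my own approximation, where a constant dielectric specular stands in for the masked #282828 fill and the Saturation-blend cleanup is folded into that flat (already colorless) value:

```python
import numpy as np

DIELECTRIC_SPEC = 40.0 / 255.0   # the #282828 fill from the steps above

def metal_rough_to_spec_gloss(base_color, metalness, roughness):
    """base_color: HxWx3, metalness/roughness: HxW, all floats in 0..1.
    Returns (diffuse, specular, gloss) per the Photoshop recipe above."""
    m = metalness[..., None]
    gloss = 1.0 - roughness                      # step 1: invert roughness
    # Metals reflect their base color; dielectrics get the flat grey specular.
    specular = base_color * m + DIELECTRIC_SPEC * (1.0 - m)
    # Dielectrics keep their base color as diffuse; metals go to black,
    # with greyscale metalness leaving some residual detail behind.
    diffuse = base_color * (1.0 - m)
    return diffuse, specular, gloss
```

With a greyscale metalness map, partially metallic pixels land between the two extremes, which matches the ‘minor detail and color in the metallic areas’ behavior described above.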
Baking on this project was a bit more complicated than the normal single endpoint since I had two valid ‘low-poly’ versions. This goes a bit beyond the spec, but I really wanted to see this through for research’s sake. The normals were authored for two workflows:
DCC: Normals retain no high poly geo data – the edges can be turbosmoothed with no edge overcompensation from a texture. The only data on the normal map is from the texture (scratches, dirt buildup, surface detail) and normals coming from the floating geo as well as textured-on details. Both of these larger normal map shapes are enhanced with displacement maps.
RT: Normals store a great deal of edge information. Bakes were generated from a highpoly (mid + two iterations of turbosmooth) and included floating geometry. Additional textured normals were overlaid in Photoshop. These normals were used to make the textures in dDo, and surface-level detail was added in the app.
To convert the RT normals to DCC, I had to isolate the baked floating geo (a bit of selection path work in Photoshop) and the textured normals (already isolated) to make an intermediate normal with no texture detail. I then took this texture and slipped it into the bottom of the Photoshop document for the normals coming from dDo. With that all said and done, the end results were quite solid, allowing me to apply specific normals to any version of the model with no fear of edge seams or incorrect smoothing.
In the above shot, the left version is fully RT – Basecolor, Metalness, Roughness, baked Normals. The right version is using the DCC workflow maps and mesh – Midpoly using converted normals, Diffuse, Specular, Gloss. Although not ‘pixel perfect’, material definition and intent is clearly preserved. If anything, the derivative spec/gloss version has more dynamic range to work with.
I used Marmoset for texture development and validation. Although I used 3Do while making the textures, I did a good bit of manual tweaking after the texture was ‘done’. I also opted to use Marmoset as the baseline for the Spec-Gloss conversion.
I wanted the final render to be something a little more unique than the average ‘gun in a dark void’ though, and wanted to express a little environmental storytelling. To that end, I modeled a few additional items for the scene that really helped make it stand out.
There’s a certain way that objects are photographed in criminal investigations, and then there are the almost ‘presentation’-like displays police like to make for the media when they have a big weapons bust. It’s almost like a trophy – you see things like zip ties forcing the weapon’s bolt open, with magazines and even individual bullets neatly lined up for additional effect. All of it usually photographed with a cheap camera and a nasty flash.
To execute this idea, I modeled a few additional props for my ‘big bust’.
I always model the ammunition the weapons take, doubly so when I want to make sure it fits where it’s supposed to. I’m slowly building a library of rounds that I should be able to reference for future projects. Separating out the bullet from the shell allowed me some options when adding details to the scene.
The following models were all made using dDo and the same general workflow as above, just with less focus on poly density and no spec-gloss conversions.
After placing some ammo in the scene, I felt I needed a bit more – I made an aged texture for the brass and modeled a plastic evidence bag around the two spent cartridges. I had a bit of fun filling out an evidence report as well.
I made use of one of my logos by making a spiral bound notebook. I took some inspiration from the MAC-10 operator’s manual and tried to match its military-esque, no nonsense visual design. I even printed out the page and scanned it back in to capture some natural printing errors.
I also needed to put it on a surface. Although a plane with a wood texture on it would have sufficed, I opted to model a simple folding table. This meant that the voxel GI had a real world surface for proper reflections. Although the effect is near imperceptible in the end render, I can imagine that I’ll get more mileage down the road with this model. I added some coffee rings and stains as well – small details like that gave me some options for interesting surface details for my backplate.
The coup de grâce was a set of zip ties to lock the bolt open and the trigger forward. I used a few splines to define the base shape and box modeled the cap, end piece, and tag. It added a nice pinch of color and some much-needed flavor to the scene. I kept the tag separate so I could get the perfect position in Toolbag.
Coming together, I made a very analytical evidence shot of all of the objects, complete with a flash as well as an aspect ratio and filmic effects matching low-quality 35mm film.
Although the evidence shots captured the look I wanted, it wasn’t great for thumbnails, and could use a more visually attractive composition and lighting.
The final shot brought in all the minor elements I did for the breakdown, but put them in a more coherent layout with better lighting and more flattering angles.
Bridging the Gap
With my key real-time renders done, I used this project to explore the idea of bringing the model into V-Ray. Coming from a background of self-taught 3D by scrapping it out in the modding scene, I’ve never had a reason to use an offline renderer for anything; the whole process of high end rendering and modeling is quite new to me. I occasionally render out quick scanline shots to preview a model during the modeling and early texturing phases, but I’ve never done finishing work in a DCC app. With this project, I had an opportunity to explore the industry standard because all the components I needed were already set up and ready as a byproduct of the workflow.
A few shots after dipping my toe into V-Ray
Having a formula for implementing the PBR textures I’d already made with V-Ray materials was incredibly helpful. Adding a Turbosmooth pass to take advantage of all of those control loops was incredibly gratifying as well – it was neat to see faceting on cylinders just disappear.
From there, it was just a question of learning the Physical Camera system and finding a few interesting angles. I’m glad I picked up photography as a hobby a few years ago – understanding how lens settings impact depth of field and perspective distortion is a critical skill in any workflow.
With the project done, I feel I’ve got a good grip on this workflow. It was a long journey that took me out of my comfort zone, but I believe that it was time well spent. I feel comfortable with the spec and could definitely work with other artists to help them understand it, and I have a nice piece of art as a byproduct. Better yet, that model can go into just about anything, if needed.
StemCell is a complex spec to meet – it has high production standards, needing to consider both mesh optimization needed for Real-Time and the end visual quality expected in photorealistic DCC products. That extra effort pays off, though. Thanks to the work done as part of the spec, I can tweak a few values and seamlessly put it in anything from VR to a full film production. At the end of the day, having a model that ‘just works’ for just about everything is worth the effort needed to make it.