Glen Fox talked about some of the techniques and tools he uses to build outstanding stylized 3D dioramas.
I’m Glen, and I’m a Senior 3D artist at Space Ape Games, which is a mobile games studio based in London, UK. I grew up and went to University in the city of Lincoln, where I studied a course called ‘Games Production’. This touched on most aspects of games development but helped me decide to specialize in 3D art.
Post-University, I spent a huge amount of my free time on websites such as Polycount, making personal artwork, and creating a portfolio. Some of my early work was horrendous, as I found my way through learning new software packages, and new workflows, but through practice and persistence I got good enough to land a job in the games industry.
Starting out, I worked in AAA console, on titles such as Killzone (Shadow Fall and Mercenary), LittleBigPlanet 2, and Ridge Racer: Unbounded. The company I was with at the time (Fireproof Games) then decided to start producing their own mobile games, so I spent the next few years working on The Room games (sadly nothing to do with Tommy Wiseau). It was a switch from long production cycles and working with big polygon/texture budgets, to short production cycles and working with stringent budgets (at the time we were developing for iPad 2/iPhone 4).
I work primarily in mobile games development, where hardware is less powerful and games are designed to be scalable to operate on many platforms. This requires tight budgets to be put in place, and we have to be mindful of where we use textures and vertices, in order to remain performant across these devices. I think creating low poly artwork comes more naturally to me because of this, and it’s something I carry over to my personal artwork. I suppose the method in which I share a lot of my artwork also encourages low poly thinking, as I use a real-time solution as opposed to renders.
Sketchfab is my go-to way of sharing most of my personal artwork nowadays. For those who don’t know what Sketchfab is, it’s a web-based 3D model viewer, which means it can be accessed on a variety of different platforms and hardware. I like to do entire 3D scenes on Sketchfab, but I still want the scenes to load fast, and run smoothly, which means that I have to use the techniques I learn from my day job to be stringent with budgets, and smart with how and where I use vertices and texture space.
In terms of my personal texture stylisation, I think it’s a case of emulating/taking inspiration from what I connect to when I’m looking at artwork myself. If you look through my Artstation likes, they’re pretty much all hand painted, low-poly models, and that’s definitely echoed in the kind of work I like to produce.
It’s also the type of texture style I want to learn and develop because I find it rich and multi-layered. I love watching people who are pros at hand-painting; layering in lighting, color, texture, and character in a single texture. I feel like I’ve only just scratched the surface, and there’s a high skill ceiling to this sort of texturing, one which I aim to reach someday.
Creating these dioramas is a very iterative process, where I spend a lot of time switching between Sketchfab and 3DS Max. They start out life as a series of simple grey meshes, to represent the major landmarks that I want to portray in the scene. These then get chiseled away and detailed, until you start seeing the final form. All this time, I’m constantly moving, adding, taking away, making sure that there’s a consistent level of detail across the whole scene, that it’s nice to view from all angles, and that there are some interesting elements no matter where the camera is pointed. I’m a simple man, and all of this is done by poly-modeling inside 3DS Max. I only really start opening up other packages such as ZBrush if I’m tackling a complex organic structure.
In tandem with the modeling, I keep iterating on the textures and lighting/mood of the scene. This means laying down colours on the larger elements in the scene as early as possible, before working colour into the smaller elements. I’ll start texturing larger scale surface details pretty early on, but won’t focus on the nitty gritty surface detailing and small textural elements until well into production. I’ve made plenty of art before where I’ve gone in with a fine-tooth comb from the start, and have either ended up with an unbalanced piece (with too much detail in compressed places), or I’ve simply ended up with too little time to maintain the same level of detail across the entire environment.
I always find achieving a nice colour palette to be the biggest and most important task when texturing a scene, and I want to nail this down as early as possible, which will help inform the colour palette for all of the smaller scale elements. I also don’t want to spend hours detailing an area, just to find out that the texture granularity/colour palette doesn’t sit correctly when the scene is viewed as a whole. In this respect, I always make sure I’m working in large brush strokes, which helps with time management too.
Here at work we always talk about the ‘squint test’, which is a good tool to use intermittently when developing, to evaluate if a scene is heading in the right direction, in terms of composition, readability, and mood. If a scene isn’t reading properly when you’re squinting at it, something must not be working, whether that’s texture value/separation, lighting, or composition.
I’ve found that composition is a hard thing to focus on when creating these things, as the user can rotate it any way they want, so there’s never a consistent viewpoint. Instead of focusing on creating a single, beautifully flowing shot, I try to keep the mesh readable from as many angles as possible, using a variety of standard practices.
- I try and keep contrast in values between horizontal/vertical surfaces to avoid walls merging into floors. (This obviously gets pretty difficult to maintain in areas with a high density of dressing)
- I try and use colours from a small band on most elements in the scene, unless I want the viewer to focus on them. The above scene contained lots of pastelly teals and reds, with small areas of yellow to highlight things.
- I use separate colour groups to separate buildings, to avoid the entire scene becoming one big blob (with partial success…). Above, the middle building has lots of warmer reds and creams, while the building to the right has far colder blues.
- I try and maintain areas of breathing space. I like to keep a high density of dressing in these scenes, but there should still be areas that aren’t just a mess of assets.
- I try and keep some space around things of great focus. A good example above is the cat sign.
For me, this is the single most fun aspect of developing for Sketchfab, and for a lot of videogames in general: the user can look around and examine all corners of your environment, rather than being constrained to viewing your work from a single predetermined angle.
In terms of units, real-world scale becomes kind of irrelevant when making my dioramas, unless I’m designing them to be viewed in VR. What matters more is maintaining a consistent scale of props in relation to one another. I keep a few proxy models of a human dotted around my scene while I’m modeling, to keep an eye on the scale.
I also try and keep an eye on the scale of details. Keeping mesh density and texel density (in layman’s terms, the sharpness/blurriness of a texture on a prop) consistent is important for producing a scene that works well as a whole, and doesn’t feel like separate assets smushed together.
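To make the texel density idea concrete, here is a minimal sketch of how you might measure it. The helper function and the example numbers are hypothetical, not from any 3DS Max or Sketchfab API; the point is simply that two props whose textures cover the same number of texels per world unit will look equally sharp side by side.

```python
# Texel density = texture pixels covered per world-space unit.
# Keeping this ratio consistent across props stops one asset from looking
# noticeably sharper or blurrier than its neighbours.

def texel_density(texture_size_px, uv_area, surface_area):
    """Average texels per linear world unit for one mesh.

    texture_size_px: width of the (square) texture in pixels
    uv_area:         total area of the mesh's UV shells in 0..1 UV space
    surface_area:    total surface area of the mesh in world units squared
    """
    texture_area_px = texture_size_px ** 2      # total texels available
    texels_used = texture_area_px * uv_area     # texels this mesh occupies
    return (texels_used / surface_area) ** 0.5  # texels per linear world unit

# Two props at the same density read as one coherent scene (made-up numbers):
crate = texel_density(1024, uv_area=0.10, surface_area=6.0)
wall = texel_density(1024, uv_area=0.40, surface_area=24.0)
print(crate, wall)  # equal: both textures look equally sharp in the scene
```

The wall uses four times the UV space of the crate but also covers four times the surface area, so the two densities match, which is the consistency being described.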
As far as performance goes, it gets a whole lot tougher, but it’s where a low poly, mobile-performant mindset comes in handy. I only model in detail where it will matter and be noticeable, focusing a lot on the silhouettes of objects. I try and bevel a lot of edges if I can, since it can soften up the edges of a mesh; and two smooth-shaded verts are as expensive as a single split-normal vertex, so as long as there isn’t a UV split, the bevel is essentially free.
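The "bevels are free" point can be sketched with a tiny vertex-counting example. This is an illustrative model of GPU behaviour, not any engine's real API: a GPU duplicates a vertex for every unique attribute combination (normal, UV island) it carries, so a hard edge already doubles its vertices.

```python
# A face corner is (position_id, normal, uv_island); the GPU needs one
# unique vertex per distinct combination of these attributes.

def gpu_vertex_count(corners):
    """Count unique GPU vertices among a list of face-corner attribute tuples."""
    return len(set(corners))

# One vertex on a hard edge: same position, two face normals -> 2 GPU verts
hard_edge_vert = [(0, "n_top", 0), (0, "n_side", 0)]

# The same edge beveled and smooth-shaded: two positions, one shared normal each
beveled_verts = [(1, "n_smooth", 0), (2, "n_smooth", 0)]

print(gpu_vertex_count(hard_edge_vert))  # 2
print(gpu_vertex_count(beveled_verts))   # 2 -- the bevel costs nothing extra
```

Both cases cost two GPU vertices, which is why swapping a hard edge for a smooth bevel softens the silhouette without adding render cost (until a UV split adds a third attribute combination).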
I also atlas map the majority of my textures, and reduce the amount of extra texture maps I need. Atlasing textures is a very powerful way to stay performant: each unique material/texture binding typically means a separate draw call, so the fewer of these the renderer needs to perform, the faster it will render.
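A toy example of the draw-call saving from atlasing, under the simplifying (and hypothetical) assumption that the renderer issues one draw call per unique texture bound:

```python
# Props that share one atlas texture can be drawn in a single call;
# props with separate textures each force a state change and a new call.

def draw_call_count(props):
    """props: list of (prop_name, texture_name) pairs; one call per unique texture."""
    return len({texture for _, texture in props})

separate = [("lantern", "lantern.png"), ("sign", "sign.png"), ("roof", "roof.png")]
atlased = [("lantern", "atlas.png"), ("sign", "atlas.png"), ("roof", "atlas.png")]

print(draw_call_count(separate))  # 3
print(draw_call_count(atlased))   # 1
```

Real batching rules are engine-specific and more subtle than this, but the direction of the saving is the same: one atlas collapses many bindings into one.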
Texture reuse also helps keep things cheaper. I actually use less of this than I could, purely because I want so much bespoke detail in my scenes. I do, however, only texture an asset or building block once if I only need to do it once (I don’t like making unnecessary work for myself).
One thing I keep in constant mind is that texture alpha is the devil when it comes to performance. I try and avoid overdraw wherever possible, and only really use alpha for smaller things that really need it, or things that are only going to have opaque things rendering below them (these are usually on the floor).
I think one of the key aspects of my dioramas is the concentration of bespoke detail. In most development scenarios, you would have a smaller amount of assets instanced around a scene, with far fewer bespoke props. This makes creating larger scenes quicker and easier, with smaller production costs, but I’m a glutton for punishment.
There are two major problems with having such a high density of bespoke content on a single texture atlas:
- Because I have so many single-use little assets in each scene, which all need separate UV shells, my texture atlas can become a massive, unreadable beast while painting in Photoshop. It gets difficult to tell which areas of the UV map relate to which assets without going back into 3DS Max to check. You can mitigate this by texturing straight onto the mesh using a package such as 3D Coat, but I far prefer painting this sort of texture in Photoshop.
- Working with a 4k PSD with large amounts of layers can also quickly become very tiresome to load/save, especially if you’re working on a 5 year old laptop.
To get around these two issues (and because it just seems like a tidier solution), I isolate small clusters of props in 3DS Max, which are textured with a far more readable UV layout. These are then projection baked into the final atlas UVs. Working in isolation like this also helps me concentrate on each prop a little better.
Projection baking props into the final atlas
In terms of texture style, I like to texture most assets by placing down a flat colour or gradient using a brush with a bit of granular texture. I then add soft highlights for edges, which is a great way to emulate beveled edge reflections, and I add any softer cavity dirt or occlusion. I then add any extra details with a nice painterly brush. I try and stick to the larger/chunkier details, rather than painting things like individual little rivets on panels; this way the overall noise is reduced when you get all the assets together. Then I add adjustment layers to the props, helping to balance the textures, and bed them a bit better in the final scene.
The majority of my dioramas to date have been simple, flat coloured or diffuse-only scenes, but I wanted more sense of material physicality with my Tokyo diorama (without spending a ton of time painting or repurposing smoothness/metalness textures). I got around this simply by having a few material presets, e.g. metal/glass/soft plastic, with smoothness/metalness values set as a constant, which I then applied to the mesh using material IDs. This gave some nice glancing reflections on things like glass, and some soft specular highlights and bloom on plastics, without breaking too much when rendered over surface details.
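The preset idea can be sketched as a simple lookup from material ID to constant PBR values. The preset names and value fields below are illustrative assumptions, not Sketchfab's actual material schema:

```python
# Instead of painting smoothness/metalness maps, each material ID gets a
# constant pair of PBR values, applied per-face via the mesh's material IDs.

PRESETS = {
    "metal": {"metalness": 1.0, "smoothness": 0.8},
    "glass": {"metalness": 0.0, "smoothness": 0.95},
    "soft_plastic": {"metalness": 0.0, "smoothness": 0.4},
    "default": {"metalness": 0.0, "smoothness": 0.1},
}

def material_for(face_material_id):
    """Map a face's material ID string to its constant PBR values."""
    return PRESETS.get(face_material_id, PRESETS["default"])

print(material_for("glass"))  # {'metalness': 0.0, 'smoothness': 0.95}
```

A handful of constants like this is far cheaper to author (and to store) than bespoke smoothness/metalness textures, at the cost of no per-pixel material variation within a preset.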
The last piece of the puzzle is the outline around the entire scene. This is done simply by duplicating the mesh, pushing the normals, then reversing them. The normals are pushed out more on the larger shapes, so larger elements such as buildings have a thicker outline. Small elements don’t have any outline, so it doesn’t become a mess of lines.
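The outline trick described above (often called an inverted hull) can be sketched in a few lines. This is a pure-Python illustration of the geometry operation; in practice it would be done in the DCC tool or engine, and the mesh data here is made up:

```python
# Duplicate the mesh, push each vertex out along its normal (further for
# bigger shapes), then flip the triangle winding so only the back faces of
# the shell are visible -- leaving a silhouette around the original mesh.

def make_outline_shell(positions, normals, faces, thickness):
    """positions/normals: lists of (x, y, z) tuples; faces: index triples."""
    pushed = [
        tuple(p + n * thickness for p, n in zip(pos, nrm))
        for pos, nrm in zip(positions, normals)
    ]
    # Reversing each triangle's winding makes the shell render inside-out.
    flipped = [(a, c, b) for (a, b, c) in faces]
    return pushed, flipped

# A single toy triangle with all normals pointing up the z axis:
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
norms = [(0.0, 0.0, 1.0)] * 3
tris = [(0, 1, 2)]

shell_verts, shell_tris = make_outline_shell(verts, norms, tris, 0.02)
print(shell_verts[0])  # (0.0, 0.0, 0.02) -- pushed 0.02 along the normal
print(shell_tris)      # [(0, 2, 1)] -- winding flipped
```

Varying `thickness` per object gives exactly the behaviour described: thick outlines on buildings, thin or none on small props.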
I’m not an animator. I’ve tried animating in the past, and I’ve failed miserably. I can get by rigging a character and posing them a bit, but it’s nothing I would want to highlight in my dioramas. I do, however, like to include at least some dynamic elements in my scenes, in order to make them feel less static. The majority of moving parts in my dioramas are hard surface elements which are simply shifted, rotated, or scaled by hand across multiple keyframes. It’s a filthy process, but it does the job.
Some organic elements will need a couple of bones such as the cat’s tail below, but this is about as complex as I get when animating.
I’ve found 3DS Max motion paths are also a super quick and easy way to animate. You can simply draw a squiggly line in your viewport, and a mesh can follow it like a path. It’s how I animated the tram, and the little Pokémon-style character.
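Under the hood, path-following boils down to sampling a point some fraction of the way along a curve. Here is a minimal sketch of that idea for a polyline (linear segments only; a real motion-path constraint would use the spline itself):

```python
import math

# Sample a point a fraction t along a polyline, measured by arc length --
# the same idea a motion-path constraint uses to move a mesh along a
# drawn spline in the viewport.

def point_on_path(points, t):
    """points: list of (x, y); t in [0, 1] as a fraction of total path length."""
    seg_lens = [math.dist(a, b) for a, b in zip(points, points[1:])]
    target = t * sum(seg_lens)
    for (a, b), length in zip(zip(points, points[1:]), seg_lens):
        if target <= length:
            f = target / length if length else 0.0
            return (a[0] + (b[0] - a[0]) * f, a[1] + (b[1] - a[1]) * f)
        target -= length
    return points[-1]

path = [(0, 0), (1, 0), (1, 1)]  # a squiggle "drawn in the viewport"
print(point_on_path(path, 0.5))  # (1.0, 0.0) -- halfway along, at the corner
```

Animating then just means evaluating this for an increasing `t` each frame, which is why a scribbled line is enough to drive a tram across the scene.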
Working with Sketchfab
Sketchfab has become a pretty powerful and intuitive platform when it comes to handling imported assets, which means that everything works pretty well by simply exporting a mesh from 3DS Max.
I try and keep my 3DS Max scene pretty tidy, which means most static meshes are attached to one another. Any props that are instanced stay separate until near the end of development, when I’m happy with them and the scene as a whole. These then get crunched down into the static mesh (I’m pretty certain Sketchfab doesn’t handle mesh instancing, so it’s no more expensive to export the mesh as a whole).
I export everything as an FBX, which means that I can have multiple UV channels (for things like lightmaps), and materials will just work in Sketchfab straight out of the box.
Sketchfab has a variety of lighting modes/solutions, and I find that I use different solutions depending on what I need from the scene, and what kind of look I’m after. I decided to use dynamic lighting on this particular scene for multiple reasons:
- It’s a super dense scene, and I didn’t want to worry about lightmap resolution.
- There are a lot of moving parts which can’t have baked-down shadows.
- I wasn’t sure on lighting setup and direction, and I wanted to decide on this by editing/viewing in realtime in Sketchfab.
The caveat is that dynamic lighting is a lot more expensive than static baked lighting (although it does mean fewer texture reads and less texture memory).
Sketchfab also has a lot of post-processing features, which are fantastic but GPU-heavy, so they can greatly slow your scene down on many devices. I try and only use the ones that really help accentuate my scene, and help set the mood. I like to add sharpness in post, to make everything nice and crisp. I play around with the contrast/exposure, and usually add bloom and (if I’m feeling flush) SSAO.
Thanks for taking the time to read this breakdown, and thanks to 80.lv for posting it! I hope it’s been helpful and it’s highlighted some of my processes a bit better.
For those that want to look a little closer at the Littlest Tokyo assets, you can download them for free from Sketchfab. Also, if you’re interested in seeing some of my other work, feel free to hit up my Artstation portfolio.