The Future Of 3D Game Development Technology

Michael Pavlovich talked about the way modern tools change game content creation and how technology will influence the game development process in the future.

Another graduate of Ringling College and a well-known 3D artist, Michael Pavlovich gave us a short interview right after his sessions during Substance Day 2016. Michael is an excellent technical expert, who was kind enough to talk about the recent jump in game visuals, modern 3D tools, and how good middleware can make your life easier in game development.

michael-pavlovich-introtozbrushpart2

Introduction

My name is Michael Pavlovich, I graduated from Ringling College in 2005 with a major in Computer Animation. The Game Art BFA program at Ringling College brings its feature film aesthetic to games and is focused on providing students with the professional artistic skills necessary to create compelling and believable interactive experiences. Before I get into my experience, it's important to have an understanding of what was happening in the games industry and development at that time.

2005 was the year the industry was ramping up on “next-gen” technology for the new consoles coming out: the PS3 and Xbox 360. The new consoles required totally new development processes. Material properties driven by texture maps were new for console games. The bump in overall computing power on the new consoles also meant more polygons, which sometimes led to a tendency to over-model in some situations. Eventually, once entire environments got into game, along with UI, AI, animation, characters, bones, etc., there would be a reevaluation of poly and texture budgets on the asset side, and if you were lucky, you hadn't over-estimated the budget you were allowed. It’s all a balancing act, with each department vying for the highest percentage of what the processor and RAM can handle for any given game situation to keep it at or above the target frame rate.


michael-pavlovich-introtozbrushpart3

 

I started at Electronic Arts, Tiburon as an “Assistant Texture Artist”; I was basically in charge of taking a model blockout, UVing it, and lining up reference photographs taken by the team on location at various stadiums across the United States to make sure the tunnels, edge walls, electrical boxes and more had the appropriate “look.”  I textured a few stadiums for Madden ’06, then moved on to the NCAA team and did pretty much the same thing there, just for a LOT more stadiums. On the NCAA team the texture and model requirements were a bit more broad; because of the camera angles chosen and number of open-top stadiums, we ended up having to model a bit of the surrounding environment as well; little parallax cards for stuff like the mountain ranges at BYU and Arizona, all in addition to the usual texture treatment on the interiors of the stadiums. 

michael-pavlovich-colorfrontback


So my first foray into professional 3D work was environment texturing tasks, which was also my first hint that a number of real-time solutions were based on tricks to give the illusion of fidelity, when in reality it’s a bunch of re-used atlased strip textures, re-used objects, re-used materials, re-used EVERYthing. The trick being to make it look like everything ISN’T being re-used. Environment art can be a big, complex puzzle you put together with a limited library of assets, with the intent of making the end result appear as if it’s a big unique experience, while making sure you utilize every last pixel you can squeeze out of a texture sheet, and every triangle in the shipped game has a purpose. Nothing wasted, since waste still counts against your allocated budget. In some situations, you can model and texture everything as well as you possibly can, and still be going over your allocated budget. Since gameplay is king, and you don’t want something that doesn’t play well or isn’t fun, maintaining frame rate can trump visual fidelity in some situations, and you’ll have to go back in and make some hard decisions. It’s those situations where you reach back and see if there are any more tricks, any more non-intrusive re-use that you can sneak in here and there, to see if you can maintain the visual fidelity you want without sacrificing performance.

It’s all a big balancing act between visual fidelity and performance in all departments when you’re having to run in real time on a console or a min-spec PC, all while trying to accomplish what’s most important, which is a well-made, fun game.

michael-pavlovich-blackjackbig

 


Long story a little shorter, I went from environment art at Tiburon to character art, where I spent the majority of my time doing pride sticker placement and face mask modeling, then went out to Sony Online Entertainment (now Daybreak) to work on DC Universe Online as a character artist. That ended up being a very slick pipeline, where a handful of character artists were able to populate an entire MMO using the Unreal Engine, on the PS3, including rigging, weighting, cloth, materials, textures, and modeling. It was a very ZBrush-centric, agile, and powerful character pipeline, with excellent tools in place to really speed the process along.

michael-pavlovich-column-lineup

michael-pavlovich-zbrushintropart1group

I joined Certain Affinity as a character artist and also started doing a lot of environment work for a number of different properties. Now I am a principal artist, helping out as much as I can with asset creation, along with pipeline and workflow creation and training. In my spare time, I create tutorial and demo work for my YouTube channel, and I teach 3D for concept artists and anatomy classes at the Gemini School of Visual Arts, as well as Intro to ZBrush classes for CG Master Academy online.

You’ve had a chance to work on the Halo remake and the newest DOOM game, so my question is: what are the main things that differentiate these games from their originals? At a high level: 3D, maybe lighting? The point of the question is to understand the biggest advancements in the history of visual production in games.

In addition to the PS3 and Xbox 360’s bump in storage, RAM, and processing power for more polys and bigger texture sizes overall, those consoles also pushed games visually via the use of materials and texture maps to sell those material properties: you’d have your normal map to give you the illusion of detail, a spec map to give you the illusion of a glossy or matte material, and a little baked AO in the diffuse to have things “sit” a little better; on the environment side, you could have a secondary UV set to bake lightmaps to have the entire environment “sit” a little better, that type of thing. Another big one was not only higher resolution shadows, but dynamic shadows; at the time usually limited to only one, but it was still nice to have anything help out that feeling of immersion within an environment or situation.
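To make that map-driven approach concrete, here is a rough Python sketch of how those per-pixel inputs feed a basic lighting model: the baked AO darkens the diffuse term and the spec map scales the highlight. The map names and the simple Blinn-style model are illustrative assumptions, not the shaders used on those titles.

```python
import numpy as np

def shade_pixel(albedo, ao, spec_mask, normal, light_dir, view_dir, shininess=32.0):
    """Toy per-pixel lighting: diffuse darkened by baked AO, highlight scaled by a spec map.
    Vectors are numpy arrays; albedo/ao/spec_mask are sampled texture values in 0..1."""
    n = normal / np.linalg.norm(normal)          # normal, possibly perturbed by a normal map
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    h = (l + v) / np.linalg.norm(l + v)          # half vector for a Blinn-style highlight

    diffuse = max(np.dot(n, l), 0.0) * albedo * ao            # baked AO "sits" in the diffuse
    specular = (max(np.dot(n, h), 0.0) ** shininess) * spec_mask
    return diffuse + specular

# Example: a fairly glossy pixel lit from above
color = shade_pixel(albedo=np.array([0.6, 0.5, 0.4]), ao=0.8, spec_mask=0.9,
                    normal=np.array([0.1, 0.9, 0.2]),
                    light_dir=np.array([0.0, 1.0, 0.3]),
                    view_dir=np.array([0.0, 0.2, 1.0]))
```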



As an extension of those improvements, the current generation of consoles has seen a big improvement in visual fidelity through the use of physically based rendering (PBR) techniques. More than anything, it’s a system to keep things consistent across an entire production staff. Right now it’s mostly being used to create photoreal objects and environments, but there’s no reason the fundamental rules behind PBR can’t be utilized on a super stylistic game. The power of PBR is the ability to have everybody playing by the same rules. The people creating the materials won’t be putting in arbitrary values that happen to look good on their machine at some point in time on a shader ball in a void on somebody’s uncalibrated monitor; the people authoring the textures for those materials won’t be changing material parameters to sell their individual textures that could have arbitrary values that could be different from the texture artist sitting next to them, or even vary from asset to asset from the same person. The people lighting everything won’t be tuning to a whole library of different shaders and mis-authored textures, which can quickly become an impossible task. It’s easy to say those rules should always be in place and everybody in production will be authoring materials, lighting, and textures correctly, but until PBR became standard, in my experience, that was almost never the case.
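One way to think about “everybody playing by the same rules” is that PBR values can be checked mechanically. The sketch below is a hypothetical Python validator, not a tool from any of these productions; the base-color limits are the commonly cited PBR authoring guideline of roughly 30–240 sRGB for non-metals, and a studio's actual limits may differ.

```python
import numpy as np
from PIL import Image

# Commonly cited non-metal albedo guideline (sRGB 0-255); exact limits vary by studio.
NONMETAL_MIN, NONMETAL_MAX = 30, 240

def check_basecolor(path):
    """Flag base-color texels that fall outside a plausible non-metal albedo range."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.uint8)
    luminance = rgb.mean(axis=2)                 # cheap stand-in for per-channel checks
    too_dark = (luminance < NONMETAL_MIN).mean()
    too_bright = (luminance > NONMETAL_MAX).mean()
    print(f"{path}: {too_dark:.1%} of texels too dark, {too_bright:.1%} too bright")
    return (too_dark + too_bright) < 0.01        # pass if under 1% of texels are out of range

# check_basecolor("crate_basecolor.png")  # hypothetical file name
```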

halomccascension05


You’ve talked at Gnomon about your work with materials in Substance Designer. How does this software integrate into your production pipeline? Where does it help? How do you use it to keep things systematized and under control? How does it help in game development in general?

With a PBR system in place, you almost have to go out of your way to break it. There are very specific rules in place and if something is broken, it’s much easier to track down which department needs to fix whatever is out of place to ensure that the system always looks correct. 

Another HUGE win on the asset creation side was the maturation of procedural texturing, in conjunction with PBR. Instead of hand-painting and hand-authoring every texture, you can now set up parameters utilizing maps baked from your mesh to drive automatic material and wear type and placement for multiple objects at a time. We set up Substance Designer graphs in conjunction with Substance Player way back on Halo 2 Anniversary for the Master Chief Collection to do quick vehicle, weapon, and character iteration, which worked great with the new PBR implementation in engine. We even had custom shaders written for use in the Substance viewport, so what we were seeing in Substance was exactly what we’d end up with in engine.
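The core of that baked-map-driven approach can be sketched outside any particular tool: a curvature bake pushes wear toward convex edges, a tiling grunge map breaks it up, and the resulting mask blends a “worn” material over the base. This is a simplified, hypothetical Python version of the idea (all maps assumed to share one resolution), not the Halo 2 Anniversary graphs themselves.

```python
import numpy as np
from PIL import Image

def load_gray(path):
    """Load a single-channel bake or mask as floats in 0..1."""
    return np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0

def edge_wear_mask(curvature, grunge, intensity=2.0, breakup=0.6, threshold=0.35):
    """Bright curvature (convex edges) accumulates wear, modulated by a tiling grunge map."""
    wear = curvature * intensity - (1.0 - grunge) * breakup
    return np.clip((wear - threshold) / max(1.0 - threshold, 1e-5), 0.0, 1.0)

# Hypothetical bakes from the high-poly mesh, plus a grunge texture, all at the same size
curvature = load_gray("vehicle_curvature.png")
grunge = load_gray("grunge_tile.png")
mask = edge_wear_mask(curvature, grunge)

# Blend worn metal over painted metal wherever the mask is strong
base = np.asarray(Image.open("paint_basecolor.png").convert("RGB"), np.float32) / 255.0
worn = np.asarray(Image.open("bare_metal_basecolor.png").convert("RGB"), np.float32) / 255.0
out = base * (1.0 - mask[..., None]) + worn * mask[..., None]
Image.fromarray((out * 255).astype(np.uint8)).save("vehicle_basecolor_worn.png")
```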


 

Obviously the process and tools have matured quite significantly since then, but even back in its infancy, it was obvious that having a quick way to apply approved materials and drive wear procedurally, even if it wasn’t 100% shippable, was a great place to start from. The introduction of Substance Painter was a huge step later on in getting the art team involved in a more artist-friendly, hands-on way. Every month, new and better ways to handle complex material and texturing tasks are being developed, and not only as third-party standalone solutions or plug-ins; the ability to expose functionality in engine and iterate in real time is a huge step toward evaluating and iterating where it makes the most sense: the final product, in game.

Another big win is that you get your best material people making materials and your best texture artists making the most convincing wear, so not only will everything be consistent on the material and wear side, but if you have your best artists setting up the parameters, grunge maps, etc., every asset pushed through that pipeline will look like it was made by the best material and texture artists in the studio. The end result will look like the materials and textures were created by one really good person, when in reality it can be multiple teams working together to create the best possible materials and wear, and no matter who’s applying the materials or the wear, it’ll look consistent and as good as your artists set it up to be. There will always be room for improvement or storytelling polish, but with your best artists iterating on the implementation of the tools and process, the end result will only get better and faster over time.


During your talk on automation in Halo development, you talked about the way automation and new tools let you cut production time and speed up the whole development process. What are the main artistic tasks that you can now do faster with modern tools? How do you optimize your production process?

The Substance Designer / Painter workflows detailed above explain a bit on the production side, but another big win in the preproduction stage is 3D concepting. As soon as a concept is created in 3D, we have the ability to quickly push it through the pipeline with proprietary tools (driven by both Houdini and Substance), so we can immediately start evaluating and playing a concept sketch during playtests, on both the environment and character teams. The most recent example of this would be the Relic map developed for the Master Chief Collection. Instead of spending the majority of our time with mood paintings, paint-overs, and callouts, the majority of the concept and iteration was done in 3D.

michael-pavlovich-spotlightelite002

There were mood paintings done early in production for sign-off approvals, and to make sure we had a consistent visual target for the final product. The bulk of figuring out what the objects needed to be was throwing quick blockout sketches into game, with automatically created game geo, UVs, and collision, as well as first-pass materials, lighting, and wear. We were able to playtest from the blockout sketch stage all the way through the final polish pass, and at every stage we were able to make informed artistic decisions by evaluating the environment through the eyes of the player, which saves a huge amount of time while iterating on assets. These savings, in conjunction with photogrammetry, World Machine, and Substance Designer, allowed a relatively small team to build and polish a level in a short amount of time. Every iteration pass on any given asset made it tighter and more polished, with the bulk of the object already figured out and being evaluated during every playtest. Past the first-pass concept blockout, there were no real surprises or major changes needed.

What can get confusing when interpreting a 2D concept of varying degrees of accuracy or “done”-ness into a final asset is where to allocate those hours: your production artists may have to spend hours of modeling and iteration time fleshing out a sketch, or properly integrating it into a 3D environment. In my experience, that time usually ends up being eaten by the production team, and while you can pad time estimates to account for interpreting views not fleshed out by the limited 2D concept, it can be difficult to say how long that might take depending on the initial concept and the final asset.

michael-pavlovich-eliteformrefine

Another thing that 3D concepts tend to do is make someone’s (or a group’s) vision more “tangible”, which promotes forward momentum and decision making. I’ve found that when developers are allowed to iterate in word clouds and thumbnails, the ideas tend to run in parallel with each other, where iteration cycles are spent not on fine-tuning a good idea, but on throwing out more and more ideas. This is part of the exploration process, but at some point you have to get something tangible in and iterate on it in context, and doing 3D concepts, psychologically I suppose, gets people much more into a production mindset by default. If 3D is detrimental or limiting, it’s important to re-evaluate your process.

Another analogy I like to use is that a picture is worth a thousand words, and a model is worth a thousand pictures. Even if it takes a little longer to sketch out a 3D model and get it in game, a thousand concepts’ worth of information will be gleaned.

Substance also puts the exploration and iteration process in the hands of those making the final decisions. You can set up a simple substance with exposed parameters, a custom shader, and multiple user-supplied inputs, and allow anybody who has eyeballs and the ability to move sliders to dictate the final look for any given material or object. This removes a lot of back and forth on approvals, making the process more efficient.
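The principle behind those exposed parameters can be shown outside Substance itself. Below is a generic, hypothetical Python sketch: a material declares a couple of named parameters with slider ranges, user values are clamped into those ranges, and the procedural result rebuilds deterministically from whatever the reviewer dialed in. The parameter names and mask formula are illustrative, not any real Substance graph.

```python
import numpy as np

# Hypothetical "exposed parameters" for a material, with the ranges a slider would enforce.
EXPOSED = {
    "wear_amount":    {"default": 0.3, "min": 0.0, "max": 1.0},
    "grime_contrast": {"default": 1.2, "min": 0.5, "max": 3.0},
}

def resolve(overrides=None):
    """Clamp user slider values into their allowed ranges, falling back to defaults."""
    overrides = overrides or {}
    return {name: float(np.clip(overrides.get(name, spec["default"]), spec["min"], spec["max"]))
            for name, spec in EXPOSED.items()}

def build_grime_mask(noise, params):
    """Rebuild a simple procedural mask from whatever values the reviewer dialed in."""
    mask = (noise - 0.5) * params["grime_contrast"] + params["wear_amount"]
    return np.clip(mask, 0.0, 1.0)

# A reviewer nudges two sliders; the look regenerates from those values, no hand-off needed.
params = resolve({"wear_amount": 0.55, "grime_contrast": 2.0})
mask = build_grime_mask(np.random.default_rng(0).random((256, 256)), params)
```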

michael-pavlovich-elitev2-render

This flexible and powerful feature also tends to hold people accountable. It can also help solidify the fact that sometimes an idea that works really well in your head doesn’t necessarily translate to the engine quite as well as you’d think, and the sooner you come to this realization, the sooner you can stop spinning your wheels in imagination land and start iterating on tangible changes that everyone can see in engine, without interpretation.

You’ve also talked about the wonders of automation with the use of Houdini. A lot of devs still have doubts about this software, believing it better fits the VFX industry. How can you use it in games? How can you benefit from its usage? What are the main functions this software can help you with these days?

A game changer in the future will be the maturation of procedural modeling in conjunction with procedural texturing, which will allow development teams to produce higher-fidelity “blockouts” (and more) in significantly less time, and of course have the power to iterate on it all in real time in engine. I discussed this in more detail in my GDC 2015 presentation “Blurring the Line Between Concept and Production”, as well as the importance of designing / concepting / iterating in engine.

Houdini has a ton of untapped value. Just as node-based texturing added an incredible amount of power and flexibility over traditional methods, node-based procedural modeling will have the same impact on production pipelines in the future. And it’s not just about pushing polygons around or clever ways to streamline modeling workflows alone: the ability to add particle, cloth, rendering, baking, animation, damage, wear, and other pipeline solutions will be a giant step forward in the development process across the board.
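For a sense of what node-based procedural modeling looks like when driven by code, here is a tiny sketch using Houdini's built-in Python module `hou`. It assumes it is run inside Houdini's Python shell, and the node and parameter names are drawn from the standard SOP set, so they are worth double-checking against your Houdini version.

```python
import hou  # available inside Houdini's Python shell

# Build a small procedural network from code: a box fed into a transform node.
obj = hou.node("/obj")
geo = obj.createNode("geo", "procedural_blockout")

box = geo.createNode("box", "base_shape")
box.parmTuple("size").set((2.0, 1.0, 4.0))   # crate-like proportions

xform = geo.createNode("xform", "scale_pass")
xform.setInput(0, box)
xform.parm("scale").set(1.5)                 # uniform scale as a tweakable parameter

xform.setDisplayFlag(True)
xform.setRenderFlag(True)
geo.layoutChildren()                         # tidy the network view
```

Because every step lives in the node graph, changing the box size or the scale parameter regenerates the result downstream, which is the same iterate-in-place benefit node-based texturing brought to materials.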

michael-pavlovich-thermo-radiant-beam-phase-2-tweaked


Extrapolating the progress of the hardware: we’re always going to get more storage and power on the user side to push more polygons and higher resolution textures, and eventually, when everything goes to server-side rendering and minimum spec is no longer an issue, we’ll have situations where the only limitations in development are what can be created and how it can be utilized by design to give you the desired experience. It becomes less about optimization and more about how to utilize photogrammetry, procedural texturing, procedural modeling, and whatever the next big developments are in tools, pipeline, workflow, animation, and more to create bigger and more believable worlds. We will see faster and less breakable solutions to get assets placed and materials, textures, and lighting done, so more time can be spent iterating and polishing the final product in an informed manner, in engine, in context. Those huge savings mean you spend the majority of your time iterating and polishing what matters, making things look great, and building more complex worlds, which helps get the product to its final stage and ready to ship.

michael-pavlovich-guardian-sonic-beam-phase-3


Overall, how do you think procedural tools will help us in game development now? Where do you think the next step in the technical art department is going to be? What tech will change the world?

With the advent and continued popularity of VR and AR, film, games, manufacturing, architectural visualization, forensics, and historical preservation are all tackling the same issues of performance and fidelity, so I can see a ton of really exciting development solutions coming out of these industries in the months and years to follow. A great time to be alive!

michael-pavlovich-artstation-closeup

Michael Pavlovich, Principal Artist

Make sure to check out Michael's ArtStation page, where he hosts a bunch of wonderful tutorials for ZBrush and Substance lovers! It's all FREE!

Interview conducted by Kirill Tokarev.
