3D artist Sebastian Schulz talked about some of the important environment production lessons he learned during his first years in the industry.
My journey with games actually started several years ago. I've been a gamer for as long as I can remember, and my first contact with any kind of game development was back when I played around with the Construction Set that came with The Elder Scrolls III: Morrowind. At that point I didn't even know there were people out there doing this stuff for a living, so in turn I didn't really think about wanting to work in this industry.
After that I worked on some awful mods for Morrowind and later Oblivion when it came out, on and off over the years, but never really built new stuff from scratch. I just used what the original developers had given me with the game. I also played around a bit with the editor of the original Crysis that shipped with the game back then.
In 2012 I really started to look a lot at CGI and was stunned by what people could achieve, so I just downloaded Blender and started doodling. After that, things happened quite fast. Half a year in, I had a basic foundation of knowledge built up and started to mess around with game art. I began learning engines, mainly UDK back then, and later got back into CryEngine. I also played around with Unity, which didn't really compel me back then because of all the limitations the free version placed on artists.
As for the role, it quickly was pretty clear for me that I wanted to get into environment art, as the worlds of the games I’ve played have always fascinated me almost more than the actual characters in them. And so I focused on learning all the necessary tools for that.
From 2013 to 2015 I did a game art course and got a bachelor’s degree to professionalize my hobby. By that point I knew I definitely wanted to work in games and build awesome worlds for the players to explore.
In terms of companies I've worked at "over the years", there is not much to say, as I only have almost two years of industry experience so far. Right after my degree I got pretty lucky and was hired at Crytek in Frankfurt as an intern, and eventually became a junior. I worked there up until this April and helped an amazing team ship "The Climb" and "Robinson: The Journey". Recently I joined Remedy Entertainment in Finland to work with them on great things.
Well, the first and actually the most important thing I've learned since I started working in this industry is that working on a game in a studio is vastly, vastly different from sitting at home and working on personal projects. The tools are the same if you happen to join a company that uses an engine and tools you're already familiar with, so all the skills you've acquired are of course super important, but from an artist's perspective it's a lot more than "sit down, put on headphones, make stuff pretty".
There are tons of technical things and dependencies that rely on you doing good, clean work others can build on, and when you produce an error, other people are affected. It's not like working at home, where you never really need to set up collision and LODs for your scenes or go through a tedious optimization process. In a production environment, the functionality of the game (in the case of mesh collision, for example) sometimes depends on you, and the team will let you know really fast if you fucked up (in a friendly and professional way, of course). That really was something I couldn't quite handle for some time, and it was a big challenge for me to learn.
In terms of actual art challenges, "Robinson: The Journey" was quite a big one. It was supposed to ship on PS4 in VR, so the hardware we had to work with was a bit limited to begin with. On top of that came VR, meaning the scene had to be rendered twice, once per eye, at a constant 60 FPS.
Frame drops in VR are a lot worse than in traditional games, because if the game can't keep up with your head's movement, the player will experience massive nausea. The optimization we did on that game was quite heavy, and it took a lot of work to get it running as well as it did when it shipped. That meant art assets had to be heavily optimized as well while keeping the visual quality up, which was a real challenge at times, especially with as much vegetation on screen as we had. (On that note: shout out to the uber amazing artists at Crytek! You guys rock!)
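As a back-of-the-envelope illustration of why that budget is so tight, here is the frame-time math for stereo rendering at a locked 60 FPS. The numbers are purely illustrative assumptions, not Crytek's actual figures; real engines share work between the two eye views and PSVR also used reprojection, so actual budgets differ.

```python
# Rough frame-budget math for stereo rendering at a locked 60 FPS.
# Illustrative only: real engines share work between eyes, so the
# even split assumed here is a simplification.

def per_view_budget_ms(fps: float, views: int = 2) -> float:
    """Milliseconds available per rendered view, assuming the
    views split the frame time evenly."""
    return 1000.0 / fps / views

total_ms = 1000.0 / 60           # ~16.67 ms for the whole frame
per_eye_ms = per_view_budget_ms(60)  # ~8.33 ms per eye view
```

Compared to a traditional 30 FPS console title (~33 ms per frame), that is roughly a quarter of the time per rendered image, which is why assets had to be optimized so aggressively.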
I’ve also learned so much stuff about CryEngine at Crytek (well, duh) which helps me a lot in my personal projects and in general, gave me a lot more insight into how an engine works overall which is invaluable.
So, to get this out of the way right away: My lighting is mediocre at best. I’m by no means a lighting artist. I like doing lighting to get better at it and to understand it better. I’ve learned a great deal about lighting at Crytek as well. Mostly from two extremely skilled lighting artists we had working with us but also from just playing around, seeing how things work and gathering opinions from other artists afterwards.
In terms of the capabilities of CryEngine's lighting system, I've got to say that this is the part where the engine really shines. Don't get me wrong, I really like the engine in general, but the WYSIWYG editor makes it just so easy to tweak things, and not only when lighting a scene. You drag a light into the render window and boom, there you go. No light map baking or anything else required (which is something I'm not a big fan of in general; I understand it technically, but I just don't like working with it for personal stuff because it can be quite time-consuming).
As for the lighting itself, most of it is just playing around with the time of day for outdoor scenes and finding a nice mood. I then take it from there and work with cube maps and fake lights to adjust spots I don't like or just to brighten or darken areas. For indoor scenes, I usually start with my "main light" (meaning, for example, windows in a living room, or just artificial light in a place without exposure to natural light sources), then bake a cube map, introduce GI, and after that hand-place fake lights.
Also, with the introduction of the Total Illumination system, which is essentially a form of real-time GI (please don't ask me how it works), I have another very powerful tool in my toolbox to make the lighting even more realistic. This is especially great because GI is almost a must for a really realistic look, as it brightens up areas of your scene you wouldn't even have thought of brightening, which can produce really great results. And, as said, it's all real time, which reduces iteration times by a huge amount.
When I start with a material I usually begin by thinking about what environment I want to put it in because, as we all know, materials behave differently in different environments, especially in terms of aging.
So let’s pretend I have two steel spheres (like, IRL) and I place one of them on a moist beach and one of them in a desert. They for sure will look quite different if I come pick them up 5 years later even though they were made from the exact same material.
Once I've determined the environmental conditions the material is going to live in, I start gathering reference and try to find pictures of the specific case I'm looking for in terms of environment. Say, for the steel ball I placed on a beach, I might look at old shipwrecks on a beach. For my desert ball, I could check wrecked cars in Utah or Nevada and see how the material behaves on them after it has lived and aged in that environment for a certain amount of time.
I use Google image search and also check sites like Textures.com for photos of the material I want to make. These days I find myself more often than not getting better results from Google Images, as you can find pictures there that show materials "in action" (interaction with light from different sources, information about glossiness and specular values, aging, etc.). That's really useful because, with PBR, materials are now defined a lot more by the gloss/spec and normal maps than by the albedo/diffuse map, as was the case back in the day.
Once I've gathered all my reference, it depends on what material I want to create. By now I usually try to create every material solely in Substance Designer, but if I have a very organic material (which can definitely be done fully procedurally in Designer), I usually consult ZBrush and sculpt my base height map in there, because at this stage, with my limited knowledge of the innards of Designer, it's just faster for me.
After I've finished my sculpt in ZBrush, I grab a height map of it, bring that into Designer, and do the rest of the work there. Having real-time feedback on what the material looks like on your model, or just fully shaded in general, is absolutely great and helps a ton with tweaking things and getting the look right. I usually start by adding details to the height map that I didn't include in my sculpt. I like to make a fairly basic high poly of my material when I sculpt it and do a lot of the additional detailing in Designer if possible, as it gives me the ability to go back and tweak things at any time, inside one package. At this stage, I don't have any color information and just work with a basic grey material with mid-grey diffuse and glossiness, to really be able to see all the information in the height map, as this map usually drives all the others.
After I have the height information looking the way I want, I move on to the color map, as this one will later drive my gloss. For most organic materials, I work a lot with gradient maps and gradients I've picked from photos of the real material. These are driven by different grunge and noise maps warped together, as well as information masked out of the height map, to get a natural-looking surface with details that correspond to the ones in the height and normal maps. Color and height (at least for organic materials) are probably the maps I mess with the most. For inorganic stuff (say metal, paint, or plastic), it's usually the normal and the gloss. For metals, of course, the specular as well.
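A gradient map is essentially a lookup: a grayscale input value picks a color along a hand-built ramp. A minimal sketch of the idea follows; the color stops are made-up illustration values, not sampled from any real reference, and this mirrors the concept rather than Substance Designer's exact implementation.

```python
# Sketch of a gradient-map lookup: a 0..1 grayscale value selects an
# interpolated color from a ramp of (position, color) stops.
# The stops below are invented for illustration.

def lerp(a, b, t):
    """Linear interpolation between two RGB tuples."""
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

def gradient_map(value, stops):
    """stops: sorted list of (position, (r, g, b)).
    Returns the interpolated color for a 0..1 grayscale value."""
    if value <= stops[0][0]:
        return stops[0][1]
    for (p0, c0), (p1, c1) in zip(stops, stops[1:]):
        if value <= p1:
            t = (value - p0) / (p1 - p0)
            return lerp(c0, c1, t)
    return stops[-1][1]

# A hypothetical dirt-to-moss ramp, as if picked from a reference photo:
stops = [(0.0, (40, 32, 20)), (0.5, (70, 80, 30)), (1.0, (120, 140, 60))]
color = gradient_map(0.75, stops)  # a height value of 0.75 picks a mossy tone
```

In Designer, the grayscale input driving this lookup would be the warped grunge/noise blend described above, so the color detail automatically lines up with the height detail.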
Once I've got my color map looking right, I use it as a starting point for my gloss map. I usually work a lot with levels and histogram range/histogram scan nodes to mask out or dial in information from other height-driven maps or different grunge and noise maps.
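Conceptually, a histogram-scan-style node remaps a grayscale map so that values around a chosen position get pushed toward black and white, with a contrast control deciding how hard the cutoff is. The sketch below captures that idea; it is an assumption about the behavior, not Substance Designer's exact math.

```python
# Rough stand-in for a histogram-scan-style remap: values below the
# transition band go to black, values above go to white, with a soft
# ramp in between. Not Substance Designer's exact formula.

def histogram_scan(value: float, position: float, contrast: float) -> float:
    """Remap a 0..1 grayscale value. contrast=0 gives a soft ramp,
    contrast near 1 approaches a hard threshold at `position`."""
    width = max(1e-6, 1.0 - contrast)   # transition band shrinks as contrast rises
    lo = position - width * 0.5
    t = (value - lo) / width
    return min(1.0, max(0.0, t))

# Masking the brighter parts of a (fake) height-driven map:
height_samples = [0.1, 0.4, 0.5, 0.6, 0.9]
mask = [round(histogram_scan(v, position=0.5, contrast=0.8), 2) for v in height_samples]
# mask -> [0.0, 0.0, 0.5, 1.0, 1.0]
```

A mask like this can then be used to dial gloss up only on the raised, worn parts of a surface, which is exactly the kind of height-to-gloss correspondence described above.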
This process changes a bit, of course, when working with hard surface materials. I usually do these in Designer as well, but I find myself using Substance Painter more and more these days, as it's a really fantastic package. I used Painter, for example, for a gun I made a while ago, and it was a joy texturing it. I do weapons and hard surface stuff every once in a while as well, to keep my skills diverse and my hard surface modeling fresh. And if I want to make a game-ready hard surface asset, Painter is definitely my go-to these days.
It should go without saying but during this texturing process I re-export stuff an awful lot. I try to get the material into the engine and in my environment as soon as possible to see how it looks there and then tweak things that aren’t working. This is important because it may be the case that a material looks pretty great in all of the Substance Designer environment maps but looks totally different in your environment in the engine.
You can of course prevent that by capturing an “HDRI” from your in game environment and bringing that map into substance. That way you should have a pretty close picture of what your materials will look like in engine. There might still be differences though so double check and export a lot.
Usually, I go with the good old complementary color scheme if my scene allows for it. I'm a big fan of the classic "blorange" (blue and orange, cold and warm) thing, as cliché as it might be. I used it, for example, in my Bloodborne-inspired scene, in the Alien-inspired scene I did a while ago, and again recently in my little Antarctica piece. I also really like the red/green color scheme; I tend to use it more for natural scenes like the cave scene I did in the past or the little desert river scene I made a few weeks ago. But to be totally honest, I rarely have a specific color scheme in mind upfront and usually tweak things as I go.
After all, everything is allowed as long as it looks good (if that's what you're shooting for, that is), and most of the time just getting started and playing around with stuff generates ideas for me, and through that I eventually arrive at my color scheme. If I want to make a forest, I just start making a forest: I make one tree and a rock, bring them into the engine, duplicate them around a hundred times, look for a nice spot, and then start playing with the time of day until I get something I like, and this leads to new ideas and so on.
As far as figuring out if colors work or not, it’s really as simple as “if it looks right and I like it, I’ll go with it”. For real things and natural stuff this is mostly easier than for fantasy or sci-fi as you can just look up photos of the real place (or setting) you have and see what that looks like in different lighting conditions/daytimes or with different colors (buildings for example).
If you put a photo next to your scene, squint your eyes, and the colors look super similar, they're most certainly in a good spot. As I said, that's not as easy for fictional things. If I squint my eyes and look at a picture of some kind of alien hive in a cave with flying rocks, I still don't know if those colors are "right". In that case, readability and the effect you want to achieve come above everything else, in my opinion. If you want the hive to be bright orange, the aliens purple, and the rocks yellow… cool, go for it. If those colors work (in terms of setting, readability, contrast, brightness) in that specific environment, that's awesome. If they don't, just play around some more.
I usually have a lot of fun figuring out colors. I also sometimes do really silly things and make some objects bright pink and others yellow or neon green, just for the fun of it and to see how it looks. And sometimes I find something that looks better bright orange than dark blue.
Modularity in game environments has been around forever, and there is tons of amazing information out there about it: the documents and GDC talks of Joel Burgess, for example, one of the great level designers at Bethesda. You'll find them, they're on YouTube. Just use the search box.
As for my approach to modularity, I usually try to build everything out of modules at first, as it makes iteration and tweaking a whole lot easier. When I want to rebuild an environment from a photo, for example, I go into Photoshop and color-code the different areas of the picture that I could build as modules. I also use a specific color for unique stuff like props and other things that could maybe go on a trim texture. This is usually the case for manmade structures like buildings, but modularity can be extremely useful in organic environments as well. The desert river scene I made lately, for example, used only two big rock modules for all the large rock faces.
When I sculpted these modules, I made sure they could be used in the widest possible variety of cases. That meant they should give me different visual information from each side, which in turn gave me six rocks for the price of one just by rotating them. By scaling them, plus using two rocks together in different combinations, the range of shapes I can build with them is really great.
That those rocks look so different from each side is especially important, since humans are damn good at spotting patterns. If my rocks looked really similar from everywhere, the viewer could spot the repetition quite easily, which is always the worst thing in natural scenes. This is quite curious, as nature displays macro repetition in a really big way. A forest is full of mostly same-looking trees (from a distance). A canyon is mostly full of rocks; they're all rocks, repeating endlessly. But this macro repetition gets broken up everywhere by all the little details. All these rocks are rocks, but every single one of them looks different when you look a little closer. So even in natural scenes, a certain level of "repetition" can be good when it is broken up with more unique details in certain areas.
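The combinatorics behind that payoff are easy to sanity-check. This tiny sketch uses the counts from the text (two modules, six readable directions each); the pairing assumption is my own illustration, not a rule from the article.

```python
# Counting silhouette variants from a small set of rock modules.
# 2 modules x 6 readable directions comes from the text; treating
# every ordered pair of single-rock looks as a distinct silhouette
# is an assumed simplification for illustration.

modules = 2
sides_per_module = 6
single_variants = modules * sides_per_module         # 12 single-rock looks

# Placing two rocks together, each rock independently contributes a look:
paired_variants = single_variants * single_variants  # 144 pairings
```

Even before scaling or terrain blending, a dozen single-rock looks and over a hundred pairings come out of just two sculpts, which is why the two modules could carry all the large rock faces in the scene.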
There are other cases like architecture or manmade structures in general where you want a specific pattern or shape repeated over the building or structure. Repetition and patterns are a big thing in architecture and it can be quite helpful for adding a lot of details and visual information to an object with a relatively low amount of work. Repetition in architecture can look really impressive, given it is used correctly.
Another thing modularity is amazing for is fast iteration and map building. If I have an idea for a space, I can go into my 3D package, build modules for it, bring them into the engine straight away at a whitebox stage, and spread them all over the place if I want to, not only to see if the modules work but also to get a feeling for the space I want to build with them. After that, I can keep my layout in the engine and just work on refining the modules. If I finish a module, all modules of the same kind I've used in my map are finished as well, of course. This means I get a lot more bang for my buck and can put more work and time into making the individual modules as kick-ass looking as possible.