Tweaking Megascans for Games
3 April, 2017
Interview
Cinematic artist Joe Garth talked about the way he experiments with Megascans in the CRYENGINE environment.


Introduction

Hello, everyone, my name is Joe Garth, I'm 26 years old, from Sheffield in the United Kingdom. I'm currently a Cinematic Artist at Crytek. At 18 I was offered a position as a Cinematic Intern at Crytek in Frankfurt am Main and was given the chance to work on the Crysis 2 intro cinematic and some offline-rendered in-game cinematics. I then took the Cinematic Artist position and worked on trailers, cutscenes, and promotional material for many Crytek titles such as Crysis 3, Warface, Ryse, Robinson: The Journey, The Climb, Arena of Fate, and Homefront 2. More recently I've been working on Virtual Reality experiences, such as Codename: Skyharbor, a VR benchmark.

My main goal has always been to push real-time rendering to its limits. These days we're very close to reaching the quality of big-budget Hollywood special effects, but without long render times. I don't think it's far-fetched to say that in the next 10-15 years almost everything will become possible in real-time.

Megascans

Megascans is a game-changer, it's the ultimate Lego set! I think generic scanned asset libraries are causing a big stir in the CG world. It's a bit like how samples shook up the '70s music industry: some people might be hostile to the idea of using scans. There's a fear that everything will end up looking the same, but I think that's really down to how you use the assets. A similar thing happened in the concept art world, first with photobashing, then 3d kitbashing; now pretty much all concept artists and visual designers are using 3d, some even exclusively. I think what matters is artistic and design knowledge: the foundation skills have to be there, and a good sense of taste. In the future it will be very simple to make creative, realistic scenes without much in-depth technical knowledge.

The new 3d plants are superb, and I hope that larger vegetation assets become available in the future, for example full 3d trees, or even mountains generated from satellite data. The ability to quickly construct nice environments without having to create all the assets from scratch is a huge step forward. Once a large enough asset bank of scanned vegetation is assembled, the time needed to create complex natural scenes will drop significantly. Studios like Crytek and DICE have shown just how close to photorealism we can get in real-time, but those scanned asset libraries are internal and custom-made for individual games. What Megascans is doing is unlocking that potential by making high-quality scanned assets available to all artists.

Leaves

The scene uses only one 3d plant; I rotate it around at various angles to catch the light in interesting ways. The ground below is a tiled texture. I'm not using any displacement mapping, as the normal maps give a good enough bump effect. Once you have a good material and lighting setup, almost any angle will look okay, but there's still the difference between a photo and a good photo. I played with the positioning, composition, and lighting to cover anything obviously artificial (2d planes, stretched textures, steppy shadows, etc.) and find a good aesthetic balance.
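As an illustration of that reuse idea, scattering one asset with varied rotation and scale is enough to hide repetition. The function and value ranges below are my own hypothetical sketch, not anything from CRYENGINE or Megascans:

```python
import random

def scatter_instances(count, area=10.0, seed=7):
    """Generate varied placements for a single plant asset.

    Each placement gets a random position, a random yaw rotation
    (which hides the fact that every instance is the same mesh),
    and a slight scale variation.
    """
    rng = random.Random(seed)
    return [
        {
            "pos": (rng.uniform(-area, area), rng.uniform(-area, area)),
            "yaw_deg": rng.uniform(0.0, 360.0),
            "scale": rng.uniform(0.8, 1.2),
        }
        for _ in range(count)
    ]

placements = scatter_instances(20)
```

The fixed seed just makes the layout reproducible while you iterate on composition.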


Lighting

The lighting is actually very simple. I think people tend to overcomplicate lighting; sometimes I see artists place dozens of light entities scattered around, faking various bounce lights, rim lights and so on, when really just 3 lights in the right place could do the same job! I usually find that the best-looking scenes have rather elegant lighting solutions. In both my plants scene and my mountains scene I use just the sun light and an environment probe (a cubemap for reflections). There's also some post-processing going on, with depth of field and a subtle color grade.

I use the CRYENGINE Total Illumination feature (SVOTI), which can add a bit of extra realism, but it's not essential. There's actually no magic; it's just a process of finding good lighting reference, setting realistic values, and avoiding the obvious real-time CG pitfalls: things like excessive fog, bloom, dark or pixelated shadows, and a lack of ambient occlusion.

Reference examples:

Using Cryengine

Megascans assets have a real-time option specifically designed for game engines. You can also choose various PBR workflows. CRYENGINE uses a Specular and Gloss workflow.


PBR is actually very simple once you break it down. There are 4 texture maps required: Albedo with opacity in the alpha channel, Normal with gloss in the alpha channel, Spec (which can simply be a solid color), and, for vegetation, a Subsurface map.
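As a rough sketch of that channel-packing scheme, here is the layout with NumPy standing in for a texture tool, on toy 2x2 maps (the values and helper are illustrative only):

```python
import numpy as np

def pack_rgba(rgb, single_channel):
    """Pack a 3-channel map plus a 1-channel map into one RGBA array."""
    return np.dstack([rgb, single_channel])

# Toy 2x2 textures with values in [0, 1]
albedo  = np.full((2, 2, 3), 0.5)                # base color
opacity = np.ones((2, 2))                        # fully opaque
normal  = np.tile([0.5, 0.5, 1.0], (2, 2, 1))    # flat tangent-space normal
gloss   = np.full((2, 2), 0.3)                   # fairly rough surface

albedo_rgba = pack_rgba(albedo, opacity)  # Albedo + opacity in alpha
normal_rgba = pack_rgba(normal, gloss)    # Normal + gloss in alpha
```

Keeping opacity and gloss in the alpha channels means the engine samples two textures instead of four per material.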

For Megascans, the Albedo can usually be left alone. For the Normal map I boost the bumpiness in the channels, which helps the leaves look a bit more 3d. With the Gloss and Spec maps it's important to use realistic values; you can find the correct values in the CRYENGINE PBR documentation.
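The bumpiness boost he describes amounts to scaling the X/Y channels of the tangent-space normal and renormalizing. This is my own illustration of the idea, not engine code:

```python
import numpy as np

def boost_normal(normal_rgb, strength=1.5):
    """Exaggerate a tangent-space normal map stored in [0, 1].

    Scaling X/Y pushes each normal further from 'straight up',
    making surface detail read as deeper; renormalizing keeps
    the vectors unit-length so shading stays correct.
    """
    n = normal_rgb * 2.0 - 1.0                       # unpack [0,1] -> [-1,1]
    n[..., :2] *= strength                           # push X/Y away from flat
    n /= np.linalg.norm(n, axis=-1, keepdims=True)   # restore unit length
    return (n + 1.0) * 0.5                           # repack to [0,1]

flat   = np.array([[[0.5, 0.5, 1.0]]])   # a flat pixel stays flat
tilted = np.array([[[0.7, 0.5, 0.95]]])  # a tilted pixel tilts further
```

A flat pixel is unchanged, while any tilted pixel ends up with a smaller Z component, i.e. a steeper apparent slope.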

Another helpful tool I use is the CRYENGINE view modes. These let you quickly see the various render passes and make sure materials are consistent and values are physically correct.

Optimization

I use the real-time meshes and try to avoid any advanced features like SVOTI and Displacement mapping. Over the last two years I’ve been working on lots of VR scenes. When you’re only allowed a low number of lights, and none of the advanced rendering features, the assets need to be spot on and the composition and colors need to hide any rendering shortcomings. During the optimization phase of certain VR projects I was coming into work, spending hours tweaking lighting to shave just 0.3 ms off the GPU frame time, then coming in the next day to do the same thing. It sometimes feels like you could get a scene running on a toaster if only you had the time to optimize! Optimization is a good habit, it speeds up your workflow and rendering times and makes everyone’s lives easier during production.

Peculiarities 

One of the shortcomings of most game engines is shadow map resolution; for this scene I had to tweak a few CVars to increase the quality. It's one of the only times you can really go crazy with depth of field!
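For context, shadow quality in CRYENGINE is driven by console variables set in the console or a config file. A sketch of the kind of tweak involved (variable names recalled from CRYENGINE documentation and may differ between engine versions; values are illustrative, not the ones used in this scene):

```
e_ShadowsMaxTexRes = 2048   -- raise the per-light shadow map resolution
r_ShadowJittering = 1.0     -- jitter/soften shadow edges
```

Higher shadow map resolutions cost GPU memory and fill rate, which is why engines ship with conservative defaults.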

Camera and Scale

The camera has an extremely low FOV, something like 5 degrees. I do that because actual film cameras often shoot with extremely narrow FOVs. It flattens the image, which usually looks better. The camera is actually 10-15 meters above the plants, looking down.

Intensity of the Light

Generally, I just try to match a reference image. I don’t exactly copy the reference color values, but I eyeball it and try to make it as if they could exist in the same universe. You can get very far just by tweaking sun color, sun direction and fog.

Are these assets flexible enough to be used in game development?

I think it depends what sort of game you’re making. Usually, when you think of a stylized game, you think of something cartoony like Super Mario or Journey. These assets are obviously photorealistic, but then photorealism is not an art style – it’s more like a quality level that has to occur before stylization can take place.

Even in heavily stylized games like Journey, there’s usually an element of photorealism.

Lord of the Rings, Blade Runner, 300, Sin City: those films have photorealistic visuals with very unique and recognizable art styles. Stylization can be created through design, lighting, and cinematography, and the same is true in games. I'm definitely going to be playing with that in the future, and I think the assets will be used in some very inventive, unexpected ways. I tried a more exaggerated forest lighting scene last year.

There's no reason these assets can't be used in combination with physics, real-time lighting, or prebaked simulations. It's not a big deal to hook the 3d plants up to the CRYENGINE vegetation physics and have them react to the player, just like in Crysis.

Future ways to grow for Megascans?

While Megascans does take away a lot of the bigger headaches, e.g. generating your own high-quality materials, it does have the potential to distract people from the importance of foundational art skills. You still need an understanding of composition, lighting, color theory, post-processing, etc. to make these scenes realistic and aesthetically pleasing.

I think the biggest restriction is simply the amount and variety of assets. Imagine you have this perfect reference photography of a beach, but there’s a certain type of rock that you’re missing, or a forest with a certain type of tree. Right now the library is mostly natural elements, but I could imagine it incorporating artificial materials like metal sheets and panels, stonework, wood planks, cloth, ropes.

It's definitely an exciting time to be in the industry, and I'm really looking forward to seeing how this and other libraries develop.

Joe Garth, Cinematic Artist at Crytek

Interview conducted by Kirill Tokarev
