Mederic Chasse, VFX Technical Director at Ubisoft Singapore, kindly talked about the Real-Time VFX field: the current state of real-time graphics development, its main advances and lags, future challenges, and the reasons behind them. An extremely interesting talk to dive into and speculate about the breakthroughs ahead.
Introduction
80Lv: Mederic, could you introduce yourself to us? Where do you come from, what do you do, what projects have you worked on? How did you get into the development of 3D effects for games in the first place? How did you get into this market?
Mederic: Well, I got into 3D art after finding out that banking was not my thing; besides, I was drawing doodles and cartoons during basically all of my classes in school. Now I kind of wish I had spent less time drawing and more time listening in Calculus though… Oops!
I’m from Montreal, Canada, quite a hub when it comes to video game development, so I got into the “modern” gaming industry quite early on (back in the PS2 days, circa 2004). The first game I ever worked on was Happy Feet on PS2, the penguin thing. I was a 3D level artist doing modeling, texturing, and lighting: basically, taking an empty Maya scene and making a full level, because we had no real engine. We’d do everything in Maya, then save the scene, send it to a build machine, and hope it looked the same on PS2. Back then, on PS2, we didn’t have access to custom shaders, and I wanted to push my maps a bit further, so I turned to the VFX pioneers we had at the time to explain to me how particles worked. I wanted to add some fake reflections on the ice and some water droplets dripping from the icicles. So I learned from them that UVs are not always meant to be pixel perfect and that particles were a thing. After having way too much fun with these things (and exploding the performance budget on several occasions), it wasn’t long until I made the switch to VFX full time. Today, I am VFX Technical Director on Skull & Bones at Ubisoft.
Real-Time VFX Advances
80Lv: In terms of technology, what do you think were the biggest advances in real-time graphics you’ve witnessed so far? What you can do with modern tech looks absolutely astonishing, but it would be awesome if you could talk a bit more about what you consider to be the biggest advances.
Mederic: I think the biggest advances in Real-Time VFX are still a bit under the radar; they are the unsung heroes in our beloved games. To me, the single most impactful tech advancement we’ve seen in this field in the past few years is shaders, specifically vertex shaders. In order to get why it made such a huge impact, we have to go back a few generations.
While the technology in general has really advanced, VFX and animation are still lagging behind quite a lot; they’re the prime place where you’ll notice the uncanny valley. Also, since the PS2 / original Xbox era, we’ve lost a bit of ground. I know it doesn’t sound plausible, since VFX look better now than they ever did then, but let me explain!
Back in the PS2 days, we didn’t have anything like PBR rendering. Everything was a huge approximation. Shading was basically Lambert and Phong, and that was pretty much it. Particles didn’t have lighting at all in most games; it was all baked into textures or faked with tricks using additive blend modes. But it looked OK because everything on screen was of similar tech and similar quality. The PS2 was, proportionally, especially good at handling overdraw, so we could throw in a lot of particle sprites in lieu of complex materials. We could use transparent planes to fake shadows. I remember doing a windmill where the shadow was a rotating semi-transparent black particle, since we didn’t have real-time shadows for anything that wasn’t the main character.
Then the Xbox 360 / PS3 came out. Shaders became a thing, PBR / HDR was starting out, and browner graphics were to be had (thanks to Need for Speed: Most Wanted). There was a significant lighting improvement, but the VFX side saw almost none of it. For the most part, we gained emissive and the ability to do some UV manipulation, but we lost most of our ability to do things with overdraw or stacked particles. That’s when there was a huge boom in the usage of animated textures, because memory went up and the overdraw budget went down. So what used to be done by cycling 20 particles using 1 or 2 images and relying on flips and rotation to create a campfire was now done with 2-3 particles living longer and cycling through a 16-frame texture animation.
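To make the flipbook trick concrete, here is a minimal sketch of the UV math behind cycling a frame animation on a single sprite. The 4x4 sheet layout, the normalized particle age, and the function names are assumptions for illustration, not any engine's API:

```python
# A minimal sketch of flipbook UV animation: a 16-frame explosion/fire
# animation laid out in a 4x4 sprite sheet, indexed by normalized particle age.

def flipbook_uv(u, v, age01, frames=16, columns=4, rows=4):
    """Remap a sprite's base UV (0..1) to the sub-rectangle of the
    current animation frame."""
    frame = min(int(age01 * frames), frames - 1)  # which frame to show
    col = frame % columns
    row = frame // columns
    # Scale the base UV down to one cell, then offset to that frame's cell.
    return (u + col) / columns, (v + row) / rows

# Example: halfway through the particle's life we land on frame 8,
# the first cell of the third row of the 4x4 sheet.
print(flipbook_uv(0.5, 0.5, 0.5))
```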
That was also the time when we saw a huge surge in the use of external tools such as FumeFX and similar. We had a problem though: we were still baking lighting into the textures at a time when dynamic time of day was starting to become a reality. We had to do a lot on a single layer, because overdraw was a real issue. I had to set up most of the fire VFX on the E3 demo for Assassin’s Creed: Revelations, in the scene where Ezio basically sets the whole port on fire before running through burning ships for an escape. It was a real challenge to put fire everywhere with as little overdraw as possible and without any walls in the way to block visibility. Shaders saved the situation and made it possible to create convincing (for the time) fire on a single layer.
The PS4 / Xbox One continued the trend: better 3D lighting, PBR became standard, and overdraw was still a must-avoid. We didn’t get so much more memory that we could upscale everything into huge textures, but what we did get was more processing power to do more shader calculations. And that’s what we’re seeing now: the power of shaders behind VFX. Vertex shaders are driving water effects or creating convincing displacement for footsteps in the sand and snow or tire tracks in the grass and mud. I suggest you take the time to watch Naughty Dog’s video about how much vertex shaders add to Uncharted 4.
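As a rough illustration of that vertex-displacement idea (not Naughty Dog's actual implementation), here is a minimal sketch in which each ground vertex samples a deformation map that gameplay writes footsteps or tire tracks into, then sinks along its normal. All names and numbers are made up for the example:

```python
# A minimal sketch of heightmap-driven vertex displacement for footsteps/tracks.
# A real vertex shader would do a bilinear texture fetch; this uses nearest-neighbour.

def displace_vertex(position, normal, deform_map, uv, max_depth=0.15):
    """position/normal: (x, y, z) tuples; deform_map: 2D list of 0..1 values;
    uv: (u, v) in 0..1. Returns the displaced vertex position."""
    w = len(deform_map[0])
    h = len(deform_map)
    tx = min(int(uv[0] * w), w - 1)
    ty = min(int(uv[1] * h), h - 1)
    depth = deform_map[ty][tx] * max_depth   # how far this vertex sinks
    return tuple(p - n * depth for p, n in zip(position, normal))

# A footstep stamped into the middle of a tiny 4x4 deformation map:
deform = [[0, 0, 0, 0],
          [0, 1, 1, 0],
          [0, 1, 1, 0],
          [0, 0, 0, 0]]
print(displace_vertex((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), deform, (0.4, 0.4)))
```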
However, even though most engines have now gone Physically Based Rendering (PBR) and are using deferred rendering, in VFX we’re still left with essentially previous-gen tech. We see this problem in all of its glory with explosion VFX. We try to make the explosion look like it is part of the scene, but the scene is physically based and rendered with deferred rendering, while the explosion is absolutely not physically based and is rendered (in most engines) with forward rendering. From there, it’s going to be hard to really blend those things in.
Explosion VFX Issue
80Lv: Could you talk a bit about the problem with lighting and explosions? You’ve mentioned that normal-powered explosions usually don’t look quite right in real-time lighting scenarios. Could you talk a bit about the problems here and how they hurt the visuals in general?
Mederic: As I was just saying, the main issue is that while rendering has largely moved to PBR and deferred lighting, the transparents (most VFX) for the most part have not. Our current PBR models are great at replicating the lighting of surfaces with a satisfying degree of accuracy, and normals are an integral part of that. So it is very natural for an artist who joined the industry after the PS3 days to gravitate toward a lighting = normal maps mindset. But smoke, fire, etc. are not surfaces, they are volumes. So normal maps, which define which direction a pixel on a surface is facing, fall flat as a technique: we take a volume that has no surface to begin with, put it on a surface (a particle sprite), and try to apply a surface-based lighting approach to that flattened volume. And then we ask ourselves why it looks flat!
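For reference, this is the surface-lighting model being described: a classic Lambert term driven by a normal map. Applied to a smoke sprite, this is exactly the "flattened volume" problem, because the normal pretends the smoke has a facing surface. A purely illustrative sketch, not engine code:

```python
# A minimal sketch of normal-map-driven surface lighting (Lambert / N dot L).

def lambert(normal, light_dir, albedo):
    """normal, light_dir: unit (x, y, z) tuples; albedo: 0..1 grey value."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return albedo * max(n_dot_l, 0.0)  # clamp light hitting the back face to zero

# A "smoke" texel whose baked normal faces mostly toward the camera:
print(lambert((0.0, 0.0, 1.0), (0.577, 0.577, 0.577), 0.5))
```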
While we know how to light volumes based on accumulated density and raytracing, on gaming consoles we’ve mostly only been using that for cloud systems. We can do it for clouds because we don’t need to recompute everything every frame. You can see those in action in the latest Assassin’s Creed games, and also in Guerrilla Games’ Horizon Zero Dawn. But when it comes to gameplay explosions, it demands too much and we simply cannot afford it yet. We’re very close though, and we can already do some pretty neat things on beefier PCs (for reference, watch the Realtime Simulation and Volume Modelling Plugin video above). I am really looking forward to seeing the specs of the next generation of consoles, especially after Nvidia’s RTX announcement.
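To show roughly what lighting a volume by accumulated density looks like, here is a minimal raymarching sketch that attenuates transmittance with Beer-Lambert absorption as it steps through a density field. The soft-sphere density function and all parameters are stand-ins for a real 3D noise or volume texture, not any engine's code:

```python
# A minimal sketch of density-accumulation raymarching, as used for volumetric clouds.

import math

def density(x, y, z):
    # Stand-in volume: a soft sphere of smoke at the origin, radius 1.
    return max(1.0 - math.sqrt(x * x + y * y + z * z), 0.0)

def raymarch(origin, direction, steps=64, step_size=0.05,
             absorption=4.0, light=1.0):
    transmittance = 1.0   # how much background still shows through
    radiance = 0.0        # accumulated in-scattered light
    x, y, z = origin
    dx, dy, dz = direction
    for _ in range(steps):
        d = density(x, y, z)
        if d > 0.0:
            radiance += light * d * transmittance * step_size
            transmittance *= math.exp(-d * absorption * step_size)  # Beer-Lambert
        x, y, z = x + dx * step_size, y + dy * step_size, z + dz * step_size
    return radiance, transmittance

# March a ray straight through the smoke ball from the front:
print(raymarch((0.0, 0.0, -2.0), (0.0, 0.0, 1.0)))
```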
Animated Textures
80Lv: What makes animated textures such a popular tech for the production of explosions in real-time scenarios? I mean, we’ve seen this tech in most modern games, starting from god knows when. It seems like the only thing that progresses is the resolution, and the maps just become bigger and bigger! It would be awesome if you could talk a bit about the way games classically approach explosions and what you think the next thing here will be.
Mederic: As I mentioned earlier, it really rose in popularity in the early PS3 / Xbox 360 era. The size of the textures we could use jumped while the overdraw we were permitted dropped dramatically, so it became a really natural way to do things. Especially since in most games of that era (DirectX 9), time of day was mostly static and lightmapped environments were the norm, so we could bake some self-shadowing into those animated textures in an attempt to blend in better with the environment. It was also the time when Houdini and FumeFX were starting to become a thing, so artists could simulate explosions and render them into animated textures, which gave their realism a huge boost. The problem, however, was that it was all baked, so things like self-shadowing only worked when viewed a certain way. A lot of games got around this issue with a few techniques. Here are a few classic examples of hiding the problem:
More fire!
- If there is more fire and more emissive, then there is less need for smoke.
- Uncharted 2’s gameplay explosions illustrate this quite well. We see fire and dirt, but we leave out things that need lighting (like smoke) whenever possible.
Using black and dark gray smoke
- When the smoke is really dark, the balance between light and shadow is harder to spot, so it doesn’t look so bad when that light-versus-shadow interplay is, well, mostly nonexistent.
- It means that a few details in the texture can be enough to fake it.
- Also, when light and shadow are close in color, we see very few directionally recognizable details, so inaccurate self-shadows (or a lack of them) don’t immediately jump out.
I’m going to use a Call of Duty image to illustrate this point:
While those techniques were used with success in the past, they start to look dated really fast against today’s expectations. So while traditionally baked animated textures are still very popular and we’re still using them heavily, we’re moving more towards baking calculations into them rather than baking a final look. The main issue moving forward is memory: the more we bake, the more memory we need. If we want an animated volume texture, then we’re talking about 4D textures, and there is no way we get that kind of memory even in the near future. The fact is, processing power is climbing much faster than memory size (and the speed of access to that memory), to the point that, with RTX on the horizon, it’s much more practical to think about simulating in real time than baking things into textures.
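A quick back-of-the-envelope calculation shows why the memory wall hits so fast. The resolutions below are assumptions chosen purely for illustration, not numbers from any shipped game:

```python
# Memory cost of one baked "4D" (animated volume) texture, uncompressed:
# a single-channel 8-bit 128^3 volume, animated over 64 frames.

voxels_per_frame = 128 ** 3          # 2,097,152 voxels per frame
bytes_per_voxel = 1                  # one 8-bit density channel
frames = 64

total_bytes = voxels_per_frame * bytes_per_voxel * frames
print(total_bytes / (1024 ** 2), "MiB")   # 128 MiB for a single baked explosion
```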
Water Effects
80Lv: Let’s talk a little bit about the other huge effect that is incredibly important for modern games: water! How does water usually work in the 3D game environment? What is the general approach to this tech and how do you make it work in the real-time engine?
Mederic: Well, everyone has their own special way of doing water. A lot of it is based on common, well-known things like FFT to create real-time waves, which are then passed into vertex deformation to make a mesh deform as water would. Then shading comes in to make it look like water, with specular, translucency, etc., making it as believable as possible. That being said, the main problem is not the water itself but rather the interaction with it. As long as you are just looking at it, it looks rather okay; but when you put a character in to swim or things to float in it, that’s where things get dicey. In order to solve those issues, we really have to dig deep, using techniques from all game dev eras: meshes, particles, UVs, vertex colors, frame effects, render-to-texture, flow maps, look-up textures… anything and everything gets thrown in. Well, not all at the same time, but it’s a lot of “right tool for the right job” to play around with. For example, traditional particles are great for water impacts but terrible for small flows around objects, unless they are rendered into an off-screen buffer and injected back into the water material. But then that requires more memory… Meshes are awesome for getting shapes the main body of water won’t be able to do (due to performance limitations), but they usually fall behind in detail and are highly prone to falling into the uncanny valley. All these things end up creating overdraw and other performance problems. So it becomes quite a battle and a never-ending quest to find better ways to get the most out of as little as possible. In the end, it depends on what water is for you. Is it an ocean, a puddle, a raindrop, or all of these? Each calls for a different method.
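To give a flavor of that wave-displacement step, here is a minimal sum-of-sines sketch that offsets the height of a water-surface vertex. Full FFT oceans (Tessendorf-style) synthesize a whole spectrum of such waves at once; the wave parameters below are invented for the example:

```python
# A minimal sketch of per-vertex water displacement as a sum of sine waves.

import math

# (amplitude, wavelength, speed, direction) per wave; all values are arbitrary.
WAVES = [
    (0.50, 8.0, 1.0, (1.0, 0.0)),
    (0.25, 3.0, 1.5, (0.7, 0.7)),
    (0.10, 1.0, 2.5, (0.0, 1.0)),
]

def water_height(x, z, time):
    """Vertical offset applied to a water-grid vertex at (x, z)."""
    height = 0.0
    for amplitude, wavelength, speed, (dx, dz) in WAVES:
        k = 2.0 * math.pi / wavelength            # wave number
        phase = k * (x * dx + z * dz) - speed * time
        height += amplitude * math.sin(phase)
    return height

# Sample a few vertices of the water mesh at t = 1.0 s:
print([round(water_height(x, 0.0, 1.0), 3) for x in (0.0, 1.0, 2.0)])
```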
Real-Time Water
80Lv: Do you think we could figure out a way to simulate water in real time instead of just faking it? Will our tech ever be there? How does that work? Could you talk a little bit more about the way you think water simulation is going to go?
Mederic: It’s already here, but not entirely. We can simulate waves in real time, but mostly in 2D (only the surface, realistically), and it’s a problem of scale. Simulating fully 3D water is similar to, say, computing what happens to each plastic ball in a kids’ ball pit when someone jumps into it. Smaller and more numerous balls will make it look closer to a fluid while dramatically increasing the cost. Now imagine an ocean-sized ball pit. That’s a lot of balls to calculate. We’re just not there yet, and that’s mostly why current-gen tech is more about getting the surface right rather than everything else that happens underneath it.
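Some rough arithmetic on the ball-pit analogy, with made-up numbers, shows the scale of the problem:

```python
# How many 1 cm "balls" (particles) would it take to fill just one square
# kilometre of ocean to a depth of 10 m? Numbers are purely illustrative.

particle_size_m = 0.01                     # 1 cm particles
area_m2 = 1000 * 1000                      # a 1 km x 1 km patch
depth_m = 10

particles = (area_m2 * depth_m) / (particle_size_m ** 3)
print(f"{particles:.0e} particles")        # 1e13: far beyond real-time budgets
```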
I think we’re much closer to getting something like rolling foam on wave crests nicely simulated in real time than to a fully simulated ocean. We are, however, able to use Houdini FX and fluid simulations to bake those simulations into usable in-game data and get credible shapes and animation. But after that, it’s up to the VFX artist to use those creatively and make the most of them. We’re seeing a lot of cool stuff starting to pop up around the industry with those. A very good example of such tech is in Rise of the Tomb Raider:
Future Real-Time VFX Development & Challenges
80Lv: In terms of research and development, where do you think the main forces will go in the next couple of years? What do you think will be the greatest challenge for real-time VFX in games?
Mederic: A bigger push on the shader side; we’ve only scratched the surface. I also think we’ll see a lot more raymarching-based stuff. The next holy grail for VFX is transparent sorting and transparent lighting. Accumulation- and density-based lighting looks really promising for real-time effects. There’s a trend in computing right now where processing power evolves faster than memory, and when it comes to real-time effects, the more we calculate, the less we bake, and the less we bake, the less memory we use. However, with the ever-increasing importance of simulation and the rise of simulation tools, it’s increasingly easy to forget about the 12 principles of animation (anticipation, staging, follow-through, exaggeration, and timing, to name a few), which are critical to delivering an impactful, dazzling, or even emotional experience. We are getting closer to the point where we’ll have enough processing power to use a lot more real-time simulation in VFX, and the key challenge is going to be how well VFX artists can bend those simulations to keep making use of those principles of animation.
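As a tiny illustration of why transparent sorting is such a sticking point: alpha-blended particles generally have to be drawn back-to-front relative to the camera every frame (the painter's algorithm). A minimal, purely illustrative sketch with made-up data:

```python
# Back-to-front sorting of transparent particles by squared distance to the camera.

def sort_back_to_front(particles, camera_pos):
    """particles: list of dicts, each with a 'position' (x, y, z) tuple."""
    def view_distance_sq(p):
        return sum((a - b) ** 2 for a, b in zip(p["position"], camera_pos))
    return sorted(particles, key=view_distance_sq, reverse=True)

smoke = [{"name": "near", "position": (0, 0, 1)},
         {"name": "far", "position": (0, 0, 10)}]
print([p["name"] for p in sort_back_to_front(smoke, (0, 0, 0))])
# -> ['far', 'near']: the far puff is blended first so the near one layers on top.
```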
Advice for Learners
80Lv: Final question: would you give us some recommendations for people who’d love to start learning more about VFX? What are the good books, videos, tutorials we could start playing with? Thanks!
Mederic: Definitely brush up on the 12 principles of animation. I would argue that it is the most important thing: if those can make a box interesting (check it out), they can make any VFX amazing. The Illusion of Life book is a great resource for this. Then you can also start from this RealTimeVFX page and begin getting your hands dirty! The RealTimeVFX forums are the place to start: they’re very active and full of resources, so don’t be shy to ask around.
The video below is also a good introduction to particle systems:
I think real-time VFX is currently one of the most exciting fields in the video game industry. It takes art and tech in equal measure, so there is never a dull day: always a new challenge and plenty of “eureka!” moments to be had. But most importantly, it’s fun as hell.