People Can Fly Engineer on DLSS, Nanite & the Future of Gamedev

Peter Sikachev, Principal Graphics Programmer at People Can Fly, has discussed game development from a programmer's perspective, shared his thoughts on DLSS and how it helps developers, and explained the differences between older and newer consoles.

Introduction

Please introduce yourself. Where did you study? What companies have you worked for? What projects have you contributed to?

Peter Sikachev, Principal Graphics Programmer: Hi, my name is Peter Sikachev and I graduated from the Faculty of Applied Mathematics and Computer Science of Lomonosov Moscow State University in 2009.

Initially, I wanted to pursue an academic career, but after failing my Ph.D. at TU Wien, I decided to give my childhood dream a try: game development. I guess it worked out a bit better, since I've recently celebrated my 10th anniversary in the industry.

Throughout my career, I have worked on such titles as Skyforge, Thief, two installments of the Tomb Raider franchise, Deus Ex: Mankind Divided, Red Dead Redemption 2, and, most recently, Cyberpunk 2077. Currently, I am a Principal Graphics Programmer at People Can Fly, which is also a great place to work on some big triple-A projects.

Getting Into Programming

How did you get into programming? What made this direction perfect for you? Could you tell us about some of your tasks?

Peter Sikachev: In primary school, I was introduced to video games through such titles as Battle of Britain, Secret Weapons of the Luftwaffe, Settlers, and Silent Hunter. That's when the dream of making something like that one day was born, I suppose. In secondary school, I started playing with Logo, along with my brother. Logo is a programming environment for kids, where you have actors ('turtles') that you can control through a list of commands. In the version we used, one could also assign sprites to those turtles, so we had a specialization from the get-go: my brother became an artist and animator, drawing sprite shapes, and I did the programming, since I always enjoyed maths a bit more. I remember being really proud to use a parabolic equation from my math classes to create a realistic grenade flight in a short animation we produced!
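The parabolic trajectory mentioned above is just constant-gravity projectile motion. A minimal sketch (all names and constants here are illustrative, not from the interview):

```python
def grenade_position(v0x, v0y, t, g=9.81):
    """Position of a projectile launched from the origin, at time t.

    v0x, v0y -- initial velocity components; g -- gravity (m/s^2).
    x grows linearly, y follows the parabola y = v0y*t - g*t^2/2.
    """
    x = v0x * t
    y = v0y * t - 0.5 * g * t * t
    return x, y

# Sample a 2-second flight at 10 frames per second, as a sprite
# animation would: one (x, y) position per frame.
trajectory = [grenade_position(5.0, 10.0, i / 10.0) for i in range(21)]
```

Evaluating this once per frame is all a sprite-based animation needs to make a thrown object arc believably.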

Later, my passion for math and the joy of seeing moving pictures rather than rows of numbers as the output of my code naturally got me interested in 3D graphics. As a graphics programmer, I have worked on many components of the rendering engine: lighting, shadows, particles, terrain, water simulation – you name it! I don't have a single preferred domain; I love all areas of real-time graphics programming.

Consoles

PS5 and the latest Xbox consoles turned one. What did this step from PS4 to PS5 mean for game developers? Could you explain the difference to those who didn’t dive deep?

Peter Sikachev: As with every generation change, we got a performance boost across the board. The introduction of the SSD was arguably the biggest change – maybe not directly for rendering, but for streaming open worlds. Apart from that, we have received some new toys to play with, for instance, mesh shaders and variable-rate shading. I'd say we're still trying to embrace those and learn how to use them better.

We all heard about the strengths of new consoles. What are their main problems? It’d be great if you could share a couple of examples.

Peter Sikachev: As I mentioned, in my opinion, we didn't get as much added performance across the board as we did during the previous generation switch, yet the quality expectations jumped again. Besides that, the ray-tracing support of the next-gen consoles seems really basic compared to NVIDIA RTX cards. On the other hand, the architecture change between generations wasn't as drastic as last time, so I don't think we can expect many new issues.

Thoughts on DLSS

What’s your take on DLSS and similar tech from AMD? How does this tech help developers create better-looking games? What would you like NVIDIA to improve?

Peter Sikachev: DLSS and CAS/FSR are real life-savers in the era of 4K! These technologies allow us to render the scene at a lower resolution and then upscale it to the target one with minimal losses – thus saving computational power on shading those extra pixels and letting us invest it elsewhere. As with any technology, there are lots of corner cases, such as transparent geometry, which doesn't output to the depth and velocity buffers and thus can be smeared by such techniques, although one can try to mask it out. It would be really cool to have some technological magic that avoids artifacts on moving objects that lack velocity info but really need antialiasing, such as vegetation, since outputting velocity for it can be really expensive. I know that's a long shot, but maybe the wizards at NVIDIA/AMD are reading this?
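Back-of-the-envelope arithmetic shows why upscaling saves so much shading work: per-pixel cost scales roughly with pixel count. The resolutions below are a typical "quality-mode" pairing (1440p internal, 4K output) chosen for illustration, not quoted from the interview:

```python
# Shading cost scales roughly with the number of pixels shaded, so
# rendering internally at a lower resolution and upscaling to the
# target resolution cuts the per-pixel workload proportionally.

def pixel_count(width, height):
    return width * height

native_4k = pixel_count(3840, 2160)   # target output resolution
internal  = pixel_count(2560, 1440)   # internal render resolution

# Fraction of per-pixel shading work saved before upscaling.
savings = 1.0 - internal / native_4k  # ~0.56, i.e. ~56% fewer pixels shaded
```

That freed-up GPU time is what can then be reinvested into richer lighting, shadows, or effects.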

Illumination

What are the main challenges when illuminating complex, vast game environments these days in terms of programming?

Peter Sikachev: To this day, real-time global illumination is arguably an unsolved problem. There is a challenge hiding in every word of that expression! "Real-time" means that we can't precompute it once and for all and have to pay a performance cost for evaluating it at runtime. "Global" means that you somehow need to account for light bouncing off multiple spots around you: you need some representation of the scene (which might not be wholly visible on screen) and to trace rays inside that scene.

In general, the tracing process is not really friendly to GPU architecture, which is more specialized for rasterization, as neighboring rays can diverge, making SIMD execution inefficient. Finally, depending on the surface properties, we might need a more diffuse GI or more glossy GI contribution – and the methods for obtaining those could be drastically different.
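The divergence problem can be made concrete with the kind of per-bounce step a diffuse GI tracer performs: each pixel draws its own random bounce direction, so neighboring SIMD lanes end up traversing different parts of the scene. A sketch of standard cosine-weighted hemisphere sampling (a common choice for diffuse bounces; the seed and sample count are arbitrary):

```python
import math
import random

def cosine_weighted_direction(rng):
    """Sample a bounce direction around a +Z surface normal.

    Classic concentric mapping: pdf = cos(theta) / pi, which importance-
    samples the diffuse (Lambertian) term.
    """
    u1, u2 = rng.random(), rng.random()
    r = math.sqrt(u1)              # radius on the unit disk
    phi = 2.0 * math.pi * u2       # azimuth
    x = r * math.cos(phi)
    y = r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - u1))  # project disk up onto hemisphere
    return x, y, z

# Two adjacent "pixels" draw different directions from the same
# distribution -- this is the divergence that hurts SIMD execution.
rng = random.Random(42)
dir_a = cosine_weighted_direction(rng)
dir_b = cosine_weighted_direction(rng)
```

Every returned direction is a unit vector on the upper hemisphere, but no two neighbors agree on where to trace next.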

Nanite

The past year was all about Nanite and how it will allow developers to use millions of polygons in a game. What’s your take on the tech? What are the limitations?

Peter Sikachev: I think that even if this technology is not as good as advertised (I hope artists don't take it literally!), it is really a move in the right direction. Approaches based on the visibility buffer have been around for a while now, and the reasons behind them are fundamental to hardware and asset evolution – ALU work is getting relatively cheaper and cheaper, and triangles are getting smaller and smaller. In Epic's current implementation, it doesn't work on animated models, transparent assets, or alpha-tested geometry. Interestingly, the latter wasn't the case for Activision's v-buffer implementation: they claim a significant speedup when using it on foliage.
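The core visibility-buffer idea mentioned above can be sketched in a few lines: during rasterization each pixel stores only a packed (instance ID, triangle ID) pair, and a later pass unpacks the IDs and fetches the actual triangle data to shade. The bit widths below are illustrative, not taken from any particular engine:

```python
# Illustrative bit budget for one 32-bit visibility-buffer entry.
INSTANCE_BITS = 12   # up to 4096 instances
TRIANGLE_BITS = 20   # up to ~1M triangles per instance

def pack_visibility(instance_id, triangle_id):
    """Pack the two IDs into a single integer written per pixel."""
    return (instance_id << TRIANGLE_BITS) | triangle_id

def unpack_visibility(packed):
    """Recover (instance_id, triangle_id) in the deferred shading pass."""
    return packed >> TRIANGLE_BITS, packed & ((1 << TRIANGLE_BITS) - 1)
```

Deferring all material evaluation to the unpack pass is what lets such renderers stay efficient as triangles shrink toward pixel size.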

Neural Filters

We’ve recently seen several examples of how neural filters can be used to enhance game visuals. What is your take on these approaches? Will such technologies change gamedev in the near future?

Peter Sikachev: This is something very hard for me to speculate about. I am not really familiar with how expensive those filters are in terms of performance. If artists really become keen on them, we might look into whether it's possible to approximate them with cheap lookup tables.

Conclusion

What is the next big step for game visuals in your opinion? What is the main bottleneck: vegetation, liquids, fog, something else? What limits devs at this point?

Peter Sikachev: That is a very interesting topic! From the obvious perspective, we will definitely be embracing ray tracing and evaluating which of its applications give us the best bang for the buck and what will be affordable on current-gen hardware. Fluid simulation is a topic near and dear to my heart, and while it might not be the lowest-hanging fruit, in my opinion it could help us bridge the gap between the visual fidelity of solid geometry, which we can render almost perfectly now, and water/particles, which have always been second-class citizens.

Transparent geometry rendering in general could be a very interesting topic. We still don't have a great, consistent solution for it, despite the abundance of open-world games set in modern or futuristic megalopolises. Since rasterization has been a poor fit for transparencies, it will be interesting to see whether we can leverage ray tracing for it.

Peter Sikachev, Principal Graphics Programmer

Interview conducted by Arti Sergeev
