Andrew Maximov discussed the inevitable changes coming to artistic pipelines and explained how they will impact the market and artistic jobs in gamedev.
During GDC 2017 at the Art Direction Bootcamp, Andrew Maximov, Lead Technical Artist at Naughty Dog, gave a talk on the future of art production for video games. It was a prognosis, detailing four of the top technological advancements that are going to drastically change our approach to game development.
These changes cause a lot of fear in the artistic community. I know this as I interact with that community every day. The introduction of 3D scanning, simulation, and procedural generation is changing the way we treat game development. These new elements may remove the need for parts of the game production pipeline that have been standard practice for ages. Megascans and SpeedTree already are taking away jobs from foliage artists. Environment artists can now scan entire buildings in a day. And developers used Houdini to build an entire town in Ghost Recon: Wildlands. Technology is changing the way we live. It’s changing the way we work. And if you really want to freak out about it, read Nick Bostrom’s book Superintelligence. But, we’re not about to dive into deep philosophical discussions—we’ll leave that to Elon Musk.
Instead, we’ll discuss Andrew Maximov’s informative talk while adding our own commentary on the subject, as well as naming a handful of companies that are already influencing the way we treat production today. In doing so, we hope to lessen the community’s fear about the future and demonstrate that there is a light at the end of the tunnel for those working in the video game industry.
Optimization Automation
Optimization is a fairly common struggle for game developers. Game artists have been grappling with technical restrictions for ages. Back in the NES days, color itself was a technical resource. It had to be carefully managed because older hardware was unable to visualize many colors on a screen at one time. If you want to check out how a game artist’s tools looked back then, take a look at the Sega Digitizer System.
Plenty of compromises had to be made back in the day when these technical restrictions were largely prevalent throughout the industry. Today, color is no longer a technical issue. But, this poses a question: What other aspects of our game production pipeline will become optimized in the future?
There are many aspects of the game production pipeline that look to be going away: manual low-to-high-poly workflows, UV unwrapping, LODs, and collision meshes. In the future, games will display everything and anything game developers want to portray on a screen.
Many of those items are already being automated today. Developers are automating LOD generation and improving automatic UV unwrapping. The more this happens, the faster these chunks of the pipeline will become obsolete. And frankly, we believe this is a beneficial trend because these processes have very little artistic value.
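To make the idea concrete, here is a toy sketch of one such automation: generating a lower level of detail by clustering vertices on a grid. It assumes a simple indexed triangle mesh and is only an illustration; production tools rely on much smarter decimation, such as quadric error metrics.

```python
# A toy sketch of automated LOD generation via vertex clustering, assuming a simple
# indexed triangle mesh (vertex list + triangle index list). Production tools use far
# smarter decimation (e.g. quadric error metrics); this only illustrates the idea.
from collections import defaultdict

def cluster_lod(vertices, triangles, cell_size):
    """Merge all vertices that fall into the same grid cell and drop degenerate triangles."""
    cell_to_new = {}
    remap = []                   # old vertex index -> new vertex index
    members = defaultdict(list)  # new vertex index -> original positions merged into it
    for x, y, z in vertices:
        cell = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        if cell not in cell_to_new:
            cell_to_new[cell] = len(cell_to_new)
        new_index = cell_to_new[cell]
        remap.append(new_index)
        members[new_index].append((x, y, z))

    # Each merged vertex sits at the average of the originals it replaced.
    new_vertices = [
        [sum(coord) / len(members[i]) for coord in zip(*members[i])]
        for i in range(len(cell_to_new))
    ]

    # Re-index triangles and drop those that collapsed into a line or a point.
    new_triangles = []
    for a, b, c in triangles:
        a, b, c = remap[a], remap[b], remap[c]
        if a != b and b != c and a != c:
            new_triangles.append((a, b, c))
    return new_vertices, new_triangles

# Usage: coarser cells give lower-detail LODs.
# lod1 = cluster_lod(verts, tris, cell_size=0.1)
# lod2 = cluster_lod(verts, tris, cell_size=0.5)
```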
Capturing Reality
Capturing reality is nothing new in the world of video games (remember the original Prince of Persia?) but has become a bit controversial within the industry.
Back in 1986, Jordan Mechner, the creator of Prince of Persia, and his brother went outside to snag something other than fresh air. Mechner captured his brother running around a parking lot with a video camera, and then he rotoscoped the footage frame by frame to paint what he had captured into the game.
Thus, the concept behind all those new up-and-coming scanning techniques is something the industry has been familiar with for quite some time.
Max Payne (2001) utilized facial scan techniques and Sam Lake’s face model for the titular character with amazing results. Today, these scans can be applied to a character’s entire body—that’s how Norman Reedus and Guillermo del Toro ended up in Death Stranding!
Technically, there’s nothing in the world that we can’t scan! We can scan humans, animals, organic environments, you name it; all we need is enough pictures of the object for a proper scan. Reflective surfaces have presented the biggest problem thus far, but there are particular capture setups and special photogrammetry sprays (which can be used to coat the object) that help solve this issue. It’s possible to scan entire environments with photo-realistic results. The only real limitation here is memory, and memory capacity is only going to grow. Over time, game developers will be able to exercise this technology extensively.
Clearly, there are concerns and questions that still revolve around photogrammetry. It’s clunky to implement into a regular development pipeline and game developers won’t always have the resources to capture images from different locations around the world.
By the time photogrammetry reaches mass adoption within the industry, however, most of these issues will be resolved. Fundamentally, the familiar workflow of producing everything from scratch is changing because we now have the technology to scan objects and place them into video games. With this technology, we’ll start treating the world the way movie directors do, using everything around us as one giant movie set. You’ll be able to light an object, dress a person, or modify a structure, but you’ll still have to create and communicate art at the end of the day. Put simply, the art of video games isn’t going anywhere. Look at Blade Runner for a similar example from the film industry: it leaned heavily on real-life Los Angeles, yet it is perceived as a cyberpunk masterpiece.
Another reason why photogrammetry will become a major part of our game production pipeline is because of its reasonable cost. It’s going to gain traction because it’s much cheaper to use an existing scan of an entire environment rather than produce one by hand.
Parametrization, Simulation, and Generation
The world contains a variety of organic systems that can be simulated with existing technologies. Two years ago, Epic Games built a simulation of a forest floor for its Kite demo. The system places rocks, bushes, and other expected forest objects within an environment and follows species rules that determine overlaps: relative size, shade, altitude, slope, and more. You can learn more about this production from the talk Epic Games gave at GDC in 2015.
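As a rough illustration of what such species rules can look like in code, here is a hedged sketch that scatters objects over a terrain and rejects candidates based on slope, altitude, and overlap. The species table, thresholds, and terrain callbacks are invented for the example; this is not Epic’s system, only the general shape of the idea.

```python
# A hedged sketch of rule-driven scattering. The species table, thresholds, and the
# height/slope callbacks are invented; this only shows the general placement logic.
import math
import random

SPECIES = {
    "pine": {"radius": 2.0, "max_slope": 0.4, "min_alt": 0.0, "max_alt": 60.0},
    "bush": {"radius": 0.8, "max_slope": 0.6, "min_alt": 0.0, "max_alt": 80.0},
    "rock": {"radius": 0.5, "max_slope": 1.0, "min_alt": 0.0, "max_alt": 100.0},
}

def scatter(height_at, slope_at, extent, attempts=5000, seed=0):
    """Throw random candidates at the terrain and keep the ones that satisfy the rules."""
    rng = random.Random(seed)
    placed = []  # list of (species_name, x, y)
    for _ in range(attempts):
        name = rng.choice(list(SPECIES))
        rules = SPECIES[name]
        x, y = rng.uniform(0.0, extent), rng.uniform(0.0, extent)
        # Terrain rules: reject spots that are too steep or outside the altitude band.
        if slope_at(x, y) > rules["max_slope"]:
            continue
        if not (rules["min_alt"] <= height_at(x, y) <= rules["max_alt"]):
            continue
        # Overlap rule: reject candidates whose footprint intersects anything already placed.
        overlaps = any(
            math.hypot(x - px, y - py) < rules["radius"] + SPECIES[other]["radius"]
            for other, px, py in placed
        )
        if not overlaps:
            placed.append((name, x, y))
    return placed

# Usage with a flat test terrain:
# props = scatter(height_at=lambda x, y: 10.0, slope_at=lambda x, y: 0.1, extent=100.0)
```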
Looking at SpeedTree, one can see that we’re moving away from manipulating individual vertices and polygons as virtual constructs and toward working with objects as if they were real. We want to control an object’s height and density. With Substance Painter, we’re no longer working on pixels and textures but rather applying materials and brush strokes with a keen eye for realism.
The same approach is taken with respect to physicality. We used to treat objects as purely virtual, but further down the line we’ve started treating them as things we would expect to pluck from the real world. We want to populate our worlds with objects that have physics and interact with each other in dynamic ways. This allows us to create more credible worlds while also saving precious time. Why waste time sculpting every single fold on a curtain and worrying about how each will move when you can produce the same effect with a physics simulation?
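For a sense of what such a simulation looks like under the hood, here is a tiny sketch of the classic approach: particles integrated with Verlet steps and relaxed back to a fixed distance from their neighbors. A real curtain would be a 2D grid of these constraints plus collision handling; the parameters here are arbitrary.

```python
# A tiny sketch of the kind of physics that replaces hand-sculpted folds: a hanging
# chain of particles integrated with Verlet steps and distance constraints. A real
# curtain would be a 2D grid of these, plus collisions; the numbers are arbitrary.
def simulate_chain(num_points=10, rest_length=0.1, gravity=-9.8, dt=1.0 / 60.0, steps=120):
    # Points start in a horizontal row; point 0 is pinned in place.
    pos = [[i * rest_length, 0.0] for i in range(num_points)]
    prev = [p[:] for p in pos]
    for _ in range(steps):
        # Verlet integration: the next position comes from the current and previous ones.
        for i in range(1, num_points):
            x, y = pos[i]
            px, py = prev[i]
            prev[i] = [x, y]
            pos[i] = [2 * x - px, 2 * y - py + gravity * dt * dt]
        # Constraint relaxation pulls neighbouring points back to the rest length.
        for _ in range(10):
            for i in range(num_points - 1):
                ax, ay = pos[i]
                bx, by = pos[i + 1]
                dx, dy = bx - ax, by - ay
                dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
                corr = (dist - rest_length) / dist
                if i == 0:
                    pos[i + 1] = [bx - dx * corr, by - dy * corr]
                else:
                    pos[i] = [ax + dx * corr * 0.5, ay + dy * corr * 0.5]
                    pos[i + 1] = [bx - dx * corr * 0.5, by - dy * corr * 0.5]
    return pos
```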
Another clear choice for implementing this technology is in-game characters. Imagine a situation where we scan a group of people and capture their facial features. Then, we blend between various extreme features and automate the representation of characters’ faces based on this diverse range of scans. Ten years from now, it wouldn’t be surprising if most indie studios were using an automated system akin to this. Black Desert already lets players create believable and interesting characters in a modularized, procedural manner.
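A minimal sketch of that blending step might look like the following, assuming every scanned face has been wrapped to the same vertex topology so positions correspond one-to-one. The scan names and weights are hypothetical; real character systems add far more structure on top.

```python
# A minimal sketch of blending scanned faces, assuming one-to-one vertex correspondence
# between scans. The scan names and weights are hypothetical example data.
import numpy as np

def blend_faces(neutral, scans, weights):
    """neutral: (V, 3) array; scans: dict of (V, 3) arrays; weights: dict of floats."""
    result = neutral.astype(float).copy()
    for name, weight in weights.items():
        # Each scan contributes its offset from the neutral face, scaled by a slider value.
        result += weight * (scans[name] - neutral)
    return result

# Usage with hypothetical scan data:
# face = blend_faces(neutral_mesh,
#                    {"scan_a": mesh_a, "scan_b": mesh_b},
#                    {"scan_a": 0.7, "scan_b": 0.3})
```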
We’re going to reach a point within the industry when knowledge of computers is unnecessary because individuals will be able to interact with virtual objects just as they would with real-world objects. And this is not just an art-related issue, as it’s also occurring in programming: Blueprints in Unreal allow people to build entire games without a programmer. These technologies are building comprehensive functionality that helps people express their intent faster and with fewer complicated layers of execution.
AI Assistance and Machine Learning
Like many of the other concepts discussed in this article, AI assistance is nothing new to the video game industry. We’ve all been using AI-assisted technology; it’s been integrated into consumer goods, like your smartphone, for quite some time.
Google’s new messaging app, Google Allo, is connected to a neural network. It tracks your conversations and remembers how you replied to questions, and in the end the app proposes intelligent answers of its own. We’re going to see similar approaches in the world of game development soon.
It’s easy to imagine a situation where a neural net analyzes your previous game and all the art choices you’ve made, then provides you with solutions, which you can use, modify, or ignore. Google has already developed deep-learning neural nets that are used for image recognition.
When you search for images, the neural net scans a picture and produces words describing what it sees. For this new system, however, the company turned the neural net around: when you put words into it, the neural net produces images. This basically means the system is capable of some form of creativity, driven by the information humans fed it. And this is not the stuff of science fiction; it’s already being used in games!
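As a small illustration of what "turning the net around" means in practice, the sketch below optimizes an input image so that a pretrained classifier assigns a high score to one chosen class (activation maximization). It assumes PyTorch and torchvision are available, and the class index is an arbitrary example; this is not Google’s actual system, only the core trick behind classifier-driven image generation.

```python
# Activation maximization: optimize the pixels of an input image so a pretrained
# classifier scores one chosen class highly. Assumes PyTorch + torchvision; the
# class index is arbitrary example data.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

target_class = 207                       # one ImageNet class index, chosen arbitrarily
image = torch.zeros(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    logits = model(image)
    # Push the target class score up; a small L2 penalty keeps pixel values sane.
    loss = -logits[0, target_class] + 1e-4 * image.pow(2).sum()
    loss.backward()
    optimizer.step()
```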
During SIGGRAPH 2016, the team at Remedy talked about the way they’ve used a neural-network-based animation solver. They taught a neural net to convert raw video into almost game-ready animation. In general, deep learning has become a much bigger part of animation solvers today, and these systems are closer to being common than we all think. They will still give game developers choices and artistic control, but they will take away much of the manual labor associated with our professions.
During the same panel, Frostbite talked about similar concepts. Tim Sweeney of Epic Games is also a big aficionado of deep learning. As a matter of fact, someone at Epic Games is probably investigating this as you’re reading our article!
So, What’s Left?
3D scanning, neural nets, and smart algorithms are all great. But where do people factor into this equation? What’s left of the artistic process? At the moment, when developers have a creative problem, they come up with a solution and execute it, which can be a time-consuming process. With all this new technology on board, however, the execution step will become much faster. Hopefully, these technologies will take developers from a general idea to a finished product as quickly as possible.
The core skills we rely on as artists are still as valuable and essential as before. Still, certain skills that exist only to work around imperfect hardware or software might become redundant as those systems are perfected over time.
Intent will act as the basic building block of any art production or interface. We’ll have to build tools that facilitate creating content and objects that are more real than ever before. We’ll treat the content as something from the real world, and we’ll interact with it just as we would in the real world. This feeds very naturally into the artistic process.
How might it all look in the future? You’ll have high-level intent, where you generate an entire game’s space. You’ll also have regional intent, where you point to a certain place and pick a biome for your trees to grow. And you’ll have object-level intent, where you go in and modify particular elements, changing color, adding wear, and so on.
This is similar to the way movies operate. One wouldn’t say movies aren’t artistic or that they lack meaning—every shot in a good movie plays some role in contributing to the creation of an overall purposeful work of art. At the end of the day, it’s not so much about how a piece of art is created but rather how a piece of art impacts an individual in a meaningful manner.
In essence, creating art is a process of assigning meaning. No matter how optimized the creative process becomes, the artist’s intent and meaning is still crucially important. Game developers still have to think about a message, theme, or set of emotions they want to communicate to the audience in an effective manner. If a work of art doesn’t resonate with people, it won’t be successful as there’s nothing connecting the viewer or player to the work of art.
There is still a bigger question that needs to be answered: How will these changes impact art production in the video game industry? Once again, film acts as a fine reference industry. The film industry has gone through a technological revolution that democratized production as well. In the past, it was far more time-consuming, laborious, and expensive to create a film. But most people have a digital camera or a smartphone today, meaning more people have the opportunity to create films in a relatively inexpensive and convenient fashion.
For the video game industry, it means the following things:
1) Cheaper Production
Why do you need to model everything when you can just import plenty of scanned locations and polish them up? The same will happen with animation. Maybe in a couple of years, we’ll have a holographic phone that captures the likeness of real people and immediately creates in-game characters based on that likeness.
2) Smaller Teams
We’ll need smaller teams. Game developers will be able to do more with fewer people, meaning there’s going to be a heightened need for generalists. We are used to specialized roles, but at some point in the future it’s not going to make sense to follow this trajectory. Eventually, much of the work will be automated, and we’ll be able to focus more on artistry than on technical execution.
3) Production Times
Production times didn’t change that much in the movie industry after it experienced the previously mentioned tech revolution. Good stories and strong gameplay will still take time to mature. You might be able to produce an entire game world in three days, but that doesn’t mean you’ll have a great game. You’ll still have to iterate, playtest, and hone in-game systems, all of which take a great deal of time.
4) Photo-Realism is the New Indie
At the moment, it’s cheaper to create more stylized games. But this will change in the near future, when it will most likely be dirt-cheap to create games with photo-realistic graphics.
5) Jobs Diluted, Not Lost
I don’t believe many jobs will be lost as a result of this impending technological revolution. A similar event happened in the movie industry, and it didn’t shrink much. On the contrary, more value and more movies have been generated since then because it’s far simpler to create them than it was in the past. I’m convinced that the same trend will unfold in the video game industry. Currently, game making is a very technical process. Eventually, it won’t be, and more people will have a chance to express themselves by creating video games.
Technology is just a tool. Every day I come home and do some modeling and texturing, and I realize that I might not have much time for that anymore. But if this means that so many others will get a voice, then I’m all for it. We’re not inspired by someone doing UVs. We’re inspired by stories that profoundly change us, set in virtual worlds that we want to visit, explore, and sometimes live in. This is what an advancement in technology will grant us. Yes, we’re going to lose some things. Yet I can’t wait for this future to come.