
4 Technologies To Change Art Production

Andrew Maximov discussed the inevitable changes coming to artistic pipelines and explained how they will impact the market and artistic jobs in gamedev.

During GDC 2017 at the Art Direction Bootcamp, Andrew Maximov, Lead Technical Artist at Naughty Dog, gave a talk on the future of art production for video games. It was a prognosis, detailing four of the top technological advancements that are going to drastically change our approach to game development.

These changes cause a lot of fear in the artistic community. I know this as I interact with that community every day. The introduction of 3D scanning, simulation, and procedural generation is changing the way we treat game development. These new elements may remove the need for parts of the game production pipeline that have been standard practice for ages. Megascans and SpeedTree already are taking away jobs from foliage artists. Environment artists can now scan entire buildings in a day. And developers used Houdini to build an entire town in Ghost Recon: Wildlands. Technology is changing the way we live. It’s changing the way we work. And if you really want to freak out about it, read Nick Bostrom’s book Superintelligence. But, we’re not about to dive into deep philosophical discussions—we’ll leave that to Elon Musk.

Instead, we’ll discuss Andrew Maximov’s informative talk while adding our own commentary and naming a handful of companies that are already influencing the way we treat production today. In doing so, we hope to lessen the community’s fear about the future and demonstrate that there is a light at the end of the tunnel for those working within the video game industry.

Optimization Automation

Optimization is a fairly common struggle for game developers. Game artists have been grappling with technical restrictions for ages. Back in the NES days, color itself was a technical resource. It had to be carefully managed because older hardware was unable to display many colors on a screen at one time. If you want to check out how a game artist’s tools looked back then, take a look at the Sega Digitizer System.

Plenty of compromises had to be made back in the day when these technical restrictions were largely prevalent throughout the industry. Today, color is no longer a technical issue. But, this poses a question: What other aspects of our game production pipeline will become optimized in the future?

There are many aspects of the game production pipeline that look to be going away: Manual Low to High Poly, UV Unwrap, LoDs, and Collision. In the future, games will display everything and anything game developers want to portray on a screen.

Many of those items are already being automated today. Developers are automating levels of detail and improving UV unwraps. The more this happens, the faster chunks of the pipeline will become obsolete. And frankly, we believe this is a beneficial trend moving forward because these processes have very little artistic value.
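
To make the idea concrete, here is a minimal, hypothetical Python sketch of what an automated LOD chain might look like. The decimate() function is only a stand-in for whatever reduction tool a studio actually uses (Simplygon, Blender’s Decimate modifier, an in-house solver), and the names, budgets, and falloff value are invented purely for illustration.

```python
# Hypothetical sketch: derive a whole LOD chain from one source mesh
# instead of authoring each LOD by hand.

from dataclasses import dataclass


@dataclass
class Mesh:
    name: str
    triangle_count: int


def decimate(mesh: Mesh, target_triangles: int) -> Mesh:
    """Placeholder for a real reduction call (Simplygon, Blender's Decimate, etc.)."""
    return Mesh(mesh.name, min(mesh.triangle_count, target_triangles))


def build_lod_chain(source: Mesh, lod_count: int = 4, falloff: float = 0.5) -> list[Mesh]:
    """Each LOD keeps roughly `falloff` of the previous LOD's triangle budget."""
    lods = [source]
    budget = source.triangle_count
    for i in range(1, lod_count):
        budget = max(int(budget * falloff), 16)  # never collapse below a small floor
        lod = decimate(source, budget)
        lod.name = f"{source.name}_LOD{i}"
        lods.append(lod)
    return lods


if __name__ == "__main__":
    for lod in build_lod_chain(Mesh("crate", 20_000)):
        print(lod.name, lod.triangle_count)
```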

If you’re interested in learning more about the ways technology changes the game production pipeline, feel free to listen to lectures by Michael Pavlovich and Martin Thorzen. These technical artists can teach you plenty about the way tools make game production easier.

Capturing Reality

Capturing reality is nothing new in the world of video games (remember the original Prince of Persia?) but has become a bit controversial within the industry.

Back in 1986, Jordan Mechner, the creator of Prince of Persia, and his brother went outside to snag something other than fresh air. Mechner captured his brother running around a parking lot with a video camera, and then he rotoscoped the footage pixel by pixel to paint what he had captured into the game.

Thus, the concept behind all these up-and-coming scanning techniques is something the industry has been familiar with for quite some time.

Max Payne (2001) utilized facial scan techniques and Sam Lake’s face model for the titular character with amazing results. Today, these scans can be applied to a character’s entire body—that’s how Norman Reedus and Guillermo del Toro ended up in Death Stranding!

DICE is one of the first major companies to regularly utilize photogrammetry on large-scale productions. In doing so, DICE cuts down development time and the overall cost of game asset creation. Battlefield 1 and Star Wars Battlefront were largely produced with photo-scanning techniques. The company covers this development process extensively in its official blog. Kenneth Brown and Andrew Hamilton’s talk from GDC 2016 also highlights the influence and importance of photogrammetry—by turning to it, they managed to cut the production time of Star Wars Battlefront in half (and more with automation!).

Technically, there’s nothing in the world that we can’t scan! We can scan humans, animals, organic environments, you name it—all we need is enough pictures of the object for a proper scan. Reflective surfaces have presented the biggest problem thus far, but there are particular setups and special photogrammetry sprays (used as a coating for the object) that help solve this issue. It’s possible to scan entire environments with photo-realistic results. The only real limitation here is memory, and memory budgets are only going to grow. Over time, game developers will be able to exercise this technology extensively.

Ready At Dawn did an astonishing job of material scanning for The Order: 1886. You can learn more about it from a video presentation conducted by Jo Watanabe and Brandi Parish.

Clearly, there are concerns and questions that still revolve around photogrammetry. It’s clunky to implement into a regular development pipeline and game developers won’t always have the resources to capture images from different locations around the world.

By the time photogrammetry reaches mass penetration within the industry, however, most of these issues will be resolved. Fundamentally, the familiar workflow of producing everything from scratch is changing because we now have the technology to scan objects and place them into video games. With this technology, we’ll start treating the world as movie directors do, using the world around us as one giant movie set. You’ll be able to light an object, dress a person, or modify a structure, but you’ll still have to create and communicate art at the end of the day. Put simply, the art of video games isn’t going anywhere. Look at Blade Runner for a similar example from the film industry! It leaned heavily on real-life Los Angeles but was still perceived as a cyberpunk masterpiece.

Another reason photogrammetry will become a major part of the game production pipeline is its reasonable cost. It’s going to gain traction because it’s much cheaper to use an existing scan of an entire environment than to produce one by hand.

A great way to get a glimpse of photogrammetry and scanned materials is through Megascans. Megascans has amassed a huge set of materials and vegetation, which you can use in game production. If you want an in-depth look at scanning, check out the talks by Oskar Edlund, James Busby, and James Candy in Introduction to 3D Scanning.

Parametrization, Simulation, and Generation

The world contains a variety of organic systems, which can be simulated on a computer using existing technologies. Two years ago, Epic Games built a simulation of a forest floor for its Kite demo. The system places rocks, bushes, and other expected forest objects within an environment and operates according to species rules that determine overlaps: relative size, shade, altitude, slope, and more. Learn more about this production from the talk Epic Games gave at GDC 2015.
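
As a toy illustration of that kind of rule-driven scatter (not Epic’s actual system), here is a hedged Python sketch: candidate positions are sampled on a made-up heightfield and then accepted or rejected by simple species rules for altitude, slope, and overlap against already-placed neighbors. Every name and number in it is an assumption for the sake of the example.

```python
# Toy rule-based scatter: sample candidates, reject by altitude, slope, and overlap.
import math
import random


def height(x: float, y: float) -> float:
    # Stand-in terrain: gentle rolling hills.
    return 10.0 * math.sin(x * 0.05) * math.cos(y * 0.05)


def slope_deg(x: float, y: float, eps: float = 0.5) -> float:
    dx = (height(x + eps, y) - height(x - eps, y)) / (2 * eps)
    dy = (height(x, y + eps) - height(x, y - eps)) / (2 * eps)
    return math.degrees(math.atan(math.hypot(dx, dy)))


SPECIES = [  # name, radius, max slope (deg), allowed altitude range
    ("pine", 3.0, 25.0, (-5.0, 8.0)),
    ("bush", 1.0, 35.0, (-8.0, 9.0)),
    ("rock", 0.6, 60.0, (-10.0, 10.0)),
]


def scatter(attempts: int, extent: float = 200.0, seed: int = 7):
    random.seed(seed)
    placed = []  # (name, x, y, radius)
    for _ in range(attempts):
        name, radius, max_slope, (lo, hi) = random.choice(SPECIES)
        x, y = random.uniform(0, extent), random.uniform(0, extent)
        h = height(x, y)
        if not (lo <= h <= hi) or slope_deg(x, y) > max_slope:
            continue  # rule rejection: wrong altitude or too steep
        if any(math.hypot(x - px, y - py) < radius + pr for _, px, py, pr in placed):
            continue  # overlap rejection against already-placed items
        placed.append((name, x, y, radius))
    return placed


if __name__ == "__main__":
    for name, x, y, _ in scatter(2000)[:10]:
        print(f"{name:5s} at ({x:6.1f}, {y:6.1f})")
```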

Computer systems can simulate natural processes to generate art. The developers of Horizon Zero Dawn took this a bit further, building an amazing system that populates the game’s world just as if it were built by an artist. Learn more about this technology in one of our 80.lv reports. If you found that interesting and want more, be sure to check out this video of procedural asset placement in the Decima Engine.

Looking at SpeedTree, one can see that we’re moving away from working with individual vertices or polygons as virtual objects to working with them as if they were real objects. We want to control an object’s height and density. With Substance Painter, we’re no longer working on pixels and textures but rather applying materials and brush strokes with a keen eye to realism.

The same approach is taken with respect to physicality. We used to treat objects as if they were virtual, but further down the line, we’ve started treating them as things we would expect to pluck from the real world. We want to populate our worlds with objects that have physics and interact with each other in dynamic ways. This allows us to create more credible worlds while also saving precious time. Why waste time sculpting every single fold on a curtain and worrying about how each will move when you can produce this effect with a physics simulation?

Another clear choice for implementing this technology is with in-game characters. Imagine a situation where we scan a bunch of people and capture their facial features. Then, we blend between various extreme features and automate the representation of characters’ faces based on this diverse range of facial features. Ten years from now, it wouldn’t be surprising if most indie studios were using an automated system akin to this. Black Desert already lets players create believable and interesting characters in a modular, procedural manner.
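
A heavily simplified sketch of what such a blending step could look like, assuming every scan has already been brought into the same vertex order as a neutral mesh (real pipelines also need correspondence solving, skinning, and texture blending), might be:

```python
# Blend between scanned face variants as weighted sums of per-vertex deltas.
import numpy as np


def blend_faces(neutral: np.ndarray, scans: list, weights: list) -> np.ndarray:
    """neutral and each scan: (num_vertices, 3) arrays in the same vertex order."""
    result = neutral.copy()
    for scan, w in zip(scans, weights):
        result += w * (scan - neutral)  # add each scan's delta, scaled by its weight
    return result


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    neutral = rng.standard_normal((5000, 3))
    scans = [neutral + rng.standard_normal((5000, 3)) * 0.02 for _ in range(3)]
    # 60% of scan 0's features, 30% of scan 1's, 10% of scan 2's
    new_face = blend_faces(neutral, scans, [0.6, 0.3, 0.1])
    print(new_face.shape)
```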

We’re going to reach a point within the industry when knowledge of computers is unnecessary because individuals will be able to interact with virtual objects just as they would with real-world objects. And this is not just an art-related shift, as it’s also happening in programming. Blueprints in Unreal allow people to build entire games without a programmer. These technologies are building comprehensive functionality that helps people express their intent faster and with fewer complicated layers of execution.

AI Assistance and Machine Learning

Like many other previous concepts discussed within this article, AI assistance is nothing new to the video game industry. We’ve all been using AI-assisted technology—it’s been integrated with consumer goods, like your smartphone, for quite some time.

Google’s new messaging app, Google Allo, is connected to a neural network. It tracks answers and remembers how you replied to questions. In the end, the app proposes intelligent answers, and we’re going to see similar approaches to this in the world of game development soon.

It’s easy to imagine a situation where a neural net is going to analyze your previous game and all the art choices you’ve made. Then, the neural net will provide you some solutions, which you can utilize, modify, or ignore. Google has already developed deep-learning neural nets that have been used for image recognition.

When you search for an image in Google, the neural net scans the image and displays words that match your search pattern. But, for this new system, the company turned the neural net around. When you put words into it, the neural net produces images. This basically means that the system is capable of some form of creativity, driven by the information humans fed it. This is not the stuff of science fiction—it’s already being used in games!

During SIGGRAPH 2016, the guys from Remedy talked about the way they’ve used a neural-network-based animation solver. They taught a neural net to convert raw video into almost game-ready animation. In general, deep learning has become a much bigger part of animation solvers today, and these systems are closer to being common than we might think. They will still present game developers with choices and artistic control, but they will take away much of the manual labor associated with our professions.

During the same panel, the Frostbite team talked about similar concepts. Tim Sweeney of Epic Games is also a big aficionado of deep learning. As a matter of fact, someone at Epic Games is probably investigating this as you’re reading our article!

A huge leap forward for applying deep-learning systems to game art was made in the area of texture generation, and a number of companies have worked hard on it. Recently, NVIDIA announced a closed beta for a new solution that helps game artists and addresses several problems with procedural textures. Artomatix is another big company working in this direction. 80.lv has seen a number of the company’s demos at GDC, showing technology that lets game artists rebuild hundreds of textures in a matter of seconds, upgrade textures to meet higher requirements, or easily build stylized versions of them. We believe Allegorithmic will be the next big player to support deep learning as part of the growing family of Substance tools (there hasn’t been an official announcement from the team, though).

So, What’s Left?

3D scanning, neural nets, and smart algorithms are all great. But, where do people factor into this equation? What’s left of the artistic process? At the moment, when developers have a creative problem, they come up with a solution and execute it, which can be a time-consuming process. With all this new technology on board, however, the execution will become much faster. Hopefully, these technologies will take developers from a general idea to a final product as quickly as possible.

Before / after comparison images.

The core skills we rely on as artists are still as valuable and as essential as before. Still, certain skills built around imperfect hardware or software might become redundant as those systems are perfected over time.

Intent will act as the basic building block of any art production or interface. We’ll have to build tools that facilitate creating content and objects that are more real than ever before. We’ll treat the content as something from the real world, and we’re going to interact with it just as we would in the real world. This actually feeds very well into the artistic process.

How might it all look in the future? You’ll have high-level intent, where you generate an entire game’s space. You’ll also have regional intent, where you just point to a certain place and pick a biome for your trees to grow in. And there will be object-level intent, where you go in and modify particular elements: changing color, adding wear, and so on.
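
Purely as a thought experiment, those three layers of intent could compose into something like the following sketch. None of these classes or calls exist in any shipping tool; they only illustrate how world-level, regional, and object-level intent might stack.

```python
# Hypothetical layering of "intent": world theme, regional biome, object tweaks.
from dataclasses import dataclass, field


@dataclass
class WorldIntent:
    theme: str                                   # high level: "temperate island"
    regions: dict = field(default_factory=dict)
    objects: dict = field(default_factory=dict)

    def paint_region(self, area: str, biome: str):
        self.regions[area] = biome               # regional: "grow a pine forest here"

    def adjust_object(self, object_id: str, **tweaks):
        self.objects[object_id] = tweaks         # object level: color, wear, scale...


if __name__ == "__main__":
    world = WorldIntent(theme="temperate island")
    world.paint_region("north_ridge", biome="pine_forest")
    world.adjust_object("lighthouse_01", color="faded_red", wear=0.7)
    print(world)
```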

This is similar to the way movies operate. One wouldn’t say movies aren’t artistic or that they lack meaning—every shot in a good movie plays some role in contributing to the creation of an overall purposeful work of art. At the end of the day, it’s not so much about how a piece of art is created but rather how a piece of art impacts an individual in a meaningful manner.

In essence, creating art is a process of assigning meaning. No matter how optimized the creative process becomes, the artist’s intent and meaning is still crucially important. Game developers still have to think about a message, theme, or set of emotions they want to communicate to the audience in an effective manner. If a work of art doesn’t resonate with people, it won’t be successful as there’s nothing connecting the viewer or player to the work of art.

There is still a bigger question that needs to be answered: How will these changes impact art production in the video game industry? Once again, film serves as a fine reference. The film industry has gone through a technological revolution that democratized production as well. In the past, it was far more time-consuming, laborious, and expensive to create a film. But most people have a digital camera or a smartphone today, meaning more people have the opportunity to create films in an inexpensive and convenient fashion.

For the video game industry, it means the following things:

1) Cheaper Production

Why do you need to model everything when you can just import plenty of scanned locations and polish them up? The same will happen with animation. Maybe in a couple of years, we’ll have a holographic phone that captures the likeness of real people and immediately creates in-game characters based on that likeness.

2) Smaller Teams

We’ll need smaller teams. Game developers could do more with fewer people, meaning there’s going to be a heightened need for generalists. We are used to specialized roles, but at some point in the future it’s not going to make sense to follow this trajectory. Eventually, much of the work will be automated, and we’ll be able to focus on artistry rather than on technical communication.

3) Production Times

Production times didn’t change that much in the movie industry after the previously mentioned tech revolution. Good stories and strong gameplay will still take time to mature. You might be able to produce an entire game world in three days, but that doesn’t mean you’ll have a great game. You’ll still have to iterate, playtest, and hone in-game systems, all of which take a large portion of the time.

4) Photo-Realism is the New Indie

At the moment, it’s cheaper to create more stylized games. But this will change in the near future, when it will most likely be dirt-cheap to create games with photo-realistic graphics as well.

5) Jobs Diluted, Not Lost

I don’t believe many jobs will be lost as a result of this impending technological revolution. A similar shift happened in the movie industry, and it didn’t shrink much. On the contrary, more movies and more value have been generated since then because it’s far simpler to create them than it was in the past. I’m convinced that the same trend will unfold in the video game industry. Currently, game making is a very technical process. Eventually, it won’t be, and more people will have a chance to express themselves through creating video games.

Technology is just a tool. Every day I come home, and I do some modeling and texturing. I realize that I might not have that much time to do this anymore. But if this means that so many others will get a voice, then I’m all for it. We’re not inspired by someone doing UVs. We’re inspired by stories that profoundly change us, set in virtual worlds we want to visit, explore, and sometimes live in. This is what an advancement in technology will grant us. Yes, we’re going to lose some things. Yet, I can’t wait for this future to come.

Andrew Maximov, Lead Technical Artist, Naughty Dog


Comments (6)

  • Chalfant

    These are all great tools that just get us closer to making great art faster.

  • Anonymous user

    I don't think environment art will go away. There's only so much you can scan; virtual worlds are not like real worlds. Scanning makes realistic modeling and sculpting obsolete, yes, especially vehicles: I think these could just be scanned and that's it. I'm working on some talks with Wargaming - maybe their guys will have more to say about this problem.

  • paul fedor

    His speech about "How is the water?" is such a perfect analogy.

    It's funny, I've been in the business so long I've seen it change from film to digital to realtime. Back in the day you would paint a practical set... with a fresh coat of paint. The scenic artists would come in with toothbrushes for flicked dirt, or a spray bottle mixed with water and acrylic and sponges to shape the spray or add simulated age or wear... a process known as "Streaks and Tips"... then they would add a coat of varnish to bring out a shiny highlight... or sandpaper an aged spot.

    The digital scenics have now traded in their spray-bottle dirt for a procedural roughness layer in Substance Painter... adding a highlight in realtime with a metal shader that would take hours to dry if it were real varnish. They traded in latex molds for subsurface scattering shaders, and handed in the old compressor airbrushes for realtime PBR.

    The history of how we got to 2017 tech is a great lesson.

    Thanks for the post

  • Anonymous user

    I think it always comes back to basics. If you're a good artist, you'll be good with a brush or Megascans. It doesn't matter. The tools are just tools; they are here to help you. What worries me is that art production might get scaled down A LOT, and eventually a lot of people will be out of jobs (at least for some time).

  • asdas

    This is great stuff. While I can't wait for a lot of this to be more optimized and automated, it's still important to know why you are doing what you are doing.

  • ALX

    Great article, Andrew. As more tools appear to make realistic games easier to make, I think other tools for making stylized games will appear too. Maybe some of them will apply filter/shader/style techniques to a realistic/scanned photo to make it low-poly, for example, or to mix it with another art style (like Deepart.io).

