4 Technologies To Change Art Production
6 April, 2017
At the Art Direction Bootcamp during GDC 2017, Andrew Maximov, Lead Technical Artist at Naughty Dog, gave a talk on the future of art production for video games. It was a prognosis detailing four technological advancements that are going to drastically change our approach to game development.

These changes cause a lot of fear in the artistic community. I know this because I interact with that community every day. The introduction of 3D scanning, simulation, and procedural generation is changing the way we approach game development. These new techniques may remove the need for parts of the production pipeline that have been standard practice for ages. Megascans and SpeedTree are already taking work away from foliage artists. Environment artists can now scan entire buildings in a day. And developers used Houdini to build an entire town in Ghost Recon: Wildlands. Technology is changing the way we live. It's changing the way we work. And if you really want to freak out about it, read Nick Bostrom's book Superintelligence. But we're not about to dive into deep philosophical discussions; we'll leave that to Elon Musk.

Instead, we'll discuss Andrew Maximov's informative talk, adding our own commentary and naming a handful of companies that are already influencing the way we approach production today. In doing so, we hope to lessen the community's fear about the future and show that there is a light at the end of the tunnel for those working in the video game industry.

Optimization Automation

Optimization is a common struggle for game developers, and game artists have been grappling with technical restrictions for ages. Back in the NES days, color itself was a technical resource: it had to be carefully managed because the hardware could only display a handful of colors on screen at a time. If you want to see what a game artist's tools looked like back then, take a look at the Sega Digitizer System.
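To get a feel for that constraint, here is a minimal sketch that quantizes an image down to a four-color palette, roughly the kind of per-tile budget artists had to work within on 8-bit hardware. It assumes the Pillow package is installed, and the input file sprite.png is hypothetical:

```python
# Minimal sketch: quantize an image to a tiny fixed palette to mimic
# old-hardware color budgets. Assumes Pillow is installed and that a
# hypothetical input file "sprite.png" exists.
from PIL import Image

MAX_COLORS = 4  # roughly a per-tile palette budget on 8-bit consoles

img = Image.open("sprite.png").convert("RGB")

# Pillow's quantize() reduces the image to at most MAX_COLORS palette
# entries, forcing the same kind of compromise artists once made by hand.
limited = img.quantize(colors=MAX_COLORS)
limited.save("sprite_4color.png")

print(f"Colors actually used: {len(limited.getcolors())}")
```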

Plenty of compromises had to be made back when these restrictions were prevalent throughout the industry. Today, color is no longer a technical issue, which poses a question: what other aspects of the game production pipeline will be optimized away in the future?

Several parts of the game production pipeline look to be going away: manual low-to-high poly workflows, UV unwrapping, LODs, and collision meshes (one such step, automatic UV unwrapping, is sketched below). In the future, games will simply display anything and everything developers want to put on screen.
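As an illustration of how little hand work some of these steps still need, here is a minimal sketch of automatic UV unwrapping. It assumes the trimesh and xatlas Python packages are installed, and the input file model.obj is hypothetical:

```python
# Minimal sketch: automatic UV unwrapping with xatlas.
# Assumes the `trimesh` and `xatlas` packages are installed and that a
# hypothetical file "model.obj" exists.
import trimesh
import xatlas

mesh = trimesh.load("model.obj", force="mesh")

# xatlas packs the mesh into UV charts automatically: vmapping maps new
# vertices back to the originals, indices is the new triangle list, and
# uvs holds the generated texture coordinates.
vmapping, indices, uvs = xatlas.parametrize(mesh.vertices, mesh.faces)

print(f"{len(mesh.vertices)} vertices unwrapped into {len(uvs)} UV vertices")
```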

Many of those items are already being automated today. Developers are automating LOD generation and improving automatic UV unwraps. The more this happens, the faster those chunks of the pipeline will become obsolete. And frankly, we believe this is a beneficial trend, because these processes have very little artistic value.
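Automatic LOD generation follows the same pattern. Here is a minimal sketch that decimates a mesh into a chain of LODs, assuming the open3d package is installed and using the same hypothetical model.obj:

```python
# Minimal sketch: building a LOD chain by quadric-edge-collapse
# decimation. Assumes the `open3d` package is installed and that a
# hypothetical file "model.obj" exists.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("model.obj")
base_tris = len(mesh.triangles)

# Each LOD halves the triangle budget of the previous one; an artist
# only reviews the results instead of remodeling by hand.
for level in range(1, 4):
    target = base_tris // (2 ** level)
    lod = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
    o3d.io.write_triangle_mesh(f"model_lod{level}.obj", lod)
    print(f"LOD{level}: {len(lod.triangles)} triangles")
```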

If you're interested in learning more about how technology is changing the game production pipeline, listen to the lectures by Michael Pavlovich and Martin Thorzen. These technical artists can teach you plenty about how tools make game production easier.

Capturing Reality

Capturing reality is nothing new in the world of video games (remember the original Prince of Persia?), but it has become a bit controversial within the industry.

Back in 1986, Jordan Mechner, the creator of Prince of Persia, and his brother went outside to snag something other than fresh air. Mechner filmed his brother running around a parking lot with a video camera, then rotoscoped the footage frame by frame to paint what he had captured into the game.
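The capture half of that process is trivial today. Here is a minimal sketch that dumps video frames for an artist (or an algorithm) to trace, assuming the opencv-python package is installed; the recording run.mp4 is hypothetical:

```python
# Minimal sketch: extracting frames from reference footage, the first
# step of a rotoscoping workflow. Assumes the `opencv-python` package
# is installed and that a hypothetical file "run.mp4" exists.
import cv2

cap = cv2.VideoCapture("run.mp4")
frame_index = 0

while True:
    ok, frame = cap.read()  # read() returns (success_flag, image)
    if not ok:
        break
    # Keep every 4th frame to approximate a low animation frame rate.
    if frame_index % 4 == 0:
        cv2.imwrite(f"frame_{frame_index:04d}.png", frame)
    frame_index += 1

cap.release()
print(f"Scanned {frame_index} frames")
```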

Thus, the concept behind all of those up-and-coming scanning techniques is something the industry has been familiar with for quite some time.

Max Payne (2001) utilized facial scanning techniques and Sam Lake's face as the model for the titular character, with amazing results. Today, these scans can be applied to a character's entire body; that's how Norman Reedus and Guillermo del Toro ended up in Death Stranding!