During GDC 2017 at the Art Direction Bootcamp, Andrew Maximov, Lead Technical Artist at Naughty Dog, gave a talk on the future of art production for video games. It was a prognosis, detailing four of the top technological advancements that are going to drastically change our approach to game development.
These changes cause a lot of fear in the artistic community. I know this as I interact with that community every day. The introduction of 3D scanning, simulation, and procedural generation is changing the way we treat game development. These new elements may remove the need for parts of the game production pipeline that have been standard practice for ages. Megascans and SpeedTree already are taking away jobs from foliage artists. Environment artists can now scan entire buildings in a day. And developers used Houdini to build an entire town in Ghost Recon: Wildlands. Technology is changing the way we live. It’s changing the way we work. And if you really want to freak out about it, read Nick Bostrom’s book Superintelligence. But, we’re not about to dive into deep philosophical discussions—we’ll leave that to Elon Musk.
Instead, we’ll discuss Andrew Maximov’s informative talk while adding our own commentary on the subject as well as naming a handful of companies that already are influencing the way we treat production today. In doing so, we hope to lessen a community’s fear about the future and demonstrate that there is a light at the end of the tunnel for those working within the video game industry.
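Since procedural generation comes up repeatedly below, a tiny concrete example helps. Procedural foliage of the kind SpeedTree popularized is classically described with L-systems: grammars that rewrite a string of drawing commands each generation, so a single seed rule blooms into a branching structure. The axiom and rule here are illustrative only, not any tool's actual grammar.

```python
# Minimal L-system expansion: the grammar-rewriting idea behind much
# procedural foliage. The axiom and rule below are illustrative only.

def expand(axiom, rules, iterations):
    """Rewrite every symbol by its rule (or keep it) `iterations` times."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Classic branching grammar: F = draw forward, [ and ] = push/pop a branch,
# + and - = turn. One rule is enough to produce a tree-like structure.
RULES = {"F": "F[+F]F[-F]F"}

print(expand("F", RULES, 1))       # F[+F]F[-F]F
print(len(expand("F", RULES, 2)))  # 61 symbols after two generations
```

Feed the resulting string to a turtle-graphics interpreter and you get a plant; grow the iteration count and the detail compounds, which is exactly why one artist plus a grammar can replace rooms full of hand-modeled shrubs.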
Optimization is a fairly common struggle for game developers. Game artists have been grappling with technical restrictions for ages. Back in the NES days, color itself was a technical resource. It had to be carefully managed because older hardware was unable to visualize many colors on a screen at one time. If you want to check out how a game artist’s tools looked back then, take a look at the Sega Digitizer System.
Plenty of compromises had to be made back in the day when these technical restrictions were largely prevalent throughout the industry. Today, color is no longer a technical issue. But, this poses a question: What other aspects of our game production pipeline will become optimized in the future?
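To make the old color constraint concrete: working inside a limited palette boils down to nearest-color quantization, mapping every pixel to the closest entry of a small fixed palette. The four-color palette below is an arbitrary example, not actual NES values.

```python
# Nearest-color quantization: map each pixel to the closest entry in a
# fixed palette -- the kind of constraint NES-era artists worked under.
# The 4-color palette here is an arbitrary example, not real NES values.

def nearest(color, palette):
    """Return the palette entry closest to `color` by squared RGB distance."""
    return min(palette, key=lambda p: sum((a - b) ** 2 for a, b in zip(color, p)))

def quantize(pixels, palette):
    """Map every pixel to its nearest palette color."""
    return [nearest(px, palette) for px in pixels]

PALETTE = [(0, 0, 0), (255, 255, 255), (255, 0, 0), (0, 0, 255)]

image = [(250, 10, 5), (12, 12, 240), (200, 200, 200)]
print(quantize(image, PALETTE))
# -> [(255, 0, 0), (0, 0, 255), (255, 255, 255)]
```

Every tone the artist painted had to survive this kind of snapping, which is why palettes were planned before a single sprite was drawn.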
There are several stages of the game production pipeline that look to be going away: manual low-to-high-poly workflows, UV unwrapping, LOD creation, and collision generation. In the future, games should simply display anything and everything developers want to portray on a screen, without those intermediate steps.
Many of those items already are being automated today. Developers are automating level-of-detail (LOD) generation and improving UV unwraps. The more this happens, the faster chunks of the pipeline will become obsolete. And frankly, we believe this is a beneficial trend moving forward because these processes have very little artistic value.
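Automatic LOD generation can take many forms; one simple sketch (not how any particular engine or studio does it) is vertex-clustering simplification: snap vertices to a coarse grid, merge the ones that land in the same cell, and discard triangles that collapse into slivers.

```python
# Vertex-clustering mesh simplification: a crude automatic-LOD sketch.
# Vertices are snapped to a coarse grid; triangles whose corners fall
# into the same cell degenerate and are discarded.

def simplify(vertices, triangles, cell=1.0):
    """Return a simplified (vertices, triangles) pair."""
    def cell_of(v):
        return tuple(round(c / cell) for c in v)

    # Map each occupied grid cell to one representative vertex index.
    cells, new_verts, remap = {}, [], {}
    for i, v in enumerate(vertices):
        key = cell_of(v)
        if key not in cells:
            cells[key] = len(new_verts)
            new_verts.append(tuple(c * cell for c in key))  # cell center
        remap[i] = cells[key]

    # Keep only triangles that still have three distinct corners.
    new_tris = []
    for a, b, c in triangles:
        a2, b2, c2 = remap[a], remap[b], remap[c]
        if len({a2, b2, c2}) == 3:
            new_tris.append((a2, b2, c2))
    return new_verts, new_tris

# Two nearly coincident vertices merge, eliminating one sliver triangle.
verts = [(0.0, 0.0, 0.0), (0.05, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
tris = [(0, 1, 3), (1, 2, 3)]
v2, t2 = simplify(verts, tris, cell=1.0)
print(len(v2), len(t2))  # 3 1
```

Production tools use far better error metrics (quadric edge collapse, screen-space error budgets), but the spirit is the same: the machine grinds out the distant-view meshes so artists never have to.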
Capturing reality is nothing new in the world of video games (remember the original Prince of Persia?) but has become a bit controversial within the industry.
Back in 1986, Jordan Mechner, the creator of Prince of Persia, and his brother went outside to snag something other than fresh air. Mechner captured his brother running around a parking lot with a video camera, and then he rotoscoped the footage frame by frame to paint what he had captured into the game.
Thus, the concept behind all those up-and-coming scanning techniques is something the industry has been familiar with for quite some time.
Max Payne (2001) utilized facial scan techniques and Sam Lake’s face model for the titular character with amazing results. Today, these scans can be applied to a character’s entire body—that’s how Norman Reedus and Guillermo del Toro ended up in Death Stranding!