Neural Network Ambient Occlusion
8 May, 2017
Siggraph 2017 is right around the corner, promising a lot of exciting stuff. The biggest stars of the show are the amazing researchers who find new ways to build and render 3d content. Sometimes, however, there are so many talks that it's hard to keep track of everything. Only today did we learn about this amazing research, which was actually presented in 2016 at Siggraph Asia in Macao. The paper describes a new way to use neural networks to calculate ambient occlusion.

It’s a very fresh and interesting look into the future of technical art and how the AI revolution might influence the way we approach 3d. The authors, Daniel Holden, Jun Saito and Taku Komura (the University of Edinburgh and Method Studios), turned to machine learning because it offers two big benefits: it can make things both faster and more accurate.

The methodology of this research is also worth noting: the researchers took scenes from the FPS Black Mesa and rendered them offline with Mental Ray to produce ground-truth ambient occlusion for training. You can read more about it in the released presentation file. While the results are not universally groundbreaking, the proposed approach does prove faster and more accurate than previous methods in many cases.
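As a loose illustration of that pipeline (every name, shape and formula below is hypothetical and simplified, not taken from the paper), one could pair per-pixel screen-space features with offline-rendered AO values and fit a regression model to them:

```python
import numpy as np

# Hypothetical training set: each row is a per-pixel feature vector
# (say, camera-space depth plus a surface normal), each target is the
# ground-truth AO value taken from an offline render.
rng = np.random.default_rng(0)
features = rng.uniform(-1.0, 1.0, size=(1000, 4))  # [depth, nx, ny, nz]

# Stand-in "ground truth": a smooth made-up function of the features,
# playing the role of the Mental Ray AO renders.
targets = 0.5 + 0.25 * np.tanh(features @ np.array([0.3, -0.2, 0.5, 0.1]))

# Fit the simplest possible regressor (least squares) as a stand-in
# for the paper's small neural network.
X = np.hstack([features, np.ones((features.shape[0], 1))])  # bias column
weights, *_ = np.linalg.lstsq(X, targets, rcond=None)

predicted = X @ weights
print(np.abs(predicted - targets).mean())  # small fitting error
```

The point is only the shape of the workflow: expensive offline renders become supervision for a cheap model that later runs per pixel.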

What this means for artists is that instead of simulating an entire physical process like AO, computers are “learning” what makes it “look” the way it does. That can be cheaper to compute while also being more accurate, not to mention the opportunities it opens up for interface simplification and tool accessibility.
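To sketch what that learned lookup might feel like at runtime (a toy forward pass with made-up weights, not the paper's actual network or architecture), imagine a tiny per-pixel MLP mapping depth and normal inputs straight to an occlusion value:

```python
import numpy as np

def toy_ao_network(pixel_features, w1, b1, w2, b2):
    """Toy per-pixel MLP: features -> hidden layer -> AO in [0, 1].

    A hedged illustration only; the real NNAO network has its own
    architecture and trained weights.
    """
    hidden = np.maximum(pixel_features @ w1 + b1, 0.0)    # ReLU layer
    return 1.0 / (1.0 + np.exp(-(hidden @ w2 + b2)))      # sigmoid -> [0, 1]

# Random stand-in weights; a real system would load trained ones.
rng = np.random.default_rng(1)
w1 = rng.normal(size=(4, 8)) * 0.5
b1 = np.zeros(8)
w2 = rng.normal(size=(8,)) * 0.5
b2 = 0.0

# A fake 2x2 screen buffer holding [depth, nx, ny, nz] per pixel.
buffer = rng.uniform(-1.0, 1.0, size=(2, 2, 4))
ao = toy_ao_network(buffer.reshape(-1, 4), w1, b1, w2, b2).reshape(2, 2)
print(ao)  # every value lies in [0, 1]
```

The whole physical simulation collapses into a few matrix multiplies per pixel, which is exactly why this kind of approach can be so cheap at runtime.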

Andrew Maximov

This year the same team is going to show how neural networks can be used to animate characters. We talked briefly about this tech here.
