Paper: Disney Uses AI To Render Clouds

14 November, 2017

Check out an outstanding paper on synthesizing multi-scattered illumination in clouds using deep radiance-predicting neural networks (RPNN). Simon Kallweit, Thomas Müller, Brian McWilliams, Markus Gross, and Jan Novák from Disney combined Monte Carlo integration with data-driven radiance predictions, accurately reproducing edge-darkening effects, the silver lining, and the whiteness of the inner part of the cloud.

Let’s start by watching another amazing breakdown by Two Minute Papers to understand the idea:


We present a technique for efficiently synthesizing images of atmospheric clouds using a combination of Monte Carlo integration and neural networks. The intricacies of Lorenz-Mie scattering and the high albedo of cloud-forming aerosols make rendering of clouds–e.g. the characteristic silverlining and the ‘whiteness’ of the inner body–challenging for methods based solely on Monte Carlo integration or diffusion theory. We approach the problem differently. Instead of simulating all light transport during rendering, we pre-learn the spatial and directional distribution of radiant flux from tens of cloud exemplars. To render a new scene, we sample visible points of the cloud and, for each, extract a hierarchical 3D descriptor of the cloud geometry with respect to the shading location and the light source. The descriptor is input to a deep neural network that predicts the radiance function for each shading configuration. We make the key observation that progressively feeding the hierarchical descriptor into the network enhances the network’s ability to learn faster and predict with higher accuracy while using fewer coefficients. We also employ a block design with residual connections to further improve performance. A GPU implementation of our method synthesizes images of clouds that are nearly indistinguishable from the reference solution within seconds to minutes. Our method thus represents a viable solution for applications such as cloud design and, thanks to its temporal stability, for high-quality production of animated content.
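The "hierarchical 3D descriptor" mentioned in the abstract can be pictured as density samples taken on a small stencil around the shading point, repeated at progressively coarser scales so the network sees both local detail and the cloud's large-scale shape. The sketch below illustrates that idea under assumptions of mine (stencil size, level count, and the `density` callable are illustrative, not the paper's exact construction):

```python
import numpy as np

def hierarchical_descriptor(density, point, levels=4, stencil=3):
    """Sample cloud density around `point` at progressively coarser scales.

    `density` is a callable mapping an (N, 3) array of positions to densities.
    Stencil size and level count here are illustrative assumptions.
    """
    # Regular stencil of offsets centred on the shading point, e.g. 3x3x3.
    axis = np.arange(stencil) - stencil // 2
    offsets = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"),
                       axis=-1).reshape(-1, 3)
    descriptor = []
    for level in range(levels):
        spacing = 2.0 ** level              # each level doubles the sampled extent
        samples = point + offsets * spacing
        descriptor.append(density(samples))
    return np.concatenate(descriptor)       # (levels * stencil**3,) feature vector

# Toy density field: a Gaussian "cloud" centred at the origin.
blob = lambda p: np.exp(-np.sum(p**2, axis=-1))
d = hierarchical_descriptor(blob, np.zeros(3))   # 4 levels * 27 samples = 108 values
```

In the paper the descriptor is additionally oriented with respect to the light direction; this sketch omits that to keep the structure clear.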

The paper “Deep Scattering: Rendering Atmospheric Clouds with Radiance-Predicting Neural Networks” and some additional files are available here.
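To make the abstract's "progressively feeding the hierarchical descriptor" and "block design with residual connections" concrete, here is a minimal numpy forward-pass sketch: each descriptor level is injected at a successively deeper residual block instead of concatenating everything at the input. Widths, depth, level count, and the random (untrained) weights are all assumptions for illustration; a real RPNN would be trained on cloud exemplars:

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(n_in, n_out):
    # Small random weights stand in for trained parameters.
    return rng.normal(0, 0.1, (n_in, n_out)), np.zeros(n_out)

def relu(x):
    return np.maximum(x, 0.0)

class ProgressiveRPNN:
    """Sketch of a radiance-predicting network that feeds each descriptor
    level into a successively deeper residual block (illustrative config)."""
    def __init__(self, level_dim=27, levels=4, width=64):
        self.proj = [linear(level_dim, width) for _ in range(levels)]
        self.blk1 = [linear(width, width) for _ in range(levels)]
        self.blk2 = [linear(width, width) for _ in range(levels)]
        self.head = linear(width, 1)         # scalar radiance per sample

    def __call__(self, descriptor_levels):
        x = np.zeros((descriptor_levels[0].shape[0], self.head[0].shape[0]))
        for (Wp, bp), (W1, b1), (W2, b2), lvl in zip(
                self.proj, self.blk1, self.blk2, descriptor_levels):
            x = x + lvl @ Wp + bp                       # inject the next level
            x = relu(x + relu(x @ W1 + b1) @ W2 + b2)   # residual block
        return x @ self.head[0] + self.head[1]

net = ProgressiveRPNN()
levels = [rng.normal(size=(8, 27)) for _ in range(4)]  # 8 shading points, 4 levels
radiance = net(levels)                                  # shape (8, 1)
```

The appeal of this design, per the paper, is that the staged injection lets the network learn faster and predict more accurately with fewer coefficients than feeding the full descriptor at once.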
