Denoising with Kernel Prediction and Asymmetric Loss Functions
13 August, 2018
News

Check out a new paper from Disney's research team that introduces a modular convolutional architecture for denoising rendered images. The method combines kernel-predicting networks with a number of task-specific modules and optimizes the assembly using an asymmetric loss. The team reports that the approach compares favorably to state-of-the-art denoisers in detail preservation, low-frequency noise removal, and temporal stability.

We present a modular convolutional architecture for denoising rendered images. We expand on the capabilities of kernel-predicting networks by combining them with a number of task-specific modules, and optimizing the assembly using an asymmetric loss. The source-aware encoder—the first module in the assembly—extracts low-level features and embeds them into a common feature space, enabling quick adaptation of a trained network to novel data. The spatial and temporal modules extract abstract, high-level features for kernel-based reconstruction, which is performed at three different spatial scales to reduce low-frequency artifacts. The complete network is trained using a class of asymmetric loss functions that are designed to preserve details and provide the user with a direct control over the variance-bias trade-off during inference. We also propose an error-predicting module for inferring reconstruction error maps that can be used to drive adaptive sampling. Finally, we present a theoretical analysis of convergence rates of kernel-predicting architectures, shedding light on why kernel prediction performs better than synthesizing the colors directly, complementing the empirical evidence presented in this and previous works. We demonstrate that our networks attain results that compare favorably to state-of-the-art methods in terms of detail preservation, low-frequency noise removal, and temporal stability on a variety of production and academic datasets.
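To make the kernel-prediction idea concrete: instead of regressing denoised colors directly, the network outputs a filter kernel per pixel, and each output pixel is reconstructed as a weighted average of its noisy neighborhood. Here is a minimal numpy sketch of that reconstruction step, assuming softmax-normalized kernels (the kernel values below would come from a trained network; here they are just placeholders):

```python
import numpy as np

def apply_predicted_kernels(noisy, kernels):
    """Reconstruct an image from per-pixel predicted kernels.

    noisy   : (H, W) noisy radiance (single channel for simplicity)
    kernels : (H, W, k, k) raw per-pixel kernel logits, e.g. from a network
    """
    H, W = noisy.shape
    k = kernels.shape[-1]
    r = k // 2

    # Softmax-normalize each pixel's kernel so the weights are positive and
    # sum to one: the output is then a convex combination of input radiances,
    # which is one reason kernel prediction is more stable than predicting
    # colors directly.
    flat = kernels.reshape(H, W, -1)
    flat = np.exp(flat - flat.max(axis=-1, keepdims=True))
    flat /= flat.sum(axis=-1, keepdims=True)
    weights = flat.reshape(H, W, k, k)

    # Weighted average over each pixel's k x k neighborhood.
    padded = np.pad(noisy, r, mode="edge")
    out = np.zeros_like(noisy)
    for dy in range(k):
        for dx in range(k):
            out += weights[:, :, dy, dx] * padded[dy:dy + H, dx:dx + W]
    return out
```

Because the weights form a convex combination, a constant input is reproduced exactly regardless of the predicted kernels, and the reconstruction can never produce values outside the range of the noisy neighborhood.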

Disney 
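The "asymmetric loss" in the abstract refers to penalizing reconstruction errors differently depending on which side of the reference they fall. One plausible sketch of the idea (the exact formulation in the paper may differ): residuals that land on the opposite side of the reference from the noisy input are penalized harder, which discourages the denoiser from inventing values not supported by the input (bias) while tolerating some leftover noise (variance). The `slope` parameter below is the user-facing variance-bias knob the abstract mentions; the name is my own:

```python
import numpy as np

def asymmetric_l1(denoised, reference, noisy, slope=2.0):
    """Asymmetric L1 sketch.

    Residuals on the opposite side of the reference from the noisy input
    are weighted `slope` times harder; slope=1 recovers plain L1.
    """
    err = denoised - reference
    # True where the denoised value overshoots past the reference,
    # away from the noisy input.
    opposite = (err * (noisy - reference)) < 0.0
    weight = np.where(opposite, slope, 1.0)
    return float(np.mean(weight * np.abs(err)))
```

At inference time, a network trained with a range of `slope` values could expose that parameter directly, letting the user trade residual noise against over-smoothing without retraining.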

For more details, follow this link to the full paper or attend this year's SIGGRAPH. 
