Soft 3D Reconstruction for View Synthesis
11 October, 2017

Eric Penner and Li Zhang from Google have revealed a novel algorithm for view synthesis that uses a soft 3D reconstruction to improve quality, continuity, and robustness. In practice, the new approach yields more detail and fewer artifacts.

Our main contribution is the formulation of a soft 3D representation that preserves depth uncertainty through each stage of 3D reconstruction and rendering. We show that this representation is beneficial throughout the view synthesis pipeline. During view synthesis, it provides a soft model of scene geometry that provides continuity across synthesized views and robustness to depth uncertainty.
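The core idea of a "soft" scene model can be illustrated as follows: instead of committing to a single hard depth per pixel, keep a probability distribution over candidate depth planes and blend colors by those weights. This is a simplified sketch of that concept, not the paper's exact formulation; the function names, the softmax-of-cost conversion, and the `sigma` temperature parameter are my own assumptions for illustration.

```python
import numpy as np

def soft_depth_distribution(costs, sigma=1.0):
    """Convert per-pixel matching costs over D depth planes (H, W, D)
    into a soft probability distribution over depth via a softmax of
    negative cost. (Illustrative assumption, not the paper's formula.)"""
    logits = -costs / sigma
    logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=-1, keepdims=True)

def soft_composite(colors, depth_prob):
    """Blend per-plane colors (H, W, D, 3) by the soft depth weights
    (H, W, D), rather than picking one hard winning depth per pixel.
    Uncertain pixels smoothly mix several depth hypotheses."""
    return (colors * depth_prob[..., None]).sum(axis=2)

# Usage: uniform costs give a uniform depth distribution, so the
# composite is simply the average color across depth planes.
costs = np.zeros((2, 2, 4))            # H=2, W=2, D=4 depth planes
prob = soft_depth_distribution(costs)  # each pixel: [0.25, 0.25, 0.25, 0.25]
colors = np.ones((2, 2, 4, 3))
out = soft_composite(colors, prob)     # shape (2, 2, 3)
```

Because depth uncertainty survives into the blend weights, nearby synthesized views vary smoothly instead of flickering where depth estimation is ambiguous, which is the continuity benefit the authors describe.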

During 3D reconstruction, the same robust estimates of scene visibility can be applied iteratively to improve depth estimation around object edges. Our algorithm is based entirely on O(1) filters, making it conducive to acceleration and it works with structured or unstructured sets of input views. We compare with recent classical and learning-based algorithms on plenoptic lightfields, wide baseline captures, and lightfield videos produced from camera arrays.
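An "O(1) filter" here means a filter whose per-pixel cost is independent of the window size, which is what makes the pipeline easy to accelerate. The classic example is a box filter built on a summed-area table (integral image); the sketch below, with names of my own choosing, shows how each output value needs only four lookups regardless of the radius `r`:

```python
import numpy as np

def box_filter(img, r):
    """Box (mean) filter with O(1) work per pixel: a summed-area table
    lets any window sum be read with four corner lookups, so the cost
    does not grow with the window radius r."""
    h, w = img.shape
    # Integral image with a zero row/column border.
    ii = np.zeros((h + 1, w + 1), dtype=np.float64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    out = np.empty((h, w), dtype=np.float64)
    for y in range(h):
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        for x in range(w):
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            area = (y1 - y0) * (x1 - x0)
            # Window sum from four corners of the integral image.
            s = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
            out[y, x] = s / area
    return out

# Usage: averaging a constant image leaves it unchanged for any radius.
print(box_filter(np.ones((4, 4)), r=2))
```

Filters with this property (box, and guided filters built from boxes) are standard tools for aggregating stereo matching costs, which fits the paper's claim that the whole pipeline stays cheap on structured or unstructured input views.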

Eric Penner and Li Zhang 

You can find more details on the project and read the paper here.  
