Soft 3D Reconstruction for View Synthesis
11 October, 2017

Eric Penner and Li Zhang from Google have revealed a novel view synthesis algorithm that uses a soft 3D reconstruction to improve quality, continuity, and robustness. In practice, this means more detail and fewer artifacts.

Our main contribution is the formulation of a soft 3D representation that preserves depth uncertainty through each stage of 3D reconstruction and rendering. We show that this representation is beneficial throughout the view synthesis pipeline. During view synthesis, it provides a soft model of scene geometry that provides continuity across synthesized views and robustness to depth uncertainty.

During 3D reconstruction, the same robust estimates of scene visibility can be applied iteratively to improve depth estimation around object edges. Our algorithm is based entirely on O(1) filters, making it conducive to acceleration, and it works with structured or unstructured sets of input views. We compare with recent classical and learning-based algorithms on plenoptic lightfields, wide baseline captures, and lightfield videos produced from camera arrays.

Eric Penner and Li Zhang 
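To make the "soft model of scene geometry" concrete, here is a minimal sketch of the core idea: keep a per-pixel distribution of depth evidence over discrete depth planes instead of a single hard depth, and derive a soft visibility for each plane from the occupancy of the planes in front of it. The function name, shapes, and normalization below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def soft_visibility(votes):
    """Illustrative sketch (not the paper's code): votes is a (D,) array of
    non-negative depth evidence for one pixel, ordered front to back.
    Returns the soft visibility of each plane, i.e. the probability that
    no nearer plane occludes it."""
    votes = np.asarray(votes, dtype=float)
    # Normalize evidence into per-plane occupancy in [0, 1].
    occ = np.clip(votes / max(votes.sum(), 1e-9), 0.0, 1.0)
    # Visibility of plane d is the product of (1 - occupancy) over all
    # planes nearer than d; the front plane is fully visible.
    vis = np.concatenate(([1.0], np.cumprod(1.0 - occ)[:-1]))
    return vis

# Evidence concentrated at the second plane mostly occludes the planes
# behind it, but some visibility leaks through; this softness is what
# gives continuity across synthesized views under depth uncertainty.
vis = soft_visibility([0.1, 0.8, 0.1])  # → [1.0, 0.9, 0.18]
```

Because visibility is a smooth function of the vote distribution, small errors in depth estimation degrade the rendered view gracefully rather than producing hard edge artifacts.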

You can find more details on the project and read the paper here.
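The abstract only states that the algorithm is built from O(1) filters, i.e. filters whose per-pixel cost is independent of the filter radius. One standard construction with that property is a box filter over a summed-area table (integral image); the sketch below shows that classic technique as an assumed example, not code from the paper.

```python
import numpy as np

def box_filter(img, r):
    """Average over a (2r+1) x (2r+1) window, clamped at image borders.
    Each output pixel costs a constant four lookups in the summed-area
    table, regardless of the radius r."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    # Summed-area table with a zero top row / left column so window
    # sums become a simple four-corner difference.
    sat = np.zeros((h + 1, w + 1))
    sat[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    out = np.empty_like(img)
    for y in range(h):
        y0, y1 = max(y - r, 0), min(y + r + 1, h)
        for x in range(w):
            x0, x1 = max(x - r, 0), min(x + r + 1, w)
            s = sat[y1, x1] - sat[y0, x1] - sat[y1, x0] + sat[y0, x0]
            out[y, x] = s / ((y1 - y0) * (x1 - x0))
    return out

# Filtering a constant image leaves it unchanged; on a 3x3 ramp the
# center pixel becomes the mean of all nine values.
smooth = box_filter(np.arange(9).reshape(3, 3), r=1)  # smooth[1, 1] == 4.0
```

Constant per-pixel cost is what makes such filters "conducive to acceleration": the total work stays linear in image size even for very large smoothing radii.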
