Check out a powerful neural network that renders 3D scenes from sets of 2D images, letting you explore captured spaces as navigable 3D environments. The workflow is straightforward: you prepare a set of images of a scene and hand them to the model to study. Then, working from a set of points, it picks the right viewpoint, estimates depth, and fills in missing details without producing too many artifacts.
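To make that workflow a bit more concrete, here is a minimal sketch in Python (NumPy) of the kind of plumbing such a point-based pipeline rests on: calibrated cameras and a point cloud go in, the input views closest to the target viewpoint are selected, and the points are projected into the target camera. The function names, camera model, and parameters here are illustrative assumptions for the sake of the example, not the authors' actual API.

```python
import numpy as np

def select_nearest_cameras(target_pos, input_cam_positions, k=4):
    """Pick the k input cameras closest to the target viewpoint
    (a simple stand-in for a smarter camera-selection heuristic)."""
    dists = np.linalg.norm(input_cam_positions - target_pos, axis=1)
    return np.argsort(dists)[:k]

def project_points(points_world, K, R, t):
    """Project Nx3 world-space points into a pinhole camera (K, R, t).
    Returns Nx2 pixel coordinates and per-point depth."""
    cam = points_world @ R.T + t            # world -> camera space
    depth = cam[:, 2]
    uv = cam @ K.T                          # camera -> image plane
    uv = uv[:, :2] / uv[:, 2:3]
    return uv, depth

# Toy usage: one target view, a few input cameras, a small point cloud.
rng = np.random.default_rng(0)
points = rng.uniform(-1, 1, size=(1000, 3)) + np.array([0.0, 0.0, 5.0])
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)

input_cams = rng.uniform(-2, 2, size=(8, 3))
chosen = select_nearest_cameras(np.zeros(3), input_cams, k=4)
uv, depth = project_points(points, K, R, t)
print("selected input cameras:", chosen)
print("first projected point:", uv[0], "depth:", depth[0])
```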
The system handles complex scenes and can even recreate fine structures such as vegetation, staircase railings, and ornaments.
"We introduce a general approach that is initialized with MVS, but allows further optimization of scene properties in the space of input views, including depth and reprojected features, resulting in improved novel-view synthesis," states the abstract. "A key element of our approach is our new differentiable point-based pipeline, based on bi-directional Elliptical Weighted Average splatting, a probabilistic depth test and effective camera selection. We use these elements together in our neural renderer, which outperforms all previous methods both in quality and speed in almost all scenes we tested. Our pipeline can be applied to multi-view harmonization and stylization in addition to novel-view synthesis."
What is more, the model supports style transfer, so you can recreate paintings as 3D scenes or apply a given style to an entire 3D environment. Where would you use this model? How could the neural network be used in games? Personally, I would love to recreate a couple of scenes from my favorite TV shows and then assemble them into a VR app.
You can find the paper and the files here. Don't forget to join our new Reddit page and our new Telegram channel, and follow us on Instagram and Twitter, where we share breakdowns, the latest news, awesome artworks, and more.