Google's New AI for Generating 3D Flythroughs from 2D Images

The model was trained only on still images and uses generated depth maps to produce high-quality results.

During the ECCV 2022 conference, UC Berkeley's Angjoo Kanazawa and Google Research's Zhengqi Li, Qianqian Wang, and Noah Snavely presented an impressive new AI capable of turning 2D images into gorgeous 3D flythroughs. Meet InfiniteNature-Zero, a model that produces high-resolution, high-quality flythroughs from a single seed image, despite being trained only on still photographs.

According to the team, the system first generates a depth map using single-image depth prediction methods. It then uses the depth map to render the image forward to a new camera viewpoint, producing a new image and depth map from that viewpoint. An image refinement network takes this low-quality intermediate image and outputs a complete, high-quality image with a corresponding depth map. The whole process can then be repeated with the synthesized image as the new starting point, resulting in a high-quality video. A simplified sketch of this loop is shown below.
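To make the render-refine loop concrete, here is a minimal Python sketch of the process described above. All function names and the stand-in implementations (estimate_depth, render_to_new_view, refine, generate_flythrough) are illustrative placeholders, not the actual InfiniteNature-Zero code; the real system uses learned networks for depth prediction and refinement.

```python
import numpy as np

def estimate_depth(image: np.ndarray) -> np.ndarray:
    """Stand-in for a single-image depth prediction network."""
    return np.ones(image.shape[:2], dtype=np.float32)

def render_to_new_view(image, depth, camera):
    """Stand-in for forward-warping the image and depth to a new camera pose.

    A real implementation would reproject pixels using the depth map and
    camera motion, leaving holes and stretched regions to be filled later.
    """
    return image.copy(), depth.copy()

def refine(image, depth):
    """Stand-in for the refinement network that inpaints missing content
    and outputs a complete, high-quality image with a matching depth map."""
    return image, depth

def generate_flythrough(seed_image, num_frames=30, camera_step=0.1):
    """Iteratively synthesize a flythrough from a single seed image."""
    frames = [seed_image]
    image, depth = seed_image, estimate_depth(seed_image)
    for i in range(1, num_frames):
        camera = {"forward_translation": i * camera_step}  # toy camera motion
        warped_image, warped_depth = render_to_new_view(image, depth, camera)
        # The refined output becomes the starting point for the next step.
        image, depth = refine(warped_image, warped_depth)
        frames.append(image)
    return frames

# Example usage with a dummy seed image.
frames = generate_flythrough(np.zeros((256, 256, 3), dtype=np.float32))
```

Because each iteration hands its refined image and depth map back into the loop, the camera can keep moving forward indefinitely, which is what allows the model to keep inventing new scenery along the way.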

"Because we refine both the image and the depth map, this process can be iterated as many times as desired – the system automatically learns to generate new scenery, like mountains, islands, and oceans, as the camera moves further into the scene," comments the team.

You can learn more about the AI here. Also, don't forget to join our Reddit page and our Telegram channel, follow us on Instagram and Twitter, where we share breakdowns, the latest news, awesome artworks, and more. 
