The model can generate endless landscapes in countless variations.
Researchers presented SceneDreamer – a generative model that synthesizes unbounded 3D scenes from 2D image collections. It produces diverse landscapes across different styles, with 3D consistency, well-defined depth, and free camera trajectories.
According to the creators, the model can produce endless scenes in many variations.
SceneDreamer's learning paradigm comprises a 3D scene representation, a generative scene parameterization, and a renderer that can leverage the knowledge from 2D images.
"Our framework starts from an efficient bird's-eye-view (BEV) representation generated from simplex noise, which consists of a height field and a semantic field. The height field represents the surface elevation of 3D scenes, while the semantic field provides detailed scene semantics."
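To make the BEV idea concrete, here is a minimal sketch in plain Python. It substitutes a simple hash-based value noise for the simplex noise used in the paper, and the semantic labels (water/grass/rock) and thresholds are illustrative assumptions, not the categories SceneDreamer actually uses:

```python
import math
import random

def value_noise_2d(x, y, seed=0):
    """Deterministic pseudo-random noise in [0, 1).

    Stands in for the simplex noise used by SceneDreamer; a real
    implementation would use a proper simplex/Perlin noise library.
    """
    def hashed(ix, iy):
        rng = random.Random((ix * 73856093) ^ (iy * 19349663) ^ seed)
        return rng.random()

    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    # Smoothstep interpolation between the four corner values.
    sx, sy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)
    top = hashed(ix, iy) + sx * (hashed(ix + 1, iy) - hashed(ix, iy))
    bot = hashed(ix, iy + 1) + sx * (hashed(ix + 1, iy + 1) - hashed(ix, iy + 1))
    return top + sy * (bot - top)

def make_bev(size=64, scale=0.1):
    """Build a toy BEV scene: a height field plus a semantic field.

    The height field stores surface elevation per 2D cell; the semantic
    field is derived here by thresholding elevation (an assumption for
    illustration only).
    """
    height = [[value_noise_2d(i * scale, j * scale) for j in range(size)]
              for i in range(size)]

    def label(h):
        return "water" if h < 0.35 else "grass" if h < 0.7 else "rock"

    semantic = [[label(h) for h in row] for row in height]
    return height, semantic

height, semantic = make_bev(8)
```

Because both fields come from the same noise function, any crop of an arbitrarily large coordinate range yields a consistent landscape patch, which is what makes the representation "unbounded."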
The researchers also proposed a generative neural hash grid to parameterize the latent space given 3D positions and the scene semantics to encode generalizable features across scenes and align content.
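The general hash-grid mechanism can be sketched as follows. This is a toy, Instant-NGP-style multi-resolution lookup in pure Python, not SceneDreamer's actual generative parameterization: the table sizes, resolutions, and the way the semantic ID is folded into the hash are all assumptions for illustration, and the real model conditions the tables on a learned latent rather than random initialization:

```python
import random

# Large primes for spatial hashing (Instant-NGP convention).
PRIMES = (1, 2654435761, 805459861)

class HashGrid:
    """Toy multi-resolution hash grid over 3D positions.

    Each level covers space at a different resolution; a 3D position is
    snapped to a voxel, hashed (together with a semantic ID) into a
    fixed-size feature table, and the per-level features are concatenated.
    """
    def __init__(self, table_size=2 ** 14, feat_dim=4, levels=3, seed=0):
        rng = random.Random(seed)
        self.table_size = table_size
        self.feat_dim = feat_dim
        self.levels = levels
        # One feature table per resolution level (random init stands in
        # for learned parameters).
        self.tables = [
            [[rng.uniform(-1e-2, 1e-2) for _ in range(feat_dim)]
             for _ in range(table_size)]
            for _ in range(levels)
        ]

    def _hash(self, ix, iy, iz, semantic_id):
        h = (ix * PRIMES[0]) ^ (iy * PRIMES[1]) ^ (iz * PRIMES[2]) ^ semantic_id
        return h % self.table_size

    def encode(self, x, y, z, semantic_id=0):
        """Concatenate nearest-voxel features from every resolution level."""
        feats = []
        for lvl in range(self.levels):
            res = 16 * (2 ** lvl)  # resolution doubles per level
            ix, iy, iz = int(x * res), int(y * res), int(z * res)
            feats.extend(self.tables[lvl][self._hash(ix, iy, iz, semantic_id)])
        return feats

grid = HashGrid()
feat = grid.encode(0.3, 0.5, 0.7, semantic_id=2)
```

Conditioning the hash on scene semantics means two positions with the same coordinates but different semantics pull different features, which is how content can be aligned across scenes while remaining generalizable.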
You can read more about the model here.