Waymo Presents Block-NeRF: Scalable Large Scene Neural View Synthesis

Waymo has demonstrated how large-scale environments can be represented with Block-NeRF.


Waymo, an autonomous driving technology company, presented Block-NeRF – a variant of Neural Radiance Fields that can represent large-scale environments.

The company says that when scaling NeRF to render large city-scale scenes, it is important to decompose the scene into individually trained NeRFs. 

"This decomposition decouples rendering time from scene size, enables rendering to scale to arbitrarily large environments, and allows per-block updates of the environment," Weymo says.

The company says it adopted several architectural changes to make NeRF robust to data captured over many months under varying environmental conditions. It added appearance embeddings, learned pose refinement, and controllable exposure to each individual NeRF, and introduced a procedure for aligning appearance between adjacent NeRFs so that they can be seamlessly combined.
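The appearance embeddings mentioned above follow the NeRF-in-the-Wild idea: each training image gets its own learned latent vector, and the radiance network is conditioned on it so that lighting and weather differences between capture sessions don't have to be explained by geometry. A minimal sketch of that conditioning (the toy two-layer network, dimensions, and initialization below are illustrative assumptions, not Waymo's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

N_IMAGES, EMB_DIM, POS_DIM, HIDDEN = 100, 32, 63, 128

# One learnable appearance vector per training image; during training
# these would be optimized jointly with the network weights.
appearance_table = rng.normal(size=(N_IMAGES, EMB_DIM)) * 0.5

# Toy weights standing in for one block's radiance MLP.
W1 = rng.normal(size=(POS_DIM + EMB_DIM, HIDDEN)) * 0.1
W2 = rng.normal(size=(HIDDEN, 3)) * 0.1

def radiance(pos_features, image_id):
    """Predict RGB for a batch of encoded sample positions, conditioned
    on the appearance embedding of the image the rays came from."""
    emb = appearance_table[image_id]
    emb_tiled = np.broadcast_to(emb, (pos_features.shape[0], EMB_DIM))
    x = np.concatenate([pos_features, emb_tiled], axis=1)
    h = np.maximum(x @ W1, 0.0)          # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2)))  # sigmoid RGB in [0, 1]
```

Querying the same 3D points with two different `image_id`s yields different colors, which is exactly the degree of freedom that lets one block render the same street under different lighting.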

"We build a grid of Block-NeRFs from 2.8 million images to create the largest neural scene representation to date, capable of rendering an entire neighborhood of San Francisco."

It seems that further development of the technology might one day make it possible to explore the world without leaving home.

Check out Waymo's research on its website, and don't forget to join our new Reddit page and our new Telegram channel, and follow us on Instagram and Twitter, where we are sharing breakdowns, the latest news, awesome artworks, and more.
