Google's New Neural Network for Fast Rendering on Mobile Phones

This new NeRF is based on textured polygons and can efficiently synthesize images with standard rendering pipelines on a wide range of devices.

A team of scientists from Google Research has presented MobileNeRF, a new neural network based on textured polygons that can efficiently synthesize images with standard rendering pipelines. According to the team, their NeRF is represented as a set of polygons with textures representing binary opacities and feature vectors.
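Conceptually, this is an ordinary textured triangle mesh whose textures store learned quantities instead of colors. The following Python/NumPy sketch is only an illustration of that idea; the field names, the 8-dimensional feature size, and the texture resolution are assumptions made here, not values taken from the paper.

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class MobileNeRFScene:
        """Hypothetical container for a MobileNeRF-style textured-polygon scene.

        The mesh itself is a standard triangle mesh; the textures hold a binary
        opacity and a small learned feature vector per texel instead of RGB.
        """
        vertices: np.ndarray         # (V, 3) float32 vertex positions
        triangles: np.ndarray        # (T, 3) int32 indices into `vertices`
        uvs: np.ndarray              # (V, 2) float32 texture coordinates
        alpha_texture: np.ndarray    # (H, W) uint8, binary opacity (0 or 255)
        feature_texture: np.ndarray  # (H, W, 8) uint8, quantized feature vectors

    # A tiny placeholder scene (sizes are arbitrary, purely for illustration).
    scene = MobileNeRFScene(
        vertices=np.zeros((4, 3), np.float32),
        triangles=np.array([[0, 1, 2], [0, 2, 3]], np.int32),
        uvs=np.zeros((4, 2), np.float32),
        alpha_texture=np.zeros((1024, 1024), np.uint8),
        feature_texture=np.zeros((1024, 1024, 8), np.uint8),
    )

Because every piece of this representation is standard mesh-and-texture data, it can be loaded by existing graphics engines without custom volume-rendering code.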

This approach enables the scene to be rendered with the traditional polygon rasterization pipeline, which provides massive pixel-level parallelism and achieves interactive frame rates on a wide range of computing platforms, including mobile phones.

"We represent the scene as a triangle mesh textured by deep features," commented the team on the network's rendering pipeline. "We first rasterize the mesh to a deferred rendering buffer. For each visible fragment, we execute a neural deferred shader that converts the feature and view direction to the corresponding output pixel color."

To train the network, the team initializes the mesh as a regular grid and uses MLPs to represent the features and opacity at any point on the mesh. In a later training stage, the opacities are binarized, and the features are super-sampled for anti-aliasing. Finally, the team extracts the triangle mesh and bakes the features and opacities into texture images.
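The final baking step amounts to evaluating the learned values at every texel and writing them into ordinary 8-bit texture images. Below is a minimal sketch of that quantization, assuming an 8-dimensional feature already scaled to [0, 1] and a hard 0.5 threshold for the binary alpha; both choices are illustrative assumptions rather than the paper's exact procedure.

    import numpy as np

    def bake_textures(features, opacity):
        """Quantize learned per-texel values into standard 8-bit textures.

        features : (H, W, 8) float feature values, assumed to lie in [0, 1]
        opacity  : (H, W)    float opacity values in [0, 1]
        returns  : (feature_texture, alpha_texture) as uint8 arrays
        """
        feature_texture = np.round(np.clip(features, 0.0, 1.0) * 255).astype(np.uint8)
        # Binarize opacity: every texel is either fully opaque or fully transparent,
        # so rendering only needs alpha testing, not alpha blending.
        alpha_texture = np.where(opacity >= 0.5, 255, 0).astype(np.uint8)
        return feature_texture, alpha_texture

    # Example on random data, just to show the resulting shapes and dtypes.
    feat, alpha = bake_textures(np.random.rand(1024, 1024, 8),
                                np.random.rand(1024, 1024))
    print(feat.dtype, alpha.dtype)  # uint8 uint8

Storing everything as plain 8-bit textures is what lets the scene be shipped and sampled like any conventional game asset.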

You can learn more and try out MobileNeRF here.
