MobileNeRF, a new NeRF variant based on textured polygons, can efficiently synthesize images with standard rendering pipelines on a wide range of devices.
This representation can be rendered with the traditional polygon rasterization pipeline, which provides massive pixel-level parallelism and achieves interactive frame rates on a wide range of compute platforms, including mobile phones.
"We represent the scene as a triangle mesh textured by deep features," commented the team on the network's rendering pipeline. "We first rasterize the mesh to a deferred rendering buffer. For each visible fragment, we execute a neural deferred shader that converts the feature and view direction to the corresponding output pixel color."
To train the representation, the team initializes the mesh as a regular grid and uses MLPs to model the features and opacity at any point on the mesh. In a later training stage, opacities are binarized, and features are super-sampled for anti-aliasing. Finally, they extract the triangle mesh and bake the features and opacities into texture images.
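The baking step can likewise be sketched under stated assumptions: the stand-in feature MLP, texture resolution, and quantization scheme below are hypothetical, but the idea is that once features are stored in ordinary 8-bit textures, rendering no longer needs to query the MLP at all.

```python
# A minimal sketch of baking learned features into texture images.
# The stand-in MLP, texture size, and quantization range are assumed.
import numpy as np

TEX = 256  # texture resolution per axis (assumed)

def feature_mlp(points):
    """Stand-in for a trained MLP mapping surface points to features."""
    return np.sin(points @ np.ones((3, 8)))  # (N, 8) toy features in [-1, 1]

def bake_texture(surface_points):
    """Evaluate features at texel sample points and quantize them into
    8-bit texture images, replacing MLP queries at render time."""
    feats = feature_mlp(surface_points)                 # (TEX*TEX, 8)
    feats = np.clip((feats + 1.0) * 0.5, 0.0, 1.0)      # map [-1, 1] -> [0, 1]
    quantized = np.round(feats * 255).astype(np.uint8)  # 8-bit texels
    return quantized.reshape(TEX, TEX, 8)

# Toy usage: texel sample points on a unit-square patch of the mesh.
u, v = np.meshgrid(np.linspace(0, 1, TEX), np.linspace(0, 1, TEX))
points = np.stack([u, v, np.zeros_like(u)], axis=-1).reshape(-1, 3)
texture = bake_texture(points)
print(texture.shape, texture.dtype)  # (256, 256, 8) uint8
```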
You can learn more and try out MobileNeRF here.