The tool can train and generate a 3D object in seconds.
The NVIDIA Research team has unveiled Instant NeRF, a neat new technique that can turn several 2D images into a 3D scene. It builds on Neural Radiance Fields, a neural network approach that learns to reconstruct a 3D scene from a handful of 2D images taken at different angles. According to NVIDIA, Instant NeRF is one of the fastest NeRFs in existence: it needs just seconds to train on a few dozen still photos and can then render the resulting 3D scene in tens of milliseconds.
The team adds that Instant NeRF could be used to create avatars or scenes for virtual worlds, to capture video conference participants and their environments in 3D, or to reconstruct scenes for 3D digital maps.
Collecting data for a NeRF, adds NVIDIA, is a bit like a photographer trying to capture a celebrity’s outfit from every angle: the neural network requires a few dozen images taken from multiple positions around the scene, as well as the camera position of each shot.
The AI is also capable of filling in the blanks, reconstructing the scene by predicting the color of light radiating in any direction, from any point in 3D space. The technique can even work around occlusions – cases where objects visible in some images are blocked by obstructions in others.
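The article doesn’t include code, but the core idea it describes – a function that predicts color and density at any 3D point, composited along camera rays – can be sketched in a few lines. Everything below is an illustrative assumption (a hand-written toy field standing in for the trained neural network, a reddish unit sphere as the scene), not NVIDIA’s actual implementation:

```python
import math

def radiance_field(point, view_dir):
    """Stand-in for the trained network: in a real NeRF this is an MLP.
    Assumption for illustration: a reddish translucent unit sphere at the
    origin. Returns (RGB color, volume density) for a 3D point."""
    x, y, z = point
    density = 5.0 if x * x + y * y + z * z < 1.0 else 0.0
    return (1.0, 0.3, 0.2), density

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Classic NeRF-style volume rendering: sample points along the ray,
    query the field, and alpha-composite front to back."""
    dt = (far - near) / n_samples
    acc_color = [0.0, 0.0, 0.0]
    transmittance = 1.0  # fraction of light not yet absorbed
    for i in range(n_samples):
        t = near + (i + 0.5) * dt
        point = tuple(o + t * d for o, d in zip(origin, direction))
        color, density = radiance_field(point, direction)
        alpha = 1.0 - math.exp(-density * dt)  # opacity of this segment
        weight = transmittance * alpha
        for c in range(3):
            acc_color[c] += weight * color[c]
        transmittance *= 1.0 - alpha
    return acc_color

# A ray through the sphere picks up its color; one that misses stays black.
hit = render_ray(origin=(0.0, 0.0, -2.0), direction=(0.0, 0.0, 1.0))
miss = render_ray(origin=(0.0, 2.0, -2.0), direction=(0.0, 0.0, 1.0))
```

Training a NeRF amounts to adjusting the `radiance_field` function (the network weights) until rays rendered this way reproduce the input photos; Instant NeRF’s contribution is making that fitting step fast enough to finish in seconds.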
"If traditional 3D representations like polygonal meshes are akin to vector images, NeRFs are like bitmap images: they densely capture the way light radiates from an object or within a scene," comments David Luebke, Vice President for graphics research at NVIDIA. "In that sense, Instant NeRF could be as important to 3D as digital cameras and JPEG compression have been to 2D photography – vastly increasing the speed, ease and reach of 3D capture and sharing."