NVIDIA has presented a new AI approach that generates fully textured 3D models from 2D images.
The paper will be presented at the annual Conference on Neural Information Processing Systems (NeurIPS) in Vancouver, British Columbia. The researchers call their new neural network a differentiable interpolation-based renderer, or DIB-R.
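DIB-R itself is NVIDIA's own system, but the core idea behind any differentiable renderer is that the rendered image is a smooth function of the underlying geometry, so an image loss can drive geometry updates by ordinary gradient descent. As a loose, one-dimensional sketch of that idea (all names and numbers here are illustrative, not from the paper):

```python
import numpy as np

def soft_render(edge_x, pixels, sharpness=5.0):
    """Toy differentiable 'renderer': a 1D image of a soft edge.

    Each pixel's intensity is a sigmoid of its distance to the edge,
    so the rendered image varies smoothly with edge_x -- the property
    that lets an image loss be backpropagated into the geometry.
    (Illustrative only; this is not DIB-R's actual formulation.)
    """
    return 1.0 / (1.0 + np.exp(-sharpness * (edge_x - pixels)))

def loss_grad(edge_x, pixels, target, sharpness=5.0):
    """Analytic gradient of the mean squared image loss w.r.t. edge_x."""
    img = soft_render(edge_x, pixels, sharpness)
    d_img = sharpness * img * (1.0 - img)   # d(pixel)/d(edge_x)
    return np.mean(2.0 * (img - target) * d_img)

pixels = np.linspace(0.0, 1.0, 32)      # 1D "screen" coordinates
target = soft_render(0.7, pixels)       # reference image: edge at x = 0.7
edge_x = 0.2                            # poor initial geometry guess
for _ in range(300):                    # gradient descent through the renderer
    edge_x -= 0.5 * loss_grad(edge_x, pixels, target)
# edge_x has now been pulled toward the true edge position, 0.7
```

A hard-edged (binary) rasterizer would make `loss_grad` zero almost everywhere; the soft, interpolation-based coverage is what makes the optimization possible, which is the "differentiable interpolation-based" part of DIB-R's name.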
The team states that they trained DIB-R on multiple datasets, including images previously turned into 3D assets and 3D models rendered from multiple angles. NVIDIA wrote that training the neural network to extrapolate the extra dimension takes about two days, after which it can transform a 2D photo into a 3D model in under 100 milliseconds.
The approach could be used to improve how machines interpret the world and understand the objects around them. Because inference is so fast, still frames captured from a live video stream could be converted to 3D models on the fly, or even whole videos, frame by frame.