NVIDIA's New Neural Network for Turning 2D Images into 3D Objects

The network generates 2D images of human and cat faces and turns them into 3D objects.

A team of researchers from NVIDIA and Stanford University unveiled EG3D, a new hybrid explicit-implicit network architecture that can generate high-resolution, multi-view-consistent 2D images of human and cat faces in real time, along with high-quality 3D geometry for the generated images.

According to the team, the framework decouples feature generation and neural rendering, which lets it leverage state-of-the-art 2D CNN generators, such as StyleGAN2, and inherit their efficiency and expressiveness. The goal of the project was to improve the computational efficiency and image quality of 3D GANs without overly relying on approximations that compromise multi-view consistency and shape quality.
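
To give a concrete picture of what "decoupling feature generation and neural rendering" can look like, here is a minimal, illustrative PyTorch sketch, not the authors' implementation: a stand-in for a StyleGAN2-style 2D CNN backbone produces explicit feature planes, a small MLP decodes sampled features into color and density, and standard volume rendering composites them along camera rays. All module names (FeatureBackbone, ImplicitDecoder, sample_planes, volume_render), shapes, and hyperparameters are assumptions chosen for demonstration; the actual architecture and code are linked below.

```python
# Minimal, illustrative sketch of a hybrid explicit-implicit generator
# (not the EG3D code): a 2D backbone emits explicit feature planes, an
# implicit MLP decodes sampled features, and volume rendering produces pixels.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureBackbone(nn.Module):
    """Stand-in for a StyleGAN2-like 2D CNN generator that outputs feature planes."""
    def __init__(self, latent_dim=512, channels=16, resolution=32):
        super().__init__()
        self.channels, self.resolution = channels, resolution
        self.to_planes = nn.Linear(latent_dim, 3 * channels * resolution * resolution)

    def forward(self, z):
        planes = self.to_planes(z)
        # Three axis-aligned 2D feature planes: an explicit representation
        # that can be queried at arbitrary 3D points.
        return planes.view(z.shape[0], 3, self.channels, self.resolution, self.resolution)


class ImplicitDecoder(nn.Module):
    """Lightweight MLP that maps aggregated plane features to color + density."""
    def __init__(self, channels=16, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(channels, hidden), nn.ReLU(), nn.Linear(hidden, 4))

    def forward(self, feats):
        out = self.mlp(feats)
        return torch.sigmoid(out[..., :3]), F.softplus(out[..., 3])  # rgb, density


def sample_planes(planes, points):
    """Bilinearly sample each feature plane at the 2D projections of 3D points."""
    b = planes.shape[0]
    projections = [points[..., [0, 1]], points[..., [0, 2]], points[..., [1, 2]]]
    feats = 0.0
    for i, uv in enumerate(projections):
        grid = uv.view(b, -1, 1, 2)                            # (B, N, 1, 2), coords in [-1, 1]
        sampled = F.grid_sample(planes[:, i], grid, align_corners=False)
        feats = feats + sampled.squeeze(-1).permute(0, 2, 1)   # (B, N, C)
    return feats / 3.0


def volume_render(rgb, sigma, deltas):
    """Standard alpha compositing of per-sample colors along each camera ray."""
    alpha = 1.0 - torch.exp(-sigma * deltas)
    ones = torch.ones_like(alpha[..., :1])
    trans = torch.cumprod(torch.cat([ones, 1.0 - alpha + 1e-10], dim=-1), dim=-1)[..., :-1]
    return ((alpha * trans).unsqueeze(-1) * rgb).sum(dim=-2)


# Toy forward pass: one latent code, 8 rays, 16 depth samples per ray.
backbone, decoder = FeatureBackbone(), ImplicitDecoder()
z = torch.randn(1, 512)
planes = backbone(z)
points = torch.rand(1, 8, 16, 3) * 2.0 - 1.0                   # 3D sample positions in [-1, 1]
feats = sample_planes(planes, points.view(1, -1, 3)).view(1, 8, 16, -1)
rgb, sigma = decoder(feats)
pixels = volume_render(rgb, sigma, torch.full_like(sigma, 0.1))
print(pixels.shape)                                            # torch.Size([1, 8, 3])
```

The point of the split is that the expensive, expressive part of image synthesis stays in an efficient 2D convolutional generator, while the neural-rendering stage only decodes and composites sampled features, which is what makes real-time, multi-view-consistent generation feasible.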

Click here to learn more about the network and access its code.
