NVIDIA's New AI For Creating Textured 3D Meshes From Text Prompts

The meshes come out fully textured and in high quality.

A team of researchers from NVIDIA has presented Magic3D, a brand-new text-to-3D tool capable of creating fully textured 3D mesh models from text prompts. Utilizing low- and high-resolution diffusion priors to learn the 3D representation of the target content, the tool synthesizes high-resolution 3D content in less than 40 minutes, and the resulting meshes can then be used in a variety of rendering engines.

Moreover, Magic3D brings a wide array of editing capabilities, enabling users to fine-tune the diffusion models and modify parts of the text prompt to produce an edited high-resolution 3D mesh.

"We utilize a two-stage coarse-to-fine optimization framework for fast and high-quality text-to-3D content creation," commented the team. "In the first stage, we obtain a coarse model using a low-resolution diffusion prior and accelerate this with a hash grid and sparse acceleration structure. In the second stage, we use a textured mesh model initialized from the coarse neural representation, allowing optimization with an efficient differentiable renderer interacting with a high-resolution latent diffusion model."

Learn more about Magic3D here. Also, don't forget to join our Reddit page and our Telegram channel, and follow us on Instagram and Twitter, where we share breakdowns, the latest news, awesome artworks, and more.
