A team of researchers from the Technical University of Munich and Snap Research has recently published a paper unveiling Text2Tex, a novel method for generating textures for 3D models from text prompts.
According to the team, the technique incorporates inpainting into a pre-trained depth-aware image diffusion model to synthesize high-resolution partial textures from multiple viewpoints. Additionally, the researchers introduced an automatic view sequence generator that determines the next best view for updating the partial texture, which helps avoid artifacts in the generated textures.
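To illustrate the idea, the sketch below shows what such a progressive, view-by-view texturing loop could look like: each iteration picks the candidate viewpoint exposing the most untextured surface, then inpaints only that region conditioned on the rendered depth. This is a minimal illustration, not the authors' implementation; all function names (`render_depth_and_visibility`, `depth_aware_inpaint`) and the random stand-in outputs are assumptions, whereas the real pipeline rasterizes the mesh and calls a depth-aware diffusion inpainting model.

```python
# Hypothetical sketch of a progressive, depth-aware texturing loop.
# Helper functions are stubs standing in for a mesh renderer and a
# depth-conditioned diffusion inpainting model.
import numpy as np

TEX_RES = 512          # resolution of the UV texture atlas
NUM_VIEWS = 8          # candidate viewpoints around the object

texture = np.zeros((TEX_RES, TEX_RES, 3), dtype=np.float32)   # RGB texture atlas
textured_mask = np.zeros((TEX_RES, TEX_RES), dtype=bool)      # which texels are already filled

def render_depth_and_visibility(view_id):
    """Stub: returns a depth map and the set of texels visible from a viewpoint.
    A real implementation would rasterize the mesh from that camera."""
    depth = np.random.rand(TEX_RES, TEX_RES).astype(np.float32)
    visible = np.random.rand(TEX_RES, TEX_RES) > 0.5
    return depth, visible

def depth_aware_inpaint(depth, known_rgb, inpaint_mask, prompt):
    """Stub: stands in for a depth-conditioned diffusion inpainting model
    that synthesizes appearance only where inpaint_mask is True."""
    generated = np.random.rand(*known_rgb.shape).astype(np.float32)
    return np.where(inpaint_mask[..., None], generated, known_rgb)

prompt = "a weathered leather armchair"
for _ in range(NUM_VIEWS):
    # Next-best-view heuristic: pick the viewpoint exposing the most untextured area.
    candidates = [render_depth_and_visibility(v) for v in range(NUM_VIEWS)]
    scores = [np.count_nonzero(vis & ~textured_mask) for _, vis in candidates]
    best = int(np.argmax(scores))
    depth, visible = candidates[best]

    # Inpaint only the visible-but-untextured region, keeping existing texels fixed.
    inpaint_mask = visible & ~textured_mask
    texture = depth_aware_inpaint(depth, texture, inpaint_mask, prompt)
    textured_mask |= inpaint_mask
```

The key design choice the loop mimics is that already-textured regions are held fixed while new viewpoints only fill in what they newly reveal, which is how the view sequence generator keeps successive updates consistent.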