SceneTex generates textures that accurately fit scene objects.
By now, we've seen plenty of AI tools that can generate 3D models. However, untextured meshes are not enough for a good-looking scene. So meet SceneTex – a method that generates high-quality textures for 3D indoor scenes based on text prompts.
If you have an indoor scene already, you can describe the style you want to see, and SceneTex will put some textures on top. What makes it interesting is not only the time you can save on texturing but also how well the textures fit the models.
As you can see from the comparisons in the video, similar tools often blend objects together, so one object's texture bleeds onto a neighboring one. According to the creators of SceneTex, their solution doesn't have this problem, or at least it's far less prominent than in competing methods.
So how does it work? The target mesh is first projected to a viewpoint via a rasterizer, and an RGB image is rendered with the proposed multiresolution texture field module: each rasterized UV coordinate is used to sample UV embeddings from a multiresolution texture.
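To make the texture field idea concrete, here is a minimal PyTorch sketch, not the authors' code: several learnable 2D feature grids at different resolutions are bilinearly sampled at the rasterized UV coordinates, and the per-level features are concatenated into one embedding per pixel. Class and parameter names here are hypothetical.

```python
import torch
import torch.nn.functional as F

class MultiResTextureField(torch.nn.Module):
    # Hypothetical multiresolution texture field: learnable 2D feature grids
    # at several resolutions, sampled at rasterized UV coordinates.
    def __init__(self, resolutions=(64, 128, 256, 512), feat_dim=8):
        super().__init__()
        self.grids = torch.nn.ParameterList(
            torch.nn.Parameter(0.01 * torch.randn(1, feat_dim, r, r))
            for r in resolutions
        )

    def forward(self, uv):
        # uv: (N, 2) UV coordinates in [0, 1], as produced by the rasterizer
        coords = uv.view(1, 1, -1, 2) * 2.0 - 1.0  # grid_sample expects [-1, 1]
        feats = [
            F.grid_sample(g, coords, mode="bilinear", align_corners=True)
            for g in self.grids
        ]                                           # each: (1, feat_dim, 1, N)
        stacked = torch.cat(feats, dim=1)           # (1, levels * feat_dim, 1, N)
        return stacked.squeeze(0).squeeze(1).t()    # (N, levels * feat_dim)
```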
After that, the UV embeddings are mapped to an RGB image via a cross-attention texture decoder. Finally, the Variational Score Distillation (VSD) loss is computed from the latent features to update the texture field.
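Continuing the sketch, the decoder below lets each pixel's UV embedding cross-attend over a small set of learned tokens before a small MLP maps it to RGB. The loss is only a runnable placeholder: real VSD gets its gradient from a pretrained diffusion model. Again, this is an illustrative approximation, not SceneTex's implementation.

```python
import torch

class CrossAttnTextureDecoder(torch.nn.Module):
    # Hypothetical cross-attention texture decoder: each pixel's UV embedding
    # (query) attends over learned latent tokens (keys/values), then an MLP
    # maps the result to an RGB value.
    def __init__(self, embed_dim=32, num_tokens=16, num_heads=4):
        super().__init__()
        self.tokens = torch.nn.Parameter(torch.randn(num_tokens, embed_dim))
        self.attn = torch.nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.to_rgb = torch.nn.Sequential(
            torch.nn.Linear(embed_dim, embed_dim), torch.nn.ReLU(),
            torch.nn.Linear(embed_dim, 3), torch.nn.Sigmoid(),
        )

    def forward(self, uv_embeddings):                # (N, embed_dim)
        q = uv_embeddings.unsqueeze(0)               # (1, N, D) queries
        kv = self.tokens.unsqueeze(0)                # (1, T, D) keys/values
        out, _ = self.attn(q, kv, kv)                # cross-attention
        return self.to_rgb(out.squeeze(0))           # (N, 3) RGB in [0, 1]

def distillation_loss(rgb):
    # Placeholder only: real VSD scores the rendered image with a pretrained
    # diffusion model; a dummy target keeps this sketch runnable.
    return ((rgb - 0.5) ** 2).mean()

# One optimization step over the texture field and decoder:
field = MultiResTextureField()                       # from the sketch above
decoder = CrossAttnTextureDecoder(embed_dim=32)      # 4 levels * 8 features
opt = torch.optim.Adam([*field.parameters(), *decoder.parameters()], lr=1e-3)

uv = torch.rand(1024, 2)                             # stand-in for rasterized UVs
loss = distillation_loss(decoder(field(uv)))
loss.backward()
opt.step()
```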
If you need a mesh for your experiments, check out these AI generators that make 3D models:
Image credits: ByteDance; Jingxiang Sun et al.; Yawar Siddiqui et al.
Also, join our 80 Level Talent platform and our Telegram channel, follow us on Instagram, Twitter, and LinkedIn, where we share breakdowns, the latest news, awesome artworks, and more.