A New GAN-Based System for Interactive 3D Face Editing

This generative model enables local control of the facial shape and texture, as well as real-time, interactive editing.

A team of researchers from Tencent AI Lab, ByteDance, and Tsinghua University presented IDE-3D, a new generative model for free-view face drawing, editing, and style control. Powered by a GAN, the system enables local control of the facial shape and texture, as well as real-time, interactive editing, providing a comprehensive toolset for creating realistic and customizable digital portraits.

According to the team, the system consists of three major components: a 3D-semantics-aware generative model that produces view-consistent, disentangled face images and semantic masks; a hybrid GAN inversion approach that initializes the latent codes from a semantic encoder and a texture encoder before refining them; and a canonical editor that enables efficient manipulation of semantic masks in the canonical view and produces high-quality editing results.
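To make the hybrid inversion idea more concrete, here is a minimal, illustrative sketch of how encoder-initialized inversion typically works: encoders predict initial latent codes from a semantic mask and a portrait image, and a short optimization pass then refines those codes so the generator reproduces the input. All module names, resolutions (64x64), the number of semantic classes (19), and the losses below are placeholder assumptions for illustration, not the authors' actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 512  # assumed latent size, placeholder

class Encoder(nn.Module):
    """Toy stand-in for the semantic or texture encoder."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, LATENT_DIM),
        )

    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Toy stand-in for the 3D-semantics-aware generator: maps a shape code
    and a texture code to an RGB image plus a semantic mask."""
    def __init__(self, num_classes=19):
        super().__init__()
        self.num_classes = num_classes
        self.to_rgb = nn.Linear(2 * LATENT_DIM, 3 * 64 * 64)
        self.to_seg = nn.Linear(2 * LATENT_DIM, num_classes * 64 * 64)

    def forward(self, w_sem, w_tex):
        w = torch.cat([w_sem, w_tex], dim=-1)
        rgb = self.to_rgb(w).view(-1, 3, 64, 64)
        seg = self.to_seg(w).view(-1, self.num_classes, 64, 64)
        return rgb, seg

def hybrid_invert(generator, sem_enc, tex_enc, image, mask, steps=100, lr=0.01):
    """Encoders provide the initial latent codes; a few optimization steps
    refine them against reconstruction losses on the image and the mask."""
    w_sem = sem_enc(mask).detach().requires_grad_(True)
    w_tex = tex_enc(image).detach().requires_grad_(True)
    opt = torch.optim.Adam([w_sem, w_tex], lr=lr)
    for _ in range(steps):
        rgb, seg = generator(w_sem, w_tex)
        loss = F.l1_loss(rgb, image) + F.cross_entropy(seg, mask.argmax(1))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w_sem.detach(), w_tex.detach()

if __name__ == "__main__":
    gen, sem_enc, tex_enc = Generator(), Encoder(19), Encoder(3)
    image = torch.rand(1, 3, 64, 64)  # target portrait (random placeholder)
    mask = F.one_hot(torch.randint(0, 19, (1, 64, 64)), 19).permute(0, 3, 1, 2).float()
    w_sem, w_tex = hybrid_invert(gen, sem_enc, tex_enc, image, mask, steps=10)
    # Editing: re-encode a user-modified mask for a new shape code while
    # keeping the texture code, then render the edited face.
    edited_rgb, _ = gen(sem_enc(mask), w_tex)
```

The design point this sketch mirrors is that encoder initialization gives interactive speed, while the short optimization pass recovers fine detail that a single encoder pass would miss.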

You can learn more about the system here.
