A Neural Network for Turning Portraits into Cartoon & Anime-Style Images

A team of researchers proposed a new method that can synthesize artistic portraits in a target style from only a small set of exemplars.

A team of researchers from Alibaba Group presented DCT-Net, a new image translation method for turning human portraits into stylized versions of themselves. The system requires only around 100 exemplars to learn a particular art style, such as 3D cartoon, sketch, anime, or hand-drawn illustration, and is capable of producing high-quality style transfer results. What's more, the system enables full-body image translation via a network trained on only partial observations, i.e., stylized heads.

According to the paper, DCT-Net consists of three modules: a content adapter that borrows the powerful prior from source photos to calibrate the content distribution of target samples; a geometry expansion module that uses affine transformations to release spatially semantic constraints; and a texture translation module that leverages samples produced by the calibrated distribution to learn a fine-grained conversion.
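To make that pipeline concrete, here is a minimal, hypothetical PyTorch sketch of how the three stages could fit together. The module internals are placeholder layers standing in for the real networks (the paper's content adapter is built on a pretrained source-domain generator), and all names and shapes are illustrative assumptions rather than the authors' code:

```python
import random

import torch
import torch.nn as nn
import torchvision.transforms.functional as TF


class ContentAdapter(nn.Module):
    # Placeholder for the generator that calibrates the target content
    # distribution using the prior learned from source photos.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, img):
        return self.net(img)


def geometry_expansion(img, max_deg=15.0, scale_range=(0.8, 1.2)):
    # Random affine transform that loosens spatial-semantic constraints,
    # so the texture translator sees faces at varied rotations and scales.
    angle = random.uniform(-max_deg, max_deg)
    scale = random.uniform(*scale_range)
    return TF.affine(img, angle=angle, translate=[0, 0], scale=scale, shear=[0.0])


class TextureTranslator(nn.Module):
    # Placeholder for the fully convolutional image-to-image network that
    # learns the fine-grained photo-to-style conversion.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, photo):
        return self.net(photo)


# Toy forward pass: calibrate content, expand geometry, translate texture.
photo = torch.rand(1, 3, 256, 256)
calibrated = ContentAdapter()(photo)
augmented = geometry_expansion(calibrated)
stylized = TextureTranslator()(augmented)
print(stylized.shape)  # torch.Size([1, 3, 256, 256])
```

Because the texture translator is fully convolutional, the same weights can in principle be applied to inputs of other sizes at inference time, which is consistent with the full-body translation the team describes despite training on head crops only.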

"Given limited style exemplars, our method can synthesize artistic portraits in corresponding styles, excelling in content (e.g., identity and accessories) preservation, and handling complicated faces with heavy occlusions, makeup, or rare poses. Our method also enables full-body image translation by using only head observation for training samples," commented the team.

You can read the full paper, "DCT-Net: Domain-Calibrated Translation for Portrait Stylization," on arXiv and access the system's code in the project's GitHub repository.
