Turning Rudimentary Sketches Into Lifelike Portraits

A team of researchers from the Chinese Academy of Sciences and the City University of Hong Kong presented a local-to-global approach for generating lifelike human portraits from rudimentary sketches.

The new approach is based on learning a space of plausible face component sketches from real face sketch images and, for each new drawing, retrieving the learned components that best approximate the input. Treating the input sketch as a soft constraint rather than a hard one lets the system produce high-quality face images even from rough, imprecise strokes.
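Conceptually, each component of the input sketch is mapped into the feature space learned from real sketches and replaced by a blend of its nearest stored codes before synthesis. The sketch below illustrates that refinement step only; the function name, the inverse-distance weighting, and the toy data are assumptions for illustration, not the authors' exact interpolation scheme.

```python
import numpy as np

def refine_component_code(query, bank, k=5, eps=1e-8):
    """Blend a component's feature vector with its K nearest neighbours
    drawn from a bank of codes learned from real face sketches.

    query: (d,) feature vector of one input-sketch component.
    bank:  (n, d) feature vectors of the same component from training data.
    Inverse-distance weighting is a simple stand-in for the paper's scheme.
    """
    dists = np.linalg.norm(bank - query, axis=1)   # distance to every stored code
    idx = np.argsort(dists)[:k]                    # the K closest codes
    weights = 1.0 / (dists[idx] + eps)             # closer codes weigh more
    weights /= weights.sum()                       # normalise to a convex combination
    return weights @ bank[idx]                     # refined, "plausible" feature vector

# Toy usage: refine a 512-D code for one component against 1,000 stored codes.
rng = np.random.default_rng(0)
bank = rng.normal(size=(1000, 512))
query = rng.normal(size=512)
refined = refine_component_code(query, bank, k=10)
print(refined.shape)  # (512,)
```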

The approach comprises three core modules: CE (Component Embedding), FM (Feature Mapping), and IS (Image Synthesis). The first adopts an auto-encoder architecture and separately learns feature descriptors for five facial components (left-eye, right-eye, nose, mouth, and the remainder of the face). The other two modules form a deep-learning sub-network for conditional image generation.
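A minimal PyTorch sketch of how such a three-stage pipeline could be wired together is shown below. The layer counts, patch and latent sizes, and module interfaces are illustrative assumptions; this only mirrors the described flow of per-component auto-encoders feeding a feature-mapping and synthesis stage, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

COMPONENTS = ["left_eye", "right_eye", "nose", "mouth", "remainder"]

class ComponentAutoEncoder(nn.Module):
    """CE: encodes one cropped sketch component into a latent code and
    decodes it back; only the encoder is needed at inference time."""
    def __init__(self, patch=64, latent=512):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # patch -> patch/2
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # -> patch/4
            nn.Flatten(),
            nn.Linear(64 * (patch // 4) ** 2, latent),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent, 64 * (patch // 4) ** 2),
            nn.Unflatten(1, (64, patch // 4, patch // 4)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

class FeatureMapping(nn.Module):
    """FM: turns a (refined) latent code back into a spatial feature map."""
    def __init__(self, latent=512, channels=32, size=16):
        super().__init__()
        self.channels, self.size = channels, size
        self.fc = nn.Linear(latent, channels * size * size)

    def forward(self, z):
        return self.fc(z).view(-1, self.channels, self.size, self.size)

class ImageSynthesis(nn.Module):
    """IS: fuses the per-component feature maps and decodes an RGB portrait."""
    def __init__(self, channels=32, n_components=len(COMPONENTS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels * n_components, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=4),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, feature_maps):
        return self.net(torch.cat(feature_maps, dim=1))

# Wiring the three stages together on dummy component patches.
ce = {name: ComponentAutoEncoder() for name in COMPONENTS}
fm = {name: FeatureMapping() for name in COMPONENTS}
synth = ImageSynthesis()

patches = {name: torch.randn(1, 1, 64, 64) for name in COMPONENTS}  # cropped sketch parts
codes = {name: ce[name].encoder(patches[name]) for name in COMPONENTS}
maps = [fm[name](codes[name]) for name in COMPONENTS]
portrait = synth(maps)
print(portrait.shape)  # torch.Size([1, 3, 64, 64])
```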

According to the team's qualitative and quantitative evaluations, the method produces visually more pleasing face images than comparable approaches. They also note that the tool is easy to use even for people with no drawing skills.

You can find the paper, DeepFaceDrawing: Deep Generation of Face Images from Sketches, prepared for SIGGRAPH 2020, here.
