A group of scientists has introduced a new GAN that can turn your sketches into realistic pictures.
Sheng-Yu Wang, David Bau, and Jun-Yan Zhu have presented a new approach that rewrites a generative adversarial network so that a rough sketch of a cat or a horse is turned into samples that look like real photos. Their method takes one or a few hand-drawn sketches and customizes an off-the-shelf GAN to match the input sketch. The same noise vector z is fed to both the pre-trained and the customized model, so their outputs can be compared directly. While the new model changes an object's shape and pose, other visual cues, such as color, texture, and background, are faithfully preserved after the modification.
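To illustrate the "same noise z" comparison, here is a minimal PyTorch sketch. The generator below is a toy placeholder, not the authors' model (they build on an off-the-shelf pre-trained GAN), and the weight-copying step merely stands in for loading a fine-tuned checkpoint:

```python
import torch
import torch.nn as nn

# Placeholder generator for illustration only; the real work customizes a
# pre-trained off-the-shelf GAN, whose architecture is far more involved.
def make_generator(z_dim=512, img_size=64):
    return nn.Sequential(
        nn.Linear(z_dim, 3 * img_size * img_size),
        nn.Tanh(),
        nn.Unflatten(1, (3, img_size, img_size)),
    )

g_original = make_generator()
g_custom = make_generator()
# In practice g_custom would be loaded from the sketch-customized checkpoint;
# copying the original weights here just keeps the snippet runnable as-is.
g_custom.load_state_dict(g_original.state_dict())
g_original.eval()
g_custom.eval()

# Feed the SAME noise vector z to both models: any difference between the two
# outputs then comes only from the rewritten weights (object shape and pose),
# while color, texture, and background stay close to the original sample.
z = torch.randn(1, 512)
with torch.no_grad():
    img_before = g_original(z)  # sample from the pre-trained GAN
    img_after = g_custom(z)     # same z through the customized GAN
```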
"In this work, we present a method, GAN Sketching, for rewriting GANs with one or more sketches, to make GANs training easier for novice users. In particular, we change the weights of an original GAN model according to user sketches. We encourage the model’s output to match the user sketches through a cross-domain adversarial loss," comments the team. "Furthermore, we explore different regularization methods to preserve the original model's diversity and image quality. Experiments have shown that our method can mold GANs to match shapes and poses specified by sketches while maintaining realism and diversity."
Some examples created by this GAN:
You can learn more here.