NVIDIA's New Neural Network for Image Editing

Meet EditGAN, a neural network that lets users edit images by applying pre-learned editing vectors.

Huan Ling, Karsten Kreis, Daiqing Li, Seung Wook Kim, Antonio Torralba, and Sanja Fidler from NVIDIA have presented EditGAN, a new neural network that lets users edit images. It builds on a GAN framework that jointly models images and their semantic segmentation, which can then be modified by user input. The editing itself is performed by applying previously learned editing vectors, manipulating images at interactive rates.
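The core idea of "applying previously learned editing vectors" can be sketched as simple latent-space arithmetic: each edit is a direction in the GAN's latent space, added to an image's latent code with a chosen strength and then rendered by the generator. The sketch below is illustrative only — the function and variable names (`apply_edits`, `smile`, `headlight`) are hypothetical and not NVIDIA's actual API, and random vectors stand in for real learned edit directions.

```python
import numpy as np

def apply_edits(w, edit_vectors, scales):
    """Compose multiple pre-learned editing vectors onto a latent code.

    w            -- latent code representing one image
    edit_vectors -- list of learned edit directions (same shape as w)
    scales       -- per-edit strength; sign flips the edit's direction
    """
    w_edited = w.copy()
    for delta_w, scale in zip(edit_vectors, scales):
        w_edited += scale * delta_w
    return w_edited

# Toy example: a 512-dim latent code (StyleGAN-like dimensionality).
rng = np.random.default_rng(0)
w = rng.standard_normal(512)
smile = rng.standard_normal(512)      # stand-in for a learned "smile" edit
headlight = rng.standard_normal(512)  # stand-in for a "headlight" edit

# Edits compose by addition, which is what enables EditGAN's
# "straightforward compositionality of multiple edits".
w_new = apply_edits(w, [smile, headlight], [0.8, -0.5])
# w_new would then be passed through the GAN generator to render the result.
```

Because an edit is just a vector, applying it is cheap — one addition per edit — which is what makes interactive, real-time manipulation feasible once the vectors have been learned.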

According to the team, EditGAN is the first GAN-driven image editing framework that simultaneously offers very high-precision editing, requires very little annotated training data (and does not rely on external classifiers), runs interactively in real time, allows for straightforward compositionality of multiple edits, and works on real embedded, GAN-generated, and even out-of-domain images.

"Generative adversarial networks have recently found applications in image editing. However, most GAN-based image editing methods often require large-scale datasets with semantic segmentation annotations for training, only provide high-level control, or merely interpolate between different images," comments the team. "Here, we propose EditGAN, a novel method for high-quality, high-precision semantic image editing, allowing users to edit images by modifying their highly detailed part segmentation masks, e.g., drawing a new mask for the headlight of a car. EditGAN builds on a GAN framework that jointly models images and their semantic segmentations, requiring only a handful of labeled examples – making it a scalable tool for editing."

You can learn more about EditGAN here. Also, don't forget to join our new Reddit page and our new Telegram channel, and follow us on Instagram and Twitter, where we are sharing breakdowns, the latest news, awesome artworks, and more.
