Nvidia has recently presented an AI system that uses an unsupervised learning method to create mind-blowing fake videos. The system lets users change the weather, turn day into night, and alter almost anything else in a scene.
Previous techniques relied on massive amounts of paired data and had problems training machines to find their own patterns. Researchers had a hard time with tasks such as super-resolution (mapping a low-resolution image to a corresponding high-resolution image) and colorization (mapping a gray-scale image to a corresponding color image).
Unsupervised image-to-image translation aims at learning a joint distribution of images in different domains by using images from the marginal distributions in individual domains. Since there exists an infinite set of joint distributions that can arrive at the given marginal distributions, one could infer nothing about the joint distribution from the marginal distributions without additional assumptions. To address the problem, we make a shared-latent space assumption and propose an unsupervised image-to-image translation framework based on Coupled GANs. We compare the proposed framework with competing approaches and present high-quality image translation results on various challenging unsupervised image translation tasks, including street scene image translation, animal image translation, and face image translation. We also apply the proposed framework to domain adaptation and achieve state-of-the-art performance on benchmark datasets. Code and additional results are available in this https URL.
Ming-Yu Liu, Thomas Breuel, Jan Kautz
Read the paper
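To make the shared-latent space assumption from the abstract a bit more concrete, here is a minimal, hypothetical PyTorch sketch. It is an illustration under stated assumptions, not the authors' exact architecture: the class names, layer sizes, and where the weight sharing happens are all invented for this example. The key idea it shows is that two domain-specific encoders map images into one common latent code through shared layers, and two generators decode that code back into either domain, which is what allows translating an image from one domain (say, a sunny street) into the other without paired training examples.

```python
# Minimal sketch of the shared-latent-space idea behind unsupervised
# image-to-image translation with coupled GANs. Layer sizes and the
# choice of which layers are shared are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Domain-specific front end followed by layers shared across domains."""
    def __init__(self, shared):
        super().__init__()
        self.front = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.shared = shared  # weight-shared high-level layers

    def forward(self, x):
        return self.shared(self.front(x))  # z lives in the shared latent space

class Decoder(nn.Module):
    """Shared high-level layers followed by a domain-specific back end."""
    def __init__(self, shared):
        super().__init__()
        self.shared = shared  # weight-shared high-level layers
        self.back = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, z):
        return self.back(self.shared(z))

# The same module instances are reused by both domains, so their weights
# are shared -- this is what implements the shared-latent assumption.
enc_shared = nn.Conv2d(128, 256, 3, padding=1)
dec_shared = nn.Conv2d(256, 256, 3, padding=1)

E1, E2 = Encoder(enc_shared), Encoder(enc_shared)  # encoders for domain A / domain B
G1, G2 = Decoder(dec_shared), Decoder(dec_shared)  # generators for domain A / domain B

x_a = torch.randn(1, 3, 64, 64)  # a (random stand-in for an) image from domain A
z = E1(x_a)                      # encode into the shared latent code
x_ab = G2(z)                     # decode with domain B's generator: A -> B translation
x_aa = G1(z)                     # decode back into domain A: reconstruction path
print(x_ab.shape, x_aa.shape)    # both torch.Size([1, 3, 64, 64])
```

In the full framework each translated output would also be judged by a per-domain adversarial discriminator (the "coupled GAN" part), and reconstruction terms keep the shared code meaningful; the sketch above only shows the encode/decode wiring.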
The trained model can turn sunny days into rainy ones, create the equivalent of a “snow plow” filter for videos, and more.
Reality is becoming a strange thing thanks to Nvidia’s projects. Should we be worried? Let’s discuss!