NVIDIA Reveals GauGAN 2 with Text-to-Image Feature

Create photorealistic images with just a couple of words.

NVIDIA has presented GauGAN 2, a new, updated version of its wildly popular GauGAN AI. The main feature of GauGAN 2 is its ability to turn a simple written phrase or sentence into a photorealistic image using deep learning. All you need to do is type a phrase, and the AI generates the scene in real time. You can also add adjectives, and the model, which is based on generative adversarial networks, instantly modifies the picture.

You can see some examples in the video below:

What's more, the AI can be used to depict otherworldly landscapes. "Imagine, for instance, recreating a landscape from the iconic planet of Tatooine in the Star Wars franchise, which has two suns. All that's needed is the text 'desert hills sun' to create a starting point, after which users can quickly sketch in a second sun," NVIDIA explains.

"The AI model behind GauGAN2 was trained on 10 million high-quality landscape images using the NVIDIA Selene supercomputer, an NVIDIA DGX SuperPOD system that’s among the world’s 10 most powerful supercomputers. The researchers used a neural network that learns the connection between words and the visuals they correspond to like “winter,” “foggy” or “rainbow," comments the team. "Compared to state-of-the-art models specifically for text-to-image or segmentation map-to-image applications, the neural network behind GauGAN2 produces a greater variety and higher quality of images."

You can learn more here. Also, don't forget to join our new Reddit page and our new Telegram channel, and follow us on Instagram and Twitter, where we are sharing breakdowns, the latest news, awesome artworks, and more.
