
StyleGAN Turns Pixar Characters Into Real Humans

Here's how your favorite characters would look in real life.

Nathan Shipley has recently shared an awesome experiment with NVIDIA's StyleGAN and Pixel2Style2Pixel. He picked some images of Pixar characters and turned them into real human beings with the help of the generative adversarial network. 
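For readers curious about what happens under the hood, the general idea is GAN inversion: an encoder projects the cartoon portrait into StyleGAN's extended latent space (W+), and the pretrained StyleGAN generator then decodes those latents into a photorealistic face. Below is a minimal, self-contained sketch of that data flow. The two modules are toy stand-ins that only illustrate the tensor shapes typically used with a 1024x1024 FFHQ StyleGAN; they are not the actual pixel2style2pixel or StyleGAN APIs, and in practice you would load the pretrained networks from those repositories instead.

```python
# Sketch of a pSp-style inversion pipeline: cartoon image -> W+ latents -> "human" face.
# ToyEncoder and ToyGenerator are placeholders illustrating shapes and data flow only.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_STYLES, LATENT_DIM, IMG_SIZE = 18, 512, 1024  # typical for a 1024x1024 FFHQ StyleGAN

class ToyEncoder(nn.Module):
    """Stand-in for the pSp encoder: image -> W+ latents of shape (B, 18, 512)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, N_STYLES * LATENT_DIM),
        )

    def forward(self, x):
        return self.backbone(x).view(-1, N_STYLES, LATENT_DIM)

class ToyGenerator(nn.Module):
    """Stand-in for the StyleGAN generator: W+ latents -> RGB image in [-1, 1]."""
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(N_STYLES * LATENT_DIM, 3 * 64 * 64)

    def forward(self, w_plus):
        img = self.head(w_plus.flatten(1)).view(-1, 3, 64, 64)
        return torch.tanh(F.interpolate(img, size=(IMG_SIZE, IMG_SIZE)))

encoder, generator = ToyEncoder().eval(), ToyGenerator().eval()

# A dummy "Pixar character" image; in the real pipeline this would be the cartoon
# portrait, resized and normalized the same way the encoder was trained.
cartoon = torch.rand(1, 3, 256, 256) * 2 - 1

with torch.no_grad():
    w_plus = encoder(cartoon)        # project into the extended latent space W+
    human_face = generator(w_plus)   # decode the latents into a portrait

print(w_plus.shape, human_face.shape)  # (1, 18, 512) and (1, 3, 1024, 1024)
```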

For those of you unfamiliar with the project, StyleGAN is a generative adversarial network (GAN) introduced by NVIDIA researchers back in December 2018, with the source code released in February 2019. The implementation relies on NVIDIA's CUDA software, GPUs, and TensorFlow.

Earlier this year, the team revealed its latest experiment in machine learning for image creation, StyleGAN2, at CVPR 2020. The new version, based on the original StyleGAN, can generate a seemingly infinite number of portraits in an infinite variety of painting styles.

You can learn more here. Don't forget to join our new Telegram channel and our Discord, and follow us on Instagram and Twitter, where we share breakdowns, the latest news, awesome artwork, and more.
