AI Performs Semantic Style Transfer
9 August, 2017

Beautiful minds from Microsoft Research and Shanghai Jiao Tong University are sharing a new image-modification algorithm that takes AI image processing even further.

Recently we’ve seen a lot of very interesting developments in AI-powered image production technology. At the NVIDIA booth at SIGGRAPH 2017 you could find some very powerful real-time tech that lets you quickly add cracks to a material (Artomatix) or apply fun filters to a camera feed. The company invests heavily in AI-powered solutions that might change the whole way we produce digital images. But NVIDIA is not the only one: Allegorithmic is also doing some VERY cool R&D in this field (check out the Substance Days 2017 video from Hollywood), and it seems Microsoft is doing some fun stuff with this approach as well.

Scientists from Microsoft Research and Shanghai Jiao Tong University have recently published a paper called ‘Visual Attribute Transfer through Deep Image Analogy’. Their new technique establishes ‘semantically-meaningful dense correspondences between two input images’. It’s sort of like Prisma, only much more versatile, and unlike Prisma, this algorithm can actually be downloaded and used by anyone interested.

Here’s how it works. The program takes two input images: Image 1 is a photograph, and Image 2 carries the desired style (a painting, in the original example). The output is the photo transformed to look like the painting, and there is a variety of style transfer possibilities here. So far, nothing new. The key feature of this method, however, is that it produces ‘semantically meaningful results for style transfer’: it can work with image pairs that look completely different visually but share similar semantic components.

For example, in this case, the algorithm notices that the images both have a nose.
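The paper builds these correspondences with deep CNN features and a coarse-to-fine PatchMatch scheme. As a much-simplified illustration of the core matching idea (not the authors’ actual pipeline), here is a numpy sketch that, for every spatial location in one feature map, finds the most similar feature vector in the other by cosine similarity; real feature maps from a pretrained network are replaced by toy arrays:

```python
import numpy as np

def dense_correspondence(feat_a, feat_b):
    """For each row (spatial location) of feat_a, shape (Na, C), return the
    index of the most similar row of feat_b by cosine similarity.
    A toy stand-in for the deep PatchMatch used in the paper."""
    a = feat_a / (np.linalg.norm(feat_a, axis=1, keepdims=True) + 1e-8)
    b = feat_b / (np.linalg.norm(feat_b, axis=1, keepdims=True) + 1e-8)
    sim = a @ b.T                # (Na, Nb) cosine similarity matrix
    return sim.argmax(axis=1)    # best match in B for each location in A

# Toy demo: four 8-channel "feature vectors"; B is A with rows reversed,
# so location i in A should match location 3-i in B.
rng = np.random.default_rng(0)
feat_a = rng.normal(size=(4, 8))
feat_b = feat_a[::-1].copy()
print(dense_correspondence(feat_a, feat_b))  # -> [3 2 1 0]
```

In the actual method this matching happens on semantic CNN features rather than raw pixels, which is why the algorithm can pair up a photographed nose with a painted one even when their colors and textures differ completely.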

This gives it a number of very interesting applications.

The first is the most evident – turning a regular photo into a stylized image.

The second is swapping the styles of two input images.

Third – style to photo. The results are pretty interesting, if a bit grotesque.

Fourth – color transfer. This lets you build some amazing variations from a single photo.
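The paper’s color transfer is guided by the dense correspondences above, but the basic flavor of the operation can be shown with a far simpler global version – Reinhard-style statistics matching, where each channel of the content image is shifted to the mean and standard deviation of the style image (this is an illustrative stand-in, not the authors’ method):

```python
import numpy as np

def match_color_stats(content, style):
    """Shift each channel of `content` to the per-channel mean/std of `style`.
    A simple global stand-in for correspondence-guided color transfer."""
    out = np.empty(content.shape, dtype=np.float64)
    for c in range(content.shape[2]):
        src = content[..., c].astype(np.float64)
        ref = style[..., c].astype(np.float64)
        out[..., c] = (src - src.mean()) / (src.std() + 1e-8) * ref.std() + ref.mean()
    return np.clip(out, 0, 255).astype(np.uint8)

# Toy demo: a dark image takes on the brightness statistics of a bright one.
rng = np.random.default_rng(1)
dark = rng.integers(0, 60, size=(32, 32, 3))
bright = rng.integers(180, 256, size=(32, 32, 3))
result = match_color_stats(dark, bright)
print(result.mean())  # close to bright.mean()
```

Because the real algorithm transfers color along semantic correspondences instead of globally, it can, for instance, recolor only the sky of one photo to match the sky of another.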

If you want to learn more about this research, check out the original paper and download the code at GitHub.
