Bringing Pictures to Life in EbSynth

Šárka Sochorová, Co-Founder of Secret Weapons, which has created EbSynth – a tool that brings pictures to life – told us about the idea behind the tool, explained how it works, and talked about the future of generative art.

Introduction

Hi, I'm Šárka Sochorová, and together with my Co-Founder Ondřej Jamriška we started Secret Weapons, a software company that makes powerful tools for visual artists. We met at the Czech Technical University in Prague, where Ondřej worked as a computer graphics researcher, and I joined the team to do my Ph.D. I had a background in Computer Science and Traditional Animation, so Ondřej's research was fascinating to me because it was the perfect merge of art and technology. He co-authored several papers on example-based synthesis (LazyFluids, StyLit, FaceStyle), which is a style-transfer method where you provide an example of the look you want to achieve, and the algorithm applies it to a 3D model, CG simulation, or photograph.

Soon after I joined, we started experimenting with transferring the style from paintings to arbitrary videos. That was the birth of EbSynth and the first project we worked on together. Recently, we published our latest research on practical pigment mixing for digital painting. Surprisingly, the majority of painting programs don’t mix colors the way real paints do, even though the mathematical model for simulating it is well known. To make realistic pigment mixing more practical, we created Mixbox. It hides all the physics inside an RGB-in, RGB-out box, making it easy to integrate into any painting software.
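To give a rough idea of what "RGB in, RGB out" means, here is a deliberately oversimplified sketch. It is not Mixbox's actual model; it just mixes two colors per channel with single-constant Kubelka-Munk theory, one well-known model for pigment-like mixing, while keeping plain RGB values at the interface:

```python
import numpy as np

def km_mix(rgb_a, rgb_b, t=0.5):
    """Mix two colors like pigments (toy single-constant Kubelka-Munk sketch).

    RGB in, RGB out: all the "physics" is hidden inside the function.
    This is an illustration only, NOT how Mixbox works internally.
    """
    eps = 1e-6
    # Treat each sRGB channel as a reflectance value in (0, 1).
    r_a = np.clip(np.asarray(rgb_a, dtype=float) / 255.0, eps, 1.0 - eps)
    r_b = np.clip(np.asarray(rgb_b, dtype=float) / 255.0, eps, 1.0 - eps)

    # Kubelka-Munk: K/S = (1 - R)^2 / (2R); absorption/scattering mixes linearly.
    ks_a = (1.0 - r_a) ** 2 / (2.0 * r_a)
    ks_b = (1.0 - r_b) ** 2 / (2.0 * r_b)
    ks = (1.0 - t) * ks_a + t * ks_b

    # Invert back to reflectance: R = 1 + K/S - sqrt((K/S)^2 + 2 K/S).
    r = 1.0 + ks - np.sqrt(ks ** 2 + 2.0 * ks)
    return np.round(r * 255.0).astype(int)

# Blue + yellow leans toward green, like real paints,
# instead of the muddy gray a plain RGB average would give.
print(km_mix([0, 33, 133], [252, 211, 0]))
```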

The Origins of Secret Weapons and EbSynth

Doing graphics research is really exciting. But after years of doing SIGGRAPH papers, Ondřej really wanted to start getting those techniques into the hands of artists. I loved the idea because I never saw myself as an academic anyway. We both enjoyed doing research, but we really wanted to turn it into something practical that regular people could actually use. We considered different options and decided that starting our own company was the best way to achieve this. Besides EbSynth, we had a list of other really cool tools that we wanted to make. We thought that these tools could be the secret weapons of animators and VFX artists. That's why we chose the name ‘Secret Weapons’.

The original inspiration for EbSynth came from Jakub Javora. He is an amazing concept artist, and he kept looking for ways to bring his paintings to life with some movement. So, Ondřej made him a tool to stick a painting onto a video and make it move. It was kind of primitive and the results were crude, but the idea was there. A few years later, we were approached by a production company that was looking for help with a rotoscoped movie they were trying to get off the ground. This made us revive the tool and enhance it with some synthesis technology we had been working on in the meantime. It kept getting better and better, and at some point, we wanted to share it with the world.

The Tech Behind It

Although many people think there is AI involved, there really isn't. The tech behind EbSynth is example-based synthesis – that's where the name EbSynth comes from. You provide a video and a painted keyframe – an example of your style. EbSynth breaks your painting into many tiny pieces, like a jigsaw puzzle. It then uses those pieces to assemble (synthesize) all the remaining video frames.

That way, EbSynth always works only with what the artist paints. It never introduces anything new that wasn't shown in the painted keyframe. This means that the artist actually has control over the stylized output. By modifying even small details in the keyframe painting, they can alter the visual style of the final animation. 
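For readers who want a concrete picture of the "jigsaw puzzle" idea, here is a toy, brute-force sketch of guided patch synthesis in Python. It is not EbSynth's actual algorithm (which is far faster and more sophisticated), but it shows the principle described above: every output pixel is copied from the painted keyframe, so nothing appears that the artist did not paint.

```python
import numpy as np

def stylize_frame(video_key, style_key, video_frame, patch=5):
    """Toy guided patch synthesis (an illustration, not EbSynth's algorithm).

    video_key:   the original video frame the artist painted over (H, W, 3)
    style_key:   the artist's painting of that frame               (H, W, 3)
    video_frame: another frame of the video to stylize             (H, W, 3)

    For every patch of video_frame, find the most similar patch in
    video_key and copy the corresponding pixel from style_key.
    """
    h, w, _ = video_frame.shape
    r = patch // 2

    # Collect all source patches from the keyframe (brute force: tiny images only).
    src_coords = [(y, x)
                  for y in range(r, video_key.shape[0] - r)
                  for x in range(r, video_key.shape[1] - r)]
    src_patches = np.stack([
        video_key[y - r:y + r + 1, x - r:x + r + 1].astype(float).ravel()
        for y, x in src_coords])

    out = np.zeros_like(style_key)
    for y in range(r, h - r):
        for x in range(r, w - r):
            tgt = video_frame[y - r:y + r + 1, x - r:x + r + 1].astype(float).ravel()
            best = np.argmin(((src_patches - tgt) ** 2).sum(axis=1))
            sy, sx = src_coords[best]
            out[y, x] = style_key[sy, sx]  # copy the painted pixel, not the video pixel
    return out
```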

How It Works

EbSynth should be pretty easy to use, but there are a few things to understand if you want really nice results. Let me summarize the workflow. First, you need a video. It can be live-action footage, a hand-drawn animation, or a 3D render. You then pick one keyframe and paint over it or edit it as you like. The video and the painted keyframe are your inputs. You feed them to EbSynth, and it distributes the style of your keyframe to all the remaining frames. 

Of course, it's almost never this easy, so let's get into the details.

a) Keyframe choice
The keyframe should reveal as much of the scene as possible because when you paint over it, you give EbSynth examples of how things should look. So, suppose I have a shot where a person suddenly smiles. In that case, the keyframe should contain that person's open eyes and an open mouth with teeth, if they are visible at any point. You show EbSynth how you want the eyes and teeth to look. With that information, it can create both the frames where the eyes and mouth are closed and the frames where they are open.

b) Match the painting with the video frame
Make sure that your painting matches the video frame as closely as possible. If a person in your video isn't wearing a hat and you paint one in the keyframe, you'll get some ugly artifacts. The animation is guided by the video. If you want a person with a hat in your painted result, make the actor wear one in the video. Use props – it's fun!

c) More Keyframes
Sometimes, one keyframe is enough. But you usually need more, and they need to be consistent. To achieve that, always start with just one keyframe. Feed it to EbSynth and get your first animated sequence out. Usually, the frames close to the keyframe are fine, but the further you get, the more broken they look. That's a good location for a new keyframe, but don't paint it from scratch. Use the broken result of the first EbSynth run and fix it. Sometimes it needs just a couple of brushstrokes.

Creating new keyframes by fixing the broken frames from the previous synthesis runs ensures that your keyframes stay consistent. This is super important for the blending step that follows.

d) Blending multiple keyframes
Once you have a new keyframe, feed it to EbSynth and adjust the frame range of the output sequence. Let me give you an example. Suppose my video sequence runs from frames 0 to 100, and my first keyframe is frame number 50. I would run the first synthesis over the whole video – from 0 to 100. Let’s say that the result gets really messy at frame number 25. So, I fix it with a couple of brushstrokes and turn it into my second keyframe. This time, I only run the synthesis from frames 0 to 50. My third keyframe is frame number 75, so I run the last synthesis from 50 to 100. Now, I have three stylized image sequences. All I need to do to get the final result is blend them together.

EbSynth does not perform the blending by itself, but it can export your project to After Effects and prepare the initial cross-fades for you. You can then edit them if you want or just render the final video. If you don't have After Effects, use any other software that can do simple cross-fading.
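If you prefer to script the blending yourself, something along these lines will do. This is a minimal numpy sketch of my own, not what the After Effects export produces; the weighting scheme (falling off with distance from each run's keyframe) and the helper name `blend_runs` are assumptions for illustration:

```python
import numpy as np

def blend_runs(runs):
    """Cross-fade overlapping synthesis runs into one final sequence.

    runs: list of (keyframe_index, frames) pairs, where `frames` maps frame
    numbers to float image arrays of identical shape. Wherever several runs
    cover the same frame, the images are averaged with weights that decay
    with distance from each run's keyframe, approximating a cross-fade.
    """
    out = {}
    all_frames = sorted({f for _, frames in runs for f in frames})
    for f in all_frames:
        acc, total = 0.0, 0.0
        for key, frames in runs:
            if f in frames:
                w = 1.0 / (1.0 + abs(f - key))  # closer to the keyframe = higher weight
                acc = acc + w * frames[f]
                total += w
        out[f] = acc / total
    return out

# Example from the text: keyframes at 50, 25 and 75 with their synthesis ranges.
# final = blend_runs([(50, run_0_to_100), (25, run_0_to_50), (75, run_50_to_100)])
```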

e) Use layers
The last and maybe the most important tip is to work with layers. EbSynth can handle the alpha channel, and it will give you results on a transparent background. That way, you can separate your foreground figures from the background and work on them individually. 

This will immediately give you cleaner edges in your result. By dividing your scene into more layers, you get more control over details. For example, you can have one keyframe for the body and three keyframes for the head, if needed. Just paint the head on a transparent background and feed it to EbSynth. It will give you an animated face on a transparent background that you can later compose with the rest of the body. Using layers leads to a bigger compositing and blending project, but you can achieve wonderful results.
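If you end up compositing the layers outside of After Effects, the standard "over" operator is all you need to stack them back together. A minimal numpy sketch, assuming straight-alpha RGBA images with float values in [0, 1]:

```python
import numpy as np

def over(fg_rgba, bg_rgba):
    """Composite a straight-alpha RGBA foreground over a background ("over" operator)."""
    fa = fg_rgba[..., 3:4]
    ba = bg_rgba[..., 3:4]
    out_a = fa + ba * (1.0 - fa)
    out_rgb = (fg_rgba[..., :3] * fa + bg_rgba[..., :3] * ba * (1.0 - fa)) / np.maximum(out_a, 1e-8)
    return np.concatenate([out_rgb, out_a], axis=-1)

# Stack the separately synthesized layers front to back, e.g.:
# frame = over(head_rgba, over(body_rgba, background_rgba))
```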

Changes and Challenges

During development, there were many technical challenges: detail preservation, temporal coherence, speed, etc. But these were expected issues, and Ondřej in particular was really great at solving them. However, neither of us had any experience in marketing. So, after releasing the Alpha version, it was pretty challenging to let people know EbSynth exists. We wanted to get it to artists and see whether they found it useful or not. Based on the reaction, we were either going to keep working on it or drop it. That's why we are so proud of all the artists who embraced EbSynth and included it in their workflows. There are so many exciting and wonderful animations made with EbSynth on the internet today. That makes us truly happy. It motivates us to keep going and gives us a reason to finish EbSynth 1.0.

Joel Haver is particularly special to us because he made EbSynth animation a part of his signature style, and that’s just amazing – he’s a legend.

We haven’t introduced that many changes yet. The Alpha version was the first prototype, just to see whether people would find it interesting and see some value in it. When we noticed that artists had started using it, we released an improved Beta version. It is a lot faster, the interface is a bit more user-friendly, and we added direct export to After Effects to make the blending easier. The most significant change will come with the release of EbSynth 1.0, which will have a completely new and interactive user interface to make the workflow smooth and efficient.

Competition

We are definitely keeping a close eye on similar tools. Rather than being scared that EbSynth might get replaced, we’re focusing on making a useful and fun-to-use tool. We believe that if we do it right, we can earn a spot in the industry. Our plan has been pretty clear to us from the beginning. We want to make EbSynth available to everyone: individual artists, small studios, and also big studios, who usually have specific workflows and need some customization. So, we knew that we didn’t want to sell just the technology. We wanted to finish the product ourselves and offer it to the users. And that’s still our goal.

Plans

Our goal is to make EbSynth interactive and easy to use. So we're currently focusing on speed and user interface. The main challenge is to improve the performance to a point that enables real-time preview. This will allow artists to see the effects of their changes immediately without having to wait for the offline render to finish. Besides preview, the interface will also contain an interactive timeline to make the process of adding, editing, and blending keyframes fast and smooth. Essentially, we want artists to focus purely on the creative side of the process and not get slowed down by complicated workflows. We're aiming for the release in Q3 2022.

The Future of Generative Art

It looks like there is a bit of a bubble around AI today. Anything with the AI or ML label seems sensational, including generative art. So it probably has some future, because people like to discuss it, researchers want to improve it, and artists like to play with it. But I'm not concerned that generative art will actually replace real artists. Aaron Hertzmann wrote a very nice, in-depth essay on this topic, “Computers Do Not Make Art, People Do”.

We have noticed AI-based style transfer tools such as Artbreeder and Toonify.

Toonify – sometimes it works, sometimes it doesn’t. Take it or leave it.

Currently, the results from tools like these are somewhat unpredictable and hard to control, which limits their utility for production use. However, these tools can be a good source of fun and inspiration, which is positive.

Šárka Sochorová, Co-Founder of Secret Weapons

Interview conducted by Arti Sergeev
