At SIGGRAPH 2015 we had a chat with Dynamixyz CEO Dr. Gaspard Breton and head of R&D Nicolas Stoiber about the company's award-winning video-based facial capture and analysis solution, which is accessible to game creators, VFX professionals, and indie developers. They discussed the use of real-time facial animation in theatre plays, movies, and eventually conference keynotes.
Our company is based in France and specializes in facial motion capture and motion retargeting. The idea is to give game creators and VFX professionals simple-to-use tools, integrated into a single piece of software, for producing animation from the motions of performing actors. It is accessible to non-professionals as well.
These technologies have been used in the industry for years. Big studios like ILM or Weta Digital have in-house pipelines, built up over the years by engineers and research scientists, that capture the movements of faces (sometimes markerless), work out how a face moves and behaves (where the eyebrows go on a specific shot, for example), and translate that into motion to animate characters, creatures, robots, and so on.
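The retargeting step that pipeline describes can be illustrated with a minimal sketch. This is not Dynamixyz's actual method; it is a hypothetical example where one tracked measurement (lip separation from 2D landmarks) is normalized into a blendshape weight a character rig could apply. All landmark values and thresholds are invented for illustration.

```python
# Hypothetical sketch of landmark-to-blendshape retargeting:
# measure mouth opening from tracked 2D landmarks and map it onto a
# character's "jaw_open" blendshape weight. All values are invented.

def mouth_open_weight(upper_lip_y, lower_lip_y,
                      closed_dist=2.0, open_dist=30.0):
    """Normalize lip separation (in pixels) into a 0..1 blendshape weight."""
    dist = abs(lower_lip_y - upper_lip_y)
    t = (dist - closed_dist) / (open_dist - closed_dist)
    return max(0.0, min(1.0, t))  # clamp into the valid weight range

def retarget(frames):
    """Per-frame weights that a rig could apply to its jaw_open shape."""
    return [mouth_open_weight(u, l) for u, l in frames]

# Tracked (upper_lip_y, lower_lip_y) pairs for three video frames
frames = [(100.0, 102.0), (100.0, 116.0), (100.0, 130.0)]
print(retarget(frames))  # → [0.0, 0.5, 1.0]
```

A production pipeline would solve for many blendshape weights at once from dozens of landmarks, but the core idea is the same: turn tracked facial measurements into the parameters that drive a character rig.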
What we try to do at Dynamixyz is replicate that pipeline as software that people can buy, download, and use, giving them access to the same tools the big studios have been using, except cheaper and more intuitive.
The company is 5 years old and we've been at SIGGRAPH for 4 years, so we kind of think of it as our home. This is where we connect with our primary market: game development studios, especially the big ones that have to produce large volumes of facial animation for their games. This is obviously a trend that's been increasing over the years. Games get bigger, you have open worlds, you have a lot of NPC characters, and to make them believable you need to increase the volume of facial animation.
These kinds of tools are designed to produce large volumes of facial animation without the pain. So SIGGRAPH is where we meet the mid-size to large-scale studios and present the software and its latest improvements. We're an R&D company, so we try to improve the product year after year, and SIGGRAPH is the best opportunity to show the new features that have been integrated into the software. SIGGRAPH is where you get acquainted with the cutting edge in technology, science, and research.
Initially the target was mid- to large-scale studios, but it turned out indie developers wanted higher-quality facial animation as well. This matters because indie developers tend to do a lot of things themselves. They might not be specialists in facial animation, but they still need to do it anyway. In that regard, our tool is pretty handy because you don't need to be an expert in facial animation to use it.
For very small projects and very small studios, you know you're only going to use a facial animation tool for a few weeks to a few months, so you don't want to pay for a fully fledged software package and full license that's far too expensive. These studios tend to produce short sequences and small volumes of animation, so we have different licensing options to accommodate that.
There is a rental option where you pay by the month, and a pay-as-you-go option where you pay for the number of seconds of animation you produce with the software.
Future of Facial Animation
Facial animation is very exciting right now: it used to be a pure research problem, and now the industry is adopting it massively. If you think of games, you wouldn't consider producing one without some form of facial animation.
I think something really exciting that is coming up in the future is real-time. By real-time, I mean applications like theatre plays where you would merge live acting with virtual characters that would play among the actors in real-time, this includes TV shows as well. We’ve been involved in a few projects. We’re also thinking of conference keynotes that can use virtual avatars, this is something we’re working on currently.