At SIGGRAPH we met with Kim Libreri (CTO of Epic Games) and Alasdair Coull (Head of Research and Development at Weta Digital) to learn how Epic Games teamed up with Weta Digital to create the virtual reality experience – “Thief in the Shadows” – and about the challenges that came with creating it. They also elaborated on the differences between film and VR, lessons to be taken from the video game industry, and the evolution of VR and where it is heading.
The Epic Games and Weta Digital Collaboration
Kim Libreri: In November last year Jay Wilbur, Head of Business Development at Epic Games, suggested that we take a trip to the Southern Hemisphere to see what customers are doing out there and whether there are any interesting things we can do together. He also asked me if I knew anybody at Weta Digital. I used to work in the movie business: prior to Epic Games I was at Lucasfilm for 10 years, and before that I worked on all three of the Matrix movies.
So I pinged a couple of people and we managed to sort out a visit to Weta Digital and a tour of the workshop. It was just a casual visit to see if they would be interested in real-time computer graphics and UE4, and also to see this famous place that made all these famous movies. We ended up meeting with Alasdair there, and it turned out they had already been playing with our engine.
Alasdair Coull: We’d been exploring different technologies and when Epic Games came down, we thought it was a good time to try to put a demo together and see how well our assets and content held up in UE4.
We had just finished The Hobbit: The Battle of the Five Armies, and we took some Lake-Town assets from the mocap stage, brought them into Unreal Engine over a couple of weeks, and brought in Smaug the Dragon. It was really rough and simple, but it worked out pretty well.
Kim Libreri: When people are shooting on a motion capture stage, you see performers in suits covered with ping-pong-ball markers, and it all looks very abstract. Weta Digital has a system that lets you visualize what the movie is going to look like while you’re shooting that virtual construct.
Alasdair Coull: That experiment was to see whether we could take our real-time content, which is quite separate from games, and bring it into Unreal and see if we could merge the two worlds.
Kim Libreri: We asked Alasdair and the team, Alasdair asked Peter Jackson, and we had 9 weeks to finish the “Thief in the Shadows” Smaug Demo for GDC.
Alasdair Coull: Our two teams worked together pretty well; we talked every day.
Kim Libreri: The hardest thing about the process was actually video conferencing between New Zealand and North Carolina. For some bizarre reason, the bandwidth wasn’t particularly great. That really was the hardest thing we had to deal with [laughs].
Smaug: Film Versus VR
Alasdair Coull: The film version of Smaug was about 850,000 polys, so that was going to be too heavy to run in real time. We brought it right down, and Epic told us we could bring the count back up quite a bit because Unreal Engine handles high polycounts really well.
We ended up with 60,000 polys in the face and 35,000 for the body. As for textures, the film asset has thousands and thousands of them, so we had to bake those down. We ended up with about 6 layers for the Smaug asset.
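To make that baking step concrete, here is a minimal sketch of flattening a stack of RGBA texture layers into a single map with premultiplied-alpha “over” compositing. This is an illustration in Python/NumPy, not Weta’s actual pipeline, and the function name is ours:

```python
import numpy as np

def bake_layers(layers):
    """Flatten a list of (H, W, 4) float RGBA layers, ordered bottom to top,
    into one premultiplied RGBA map using the standard 'over' operator."""
    out = np.zeros_like(layers[0])        # premultiplied accumulator
    for layer in layers:
        a = layer[..., 3:4]               # this layer's alpha
        pre = layer.copy()
        pre[..., :3] *= a                 # premultiply the layer's color
        out = pre + out * (1.0 - a)       # 'over': top + bottom * (1 - a_top)
    return out
```

A real bake would repeat this per material channel (albedo, roughness, and so on), which is how thousands of film layers collapse into the handful of maps a game engine samples at runtime.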
Kim Libreri: With movies, when you make texture maps, you’re not really constrained in how many you can have or in their resolution. A single frame in a movie shot can take 10 hours to render on a multicore machine. In VR, at 90 frames per second, you have about 11 milliseconds to render 2 stereo images at higher-than-HD resolution. So there are a lot of optimizations you have to do to get it running in real-time.
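That 11-millisecond figure follows directly from the headset’s refresh rate; assuming the 90 Hz displays typical of VR hardware of the time, a quick back-of-the-envelope comparison shows the gap between offline and real-time rendering:

```python
REFRESH_HZ = 90                          # assumed VR headset refresh rate
budget_ms = 1000.0 / REFRESH_HZ          # ~11.1 ms to render both stereo eyes
film_frame_ms = 10 * 60 * 60 * 1000      # a 10-hour offline film frame, in ms

print(f"VR frame budget: {budget_ms:.1f} ms")
print(f"A 10-hour film frame is {film_frame_ms / budget_ms:,.0f}x over that budget")
```

The offline frame is roughly three million times over the real-time budget, which is why the assets have to be so aggressively simplified.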
Goal of the Demo
Alasdair Coull: The goal was to make something visually wonderful. We wanted Smaug to be a believable character, just like in the film, and we wanted people to feel as if he was talking directly to them in the VR experience.
Kim Libreri: It was great to show that a movie company can make this move into real-time. Our demo is a call to action for all the other studios making VR content and game engine content: the future is really super positive. So many different types of people use Unreal Engine, not only professionals but hobbyists too, and we wanted to show what is possible in the future.
Animation
Alasdair Coull: The animation was an interesting challenge because for the film, Smaug is a full muscle-and-skin simulation rig. We actually have all the bones, muscles, and skin, and they slide over each other. We wanted to recapture that in real time, so we worked with Epic Games and built some technology to do traditional joint skinning, and then we added around 200 extra joints that we mapped back onto the final simulation, so you get a pretty good representation of what the baked result would be.
Kim Libreri: If you think about it, when you move around there are a lot of complicated things happening. Your bones move, the muscles connected to them activate and pull on things, the fat moves, and the skin slides. They simulate all of that for the movie. That’s why it looks so real: it’s actually simulating a lot of what biomechanically happens in a character. We can’t do something like that in real time, so they came up with a very clever compression technique that takes that super-high-resolution pre-simulated data and brings it into the engine.
Alasdair Coull: We ran the dragon through the same animation and simulation process as we would for the film, then took that data and changed its representation so it could run in real-time.
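Weta and Epic don’t spell out the details of this compression scheme, but fitting a dense baked simulation onto a small set of extra skinning joints is a known idea (it resembles published skin-decomposition techniques). As a heavily simplified sketch, assuming the helper-joint transforms per frame are already known, one could fit per-vertex linear-blend-skinning weights to the baked vertex positions by least squares; every name below is illustrative, not from their pipeline:

```python
import numpy as np

def fit_skin_weights(rest, baked, joint_mats, k=4):
    """Fit per-vertex skinning weights so a small joint set approximates
    a baked vertex animation.

    rest       : (V, 3)        rest-pose vertex positions
    baked      : (F, V, 3)     pre-simulated vertex positions per frame
    joint_mats : (F, J, 4, 4)  world transform of each helper joint per frame
    Returns a (V, J) weight matrix with at most k influences per vertex.
    """
    F, V, _ = baked.shape
    J = joint_mats.shape[1]
    rest_h = np.concatenate([rest, np.ones((V, 1))], axis=1)         # (V, 4)

    # Where each joint alone would place every vertex, for every frame.
    per_joint = np.einsum('fjab,vb->fjva', joint_mats, rest_h)[..., :3]

    weights = np.zeros((V, J))
    for v in range(V):
        # Stack all frames and solve A @ w ~= b in the least-squares sense.
        A = per_joint[:, :, v, :].transpose(0, 2, 1).reshape(-1, J)  # (3F, J)
        b = baked[:, v, :].reshape(-1)                               # (3F,)
        w, *_ = np.linalg.lstsq(A, b, rcond=None)
        w = np.clip(w, 0.0, None)           # skinning weights can't be negative
        keep = np.argsort(w)[-k:]           # keep the k strongest influences
        sparse = np.zeros(J)
        sparse[keep] = w[keep]
        total = sparse.sum()
        weights[v] = sparse / total if total > 0 else sparse
    return weights
```

A production solver would also optimize the joint transforms themselves and enforce sparsity globally, but even this simple fit shows how a few hundred joints can stand in for a full muscle simulation at runtime.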
Difference Between Film and VR
Kim Libreri: One of the big differences between film and VR is that in VR, you don’t know where the audience is going to look. The most you can do is attract their attention with a bright light or a sound. In a movie, the audience is just looking at a screen in front of them, so when we’re telling a story we can move the camera around. If we were recreating this interview for a movie and one of us messed up a line on a take, we would cut together different moments in time: the actor repeats the line, we move the camera for the best composition, or we make the animation more dramatic by placing you in a scene that looks scarier. But if you watched all of that in linear time as an observer, you would see the dragon jumping around, and that wouldn’t make sense. Movies are an illusion; we’re just trying to teach the audience where to look.
With VR we don’t have those luxuries anymore. You can’t compress time, you can’t jump forward or backward; everything has to happen in the same time that you, as a human being, experience the virtual reality. Instead of being a sequence of small performances (shots) like in the movies, it’s just one continuous big piece of animation. That also puts strain on the simulation system. Normally it would be a few hours per simulation, but it ended up being a few days, because it’s the biggest performance Smaug has ever done.
Alasdair Coull: Take the treasure hall in The Hobbit: in the film there is not just one treasure hall. Those columns move around. If Smaug needs to rear back, we might move the columns 20 feet back for dramatic effect, just to make the shot look a little better, or we’ll change the lighting for that particular pose.
In VR you can’t cheat like that. Between every line of dialog Smaug would have jumped hundreds of meters, so we had to reanimate the whole thing so he walks around you in a natural way. In the film you’ve also got cameras that can pull hundreds of meters back; in VR, when he rears up and gets high, we can move the ceiling a little bit, whereas in the movie we would just scale it.
Evolution of VR
Kim Libreri: Now that the technology has moved on and we’ve got motion controllers, you want to be able to grab a coin, throw it at the dragon and have it go where you intend, or hide behind something in the VR experience. VR is evolving so you’ll be able to do all these kinds of things. I also think voice recognition is going to be a big part of VR; eventually somebody has to crack that. It’s an important pillar for the future of VR.
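As a toy illustration of the kind of physical interaction Libreri describes, here is a minimal sketch of a thrown coin’s flight once it leaves the player’s hand. The release velocity would come from the tracked controller at the moment of release; the names and numbers are hypothetical, not any engine’s API:

```python
import numpy as np

GRAVITY = np.array([0.0, -9.81, 0.0])    # m/s^2, Y-up world

def simulate_throw(release_pos, release_vel, dt=1.0 / 90.0, steps=270):
    """Integrate a coin's ballistic flight for ~3 seconds at 90 Hz using
    semi-implicit Euler, starting from the hand's position and velocity."""
    pos, vel = release_pos.astype(float), release_vel.astype(float)
    path = [pos.copy()]
    for _ in range(steps):
        vel = vel + GRAVITY * dt          # update velocity first...
        pos = pos + vel * dt              # ...then position (semi-implicit)
        path.append(pos.copy())
    return np.array(path)                 # (steps + 1, 3) trajectory
```

In practice an engine’s physics system handles this, but the point stands: once the player supplies the throw, the world has to respond plausibly in real time.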
VR is a really great medium for telling stories, but it’s really different. A lot of Hollywood companies are getting into VR, and it’s exciting stuff, but they’re all still learning it. What’s boring in a movie isn’t necessarily boring in VR, and what’s boring in VR can be really exciting in film. So there are a lot of new rules to learn.
People can see that if you can produce a plausible world in VR, the possibilities from an experience perspective are so different from anything that came before. I think this is the year that people will look back on and say VR became a feasible medium.
Lessons of the Video Game Industry
Kim Libreri: I think there’s a lot to learn from the games industry. With these super high-end techniques for authoring characters with muscle simulations and car simulations, I think there’s something there in terms of pre-authoring the content and being able to play it back in real-time.
Obviously our Smaug in the demo gives just one performance, but there’s no reason you couldn’t have a version of him that behaves more like a game character, where you can make him run in a particular direction, take off, and fly, by pre-authoring the shapes that are needed to produce realistic body deformations. In the future I’d love a chance to create a dynamic character that fully interacts with you and changes its performance based on the context of what’s happening in the VR experience.