Chris Goodall, Senior Animator at Ninja Theory, talked about his work on the animation of Hellblade’s enemies and main hero.
Hellblade is the game that proved an indie studio can deliver AAA quality, but there’s one thing in particular we should all discuss: the animation. The game by Ninja Theory raised the bar for character animation, leaving gamers and developers alike speechless. Today, we are extremely happy to present our exclusive interview with Senior Animator Chris Goodall about his work on this incredible project.
My career path, I would suspect, originated in the same way as many others: After watching Jurassic Park and Toy Story as a child I was completely fascinated by what I saw. I had no idea what it was, only that it involved some kind of art and computers. Throughout school I gravitated towards these subjects, usually drawing characters on paper and then scanning them into a computer to try and colour them. Over time I learnt that there were different professions in the field I was interested in, such as Character and Environment Artists, Animators, Sound Designers etc.
In college I decided I wanted to focus on animation, and so went on to study Computer Animation and Special FX at the University of Bradford. After graduating and comparing my work with others in the industry, I had the realisation that my animation skills were pretty terrible. My degree gave me a general overview of the different areas of the 3D pipeline, such as modelling, texturing, rigging, animation etc. The problem is that I became a “Jack of all trades, master of none”. My skills were mediocre across a wide area and I really needed to focus on animation. For the year following graduation, I decided to work part-time and self-study, learning everything I could about animation: reading books, listening to podcasts and scouring forums online.
After updating my demo reel and a few failed applications, I started to work on my second demo reel. One day I was listening to the “Reanimators Podcast”, a show where a bunch of video game animators would get together and talk about the industry. On one particular episode, they had a guest on called Espen who worked at a company called Ninja Theory. I had never heard of Ninja Theory, so decided to look them up online. After finding their website and the careers page, I discovered they were advertising for an Animation Internship. My new demo reel wasn’t ready yet and I really hesitated to apply. Having nothing to lose, I decided to take a shot and see what happened. Fast forward a couple of weeks, I went for an interview and was given a position.
The internship was initially for 3 months, and I got to work on Enslaved: Odyssey to the West. During this time, I was also told that they were working on the next Devil May Cry game. This was super exciting; I was a big fan, and the possibility that I could get to work on it was amazing. After my internship ended, I was offered a position as Junior Animator, which I gladly accepted, and I’ve been there ever since!
Along with Enslaved and DmC, I got to work on some DLC content (DmC: Vergil’s Downfall), Disney Infinity 2 and 3, and most recently Hellblade.
Stylistically, Hellblade is the most realistic looking game we’ve made, and motion capture became an essential tool in achieving the realism we wanted. This was the first project where we used a heavy foundation of motion capture for gameplay, and not just cinematics. This, along with budget constraints, led us to building our own motion capture studio in the boardroom, which allowed us to capture anything we needed at any time. This, combined with having our actress Melina on hand, meant that we had a great deal of flexibility to try out new ideas.
I find that my animation tasks start with the character design, before I even get hold of the rig. What I mean by that is it’s important to try and get involved quite early with other departments in the design process to figure out if the character can physically perform what it needs to do. An example of this was with Fenrir, where after getting the rough version of the character model and creating some poses, I found things like the legs and claws were too short and small. This meant that the character couldn’t reach out in front of its head to do a strike, and had very little range. After a bit of iteration with our awesome Character Artist, Jeff, we managed to find something together that worked much better.
So, my first task is creating a few poses with the character to try and figure out the personality and physical limits. After that I’ll usually do some kind of movement cycle like a walk or a run. I find that seeing the character actually moving in 3D really gives a good sense of what we’re going for and what to expect in the game. If we get this part right then the rest usually comes together later on.
My workflow from this point is pretty common and involves a largely iterative process. The first thing I do is gather video reference, which usually involves me getting in front of a camera somewhere quiet and isolated in the office so that I can make a fool of myself in peace. This step is mainly so I have something to refer to when breaking down the mechanics of the motion I’m going for, and is in no way meant to be copied frame by frame (if you’ve seen my reference you’d know why).
The next thing is to get a rough “first-pass” version of my idea in the game engine as fast as possible so I can see if it’s going to work. As time is precious, it’s important not to waste it by taking the animation too far only to have to start all over again because it doesn’t work in the game. This gives everyone a constant working prototype that allows other parts of the pipeline to start their first pass work, such as VFX and audio.
Once this part is approved and everyone is happy, I’ll keep iterating on it and improving the animation quality.
I think a common misconception is that as an animator, your work is done when you export an animation from Maya. It is passed along to somebody else in the chain, and you may see it pop up in the engine some time later. For me, the task isn’t done until I have implemented the animation in engine and I’ve seen it, played it, and everyone is satisfied with the result. Using UE4 gives a lot more freedom and control to artists, without the need for extensive coding knowledge. The bar for entry is quite low, but at the same time it can be as complex as you need it to be. This is great because it allows me to control exactly what the end result looks like. I know the slight technical hurdle may be off-putting for some, but it has never been easier to really get in there and make something work exactly how you want. It also helps to create a better understanding of the whole pipeline, and allows me to make better animation choices by knowing the different ways I can implement something.
Other than learning the engine and pipeline, another crucial thing would be to streamline the software you’re using. This means creating useful scripts, hotkeys, and removing anything that’s a distraction so you can work more efficiently. I mean, does anyone actually use the viewcube in Maya? If you don’t use it, hide it. The plus side is that you’ll also have more screen space to see what you’re doing while animating, win!
The hardest part about animating the enemies was trying to keep them believable and consistent with the quality bar set by Senua. Unlike Senua, we couldn’t always use motion capture as a base, as the enemies were sometimes quite fantastical. Fenrir is a good example of this: a giant, part-wolf, part-boar creature. This meant that I had to use video reference wherever possible to help keep things looking realistic and grounded, but also leave room for creative freedom in order to keep things unique and interesting.
The cooperation between Melina and me usually involved me asking her to run around the motion capture stage while pretending to be on fire or something similar, and repeating actions over and over and over again until they were right. I dealt mainly with anything gameplay focussed, and Tameem would step in to direct anything more emotionally charged or important to the story.
It was challenging at times but also a lot of fun. As we were doing things on a small budget, we ended up finding creative solutions to get the results we needed. A good example of this was when we needed to capture Senua wading through chest-deep water. We didn’t have access to a pool of water, and didn’t even know if it was possible to capture someone in a pool, so we had to find a more practical solution. We ended up wrapping some giant rubber resistance bands around Melina, and got her to walk around while I pulled on them to slow her down and create resistance. The whole process was quite tiring and hilarious to watch, but we ended up with surprisingly decent results.
Another major part of Senua’s gameplay was having lots of different and unique movement cycles for the different situations she was in. We captured things like loss of sight, moving through heavy smoke, being injured, wading through water, panic, being on fire, fear etc. All these different movement contexts really added to the game and helped communicate what Senua was going through in the different parts of her journey.
Having a small development team with ambitious goals meant that the greatest challenge was always going to be how we could work smarter and faster to compensate for the deficit in resources. This meant that we were constantly refining the development pipeline, trying new ideas and looking for ways to improve our individual workflows. This allowed us to quickly implement working prototypes without investing too much time into them, and accurately assess whether an idea was something we wanted to keep pursuing or whether we should go in another direction. As challenging as this aspect was, it was also enjoyable, as it forced everyone to become much better developers.
The future of animation
I think the quality of animation in games is constantly on the rise, with improvements in technology and game engines allowing for higher and higher fidelity, closing the gap with movies quite rapidly. The biggest area that needs refining is the traditional state machine based animation networks, which can result in a spaghetti network of hundreds, sometimes thousands of small animation clips. Creating these small clips and networks is massively time consuming, so I’ve been really interested in the Motion Matching technology that Ubisoft have been developing to try and reduce this. The results look really promising and I can’t wait to see how far it goes. Also, finding ways to reuse animation assets to cut down on unique asset creation would help reduce strain on resources. In my experience, managing to do more with less seems to be a smart way to approach things.
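To illustrate the core idea behind motion matching (a minimal sketch, not Ubisoft’s actual implementation): instead of hand-authoring state machine transitions between clips, the system searches a database of captured frames every update, picking the frame whose features (such as current velocity and desired future trajectory) best match the character’s state. The clip names and feature layout below are entirely made up for illustration:

```python
import math

# Each database entry is one captured animation frame described by a
# feature vector, here: [hip_vel_x, hip_vel_z, future_dir_x, future_dir_z].
# (Clip names and values are hypothetical.)
DATABASE = [
    ("idle",       0, [0.0, 0.0,  0.0, 0.0]),
    ("walk",      12, [0.0, 1.4,  0.0, 1.0]),
    ("run",        5, [0.0, 3.5,  0.0, 1.0]),
    ("turn_left",  3, [0.3, 1.2, -1.0, 0.2]),
]

def match(query):
    """Return the (clip, frame) nearest to the query feature vector.

    Rather than following explicit, hand-authored transitions, we brute-force
    search the whole database each update and jump to the best frame.
    Production systems accelerate this lookup, but the principle is the same.
    """
    best = min(DATABASE, key=lambda entry: math.dist(entry[2], query))
    return best[0], best[1]

# The character is moving forward fast and wants to keep heading straight,
# so the nearest database frame comes from the fast forward-movement clip:
clip, frame = match([0.0, 3.2, 0.0, 1.0])
print(clip, frame)  # → run 5
```

The appeal is that adding a new movement variation means capturing more data, not wiring more transitions into an ever-growing clip graph.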