The Occupation: Approach to Development

We’ve talked with the amazing guys at White Paper Games about their upcoming game The Occupation and discussed their approach to environment design and content production.

Introduction

Our team is made up of 8 people:

  • Nathaniel Apostol – Audio Design and Composition
  • Robert Beard – Animation
  • Pete Bottomley – Game Design
  • James Burton – Technical Artist & Character Artist
  • Martin Cosens – Programmer
  • Oliver John Farrell – 3D Artist
  • Jonny Pickton – AI Design
  • Scott Wells-Foster – 3D Artist

We founded White Paper Games in June 2012, although we began work on our first title, Ether One, around March 2011. We released Ether with a core team of 6, and for The Occupation we brought on 2 additional people for support with Animation & AI.

The Occupation

Pete: We’re not generally the type of studio that has a big list of ‘games we want to work on’. I think the concept naturally came about when we had finished production on Ether One on PS4 and were discussing what we all wanted to work on next. It’s not really a process of identifying gaps in the market or thinking about what would work well in the current climate – it starts more with asking what everyone wants to do on the project. A few people on the team really wanted to tackle 3D characters (our previous game could have been coined ‘a walking simulator’ because of its lack of realised 3D characters). A strong narrative arc is something we want to put in all of our games – the subject of immigration, and of people doing potentially terrible acts while you can still empathise with their reasoning for doing so, was really appealing to us. From a gameplay perspective, the immersive-sim genre (Dishonored, Thief, Deus Ex) was a huge inspiration and something we wanted to try and tackle. So the game came about from the specific skillsets everyone wanted to push, and then we built the foundation on top of that. I think that’s incredibly important for a studio because when you’re investing 12+ hour days, you want to be working on something that’s pushing your own creative skillset.

Grown-up Narrative

Pete: Again, it’s not really a conscious decision to try and tackle a ‘grown-up’ narrative. Of course, we’re hugely inspired by the architecture around us and by the British writers writing about political themes – Orwell is very relevant in this current climate. But more than anything, we’re writing what we want to write. We like the idea of creating a world to explore, which means there are many narrative arcs in the game. We do have a core narrative arc and high-level story beats that we want to hit, but just as important are the small character dialogues, which may not mean much out of context but build up to a higher narrative cohesion. I think when you tackle all the smaller narrative threads first when writing a game, the larger arcs come together pretty quickly. We start out with high-level plot points so that we know the overall story, but we may not know the final details. A great example of this is ‘what’s in the file’ – in our announcement trailer for The Occupation, Bowman throws a folder across the table to Scarlet. People are asking us, ‘but what’s in that file?’ – and in all honesty, we don’t know… But that’s OK. The fact that these 2 characters are having a heated argument about something important, and that Bowman has something on Scarlet which could change the outcome, is the dynamic we’re looking for. When we’ve locked down the smaller story beats and the world is telling a story, we’ll know exactly what needs to be in the file. I think there’s a danger in locking large narrative beats too early: later in development, when the story needs tweaks to hit the beats or details you’re after, you either find yourself shoehorning in content to try and tie everything together (which ultimately never really works), or you end up re-arting entire areas and rerecording lots of dialogue, which can be incredibly costly for a small team. With this approach, we add the main points we want to hit, then tie down the smaller beats so that they form the story and the foundation for what we need in the core beats.

Art Style 

OJ: The art style of The Occupation is inspired by many different art forms – from cinematography to sculpture, painting, and even lighting – and we wanted to make sure the game’s visuals felt inviting and rich. Firstly, we looked back at the art style of our previous title, Ether One: hand-painted textures with linework, combined with low-poly environments and vibrant lighting. We were really happy with how it turned out. That art style came from limitations in team size and skill level; we needed to create something fast with a small art team that still looked good. Now, with The Occupation, we wanted to continue what worked well in Ether but push it to the next level. The hand-painted textures are back, but combined with a PBR shader system to help light react with surfaces better and give the player more feedback, as opposed to flat albedo textures. We have pushed our polycount a lot higher, giving us the opportunity to create more interesting assets and environments. The Occupation feels a lot more grounded and mature, and we wanted a colour palette that supports the narrative and themes of the game without feeling dull and depressing.

For our lighting, we take a lot of inspiration from film, as mentioned earlier, but we try to add our own twist to it. It usually begins with a main ‘hero’ light in an area, which we set to stationary. This allows us to get crisper shadows without having to bump the resolution of our meshes’ lightmaps too high, and it allows characters to get lit properly. We tend to use strong colours that reflect the textures of the surfaces in the area, simulating some light bounce manually by placing a lot of static lights that may or may not cast shadows. Once the lighting is in a good place and feels vibrant, we use post-processing and colour grading to get the final effect we want, using a cinematic camera from Unreal as the player’s main camera. Usually, this consists of pushing or pulling the gain of some colours over others depending on the area. We try to use the colour grading to hit the mood and story beats that each area should portray.
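
To illustrate what that gain push/pull can look like in code, here is a minimal UE4 C++ sketch – our own illustrative example, not the studio’s actual setup; the helper name and gain values are assumptions – that overrides only the colour gain on a post-process volume:

    #include "Engine/PostProcessVolume.h"

    // Illustrative helper: nudge an area's grade by overriding colour gain
    // on its post-process volume, leaving the rest of the grade untouched.
    void ApplyAreaGrade(APostProcessVolume* Volume, const FVector4& Gain)
    {
        if (!Volume)
        {
            return;
        }

        Volume->Settings.bOverride_ColorGain = true; // enable just this override
        Volume->Settings.ColorGain = Gain;           // e.g. (1.1, 1.0, 0.9, 1.0) warms the area
    }

Keeping the override scoped to one setting means each area’s volume can push its own mood without fighting a global grade.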

Environment Design

Pete: It’s great that it feels that way, because this definitely wasn’t the case! I think we have a very organic process when it comes to environment design. With the systemic gameplay we’re wanting to create, above all else everything in the world should have logical behaviour, and the player should understand the outcomes of all their actions. With that in mind, you need to create a believable environment – in our case, based on real-world architecture from England in the 1980s. That being said, real-world environments don’t often create interesting gameplay decisions, and if you base a game environment on a real-world location, it can often feel uninteresting; or, if the environment is interesting, it feels as though the gameplay has been placed on top rather than feeling part of the world. OJ and I go back and forth on this process – he’s great at creating these believable real-world locations, but they don’t always create interesting gameplay decisions. So more often than not, I’ll only be making small tweaks to the environment for gameplay: things like moving doors and windows, moving the furniture around – all things you’d expect in a normal game production. One of the main considerations is what we call the ‘shuffle’ spaces. We have a convergence of old architecture and new – places that existed for another purpose but have since been modernised to fit in workspaces. When you look at these buildings, the new architecture doesn’t always fit neatly onto the old, and you get spaces between them – this is our version of the ‘vents’ that are so popular in the immersive sim genre. You can get in between the wall cavities (you can see a short snippet of this in our trailer at 1:22). When we started blocking out the building, we made sure to add a 1 metre space between each floor and to give lots of room between the old and new architecture. This meant that as we were designing the spaces, if I felt we needed to connect 2 areas with something other than a room or corridor, I could open up a cavity in the wall and create a connecting space. It means that we’re keeping the believable aspect of the world intact, and the tighter spaces never feel forced.

We approached the set pieces in the game a little differently. Since these are needed to convey the high-level narrative beats of the game, the environment needs to be more planned and scripted. We return to this scene many times throughout the game, so we have to consider all possible angles of the space.

As you can see from above, we went through many iterations of the table, character, and lighting placements in the scene. I’m not sure we’re 100% final on the space layout yet, but the large structural assets won’t be changing, since we need to preserve the animation space. We only have 1 animator, so having to redo entire scenes without the use of FaceFX or motion capture isn’t a viable option for our development time. Below is the look we arrived at through iteration, using Unreal’s Sequencer tools.

Environment Building

OJ: This starts with knowing the space and time we wanted our game to exist in – 1987, Northern England. We all love the architecture in Manchester and Liverpool and wanted to ground the world in these locations. Since we work and live here, the reference images we need are right on our doorstep. We wanted to create a number of different buildings with unique architectural styles. This starts with a modular set containing a range of different wall and floor pieces, which is the foundation for blocking out the environments. These are like our canvas. We then add more and more assets to bring the scenes to life. When it comes to environment building, a lot of thought goes into where everything is placed: does it make sense? Does it feel like it belongs there? These are questions I constantly ask myself as I build.

Characters’ Look

James: I think this comes out of a fairly organic process of working with everyone on the team and discussing the overall look and feel of the characters.

Their look is driven by their personalities, which is mirrored in the environments they inhabit. The style of the characters and the environments was developed together so that everyone could be on the same page and more or less expect certain results once characters are placed in the world. A lot of the style comes from having strongly defined secondary shapes in our models, in the hope that they’d be readable from a distance. We went through a few iterations before deciding that understating changes in proportions and letting the sculpted shapes and textures do most of the work was the right call. As with the environments, everything is hand-painted and fairly rough – we tend to shy away from having surfaces that are too reflective and let the albedo textures do most of the work. On top of all this, using SSS materials to give skin a softer feel also helps a lot with making the characters feel weighted in the environment.

The toughest challenge so far has been making sure we have lighting that works for both characters and environments in a cohesive manner. We ended up using a lot of lights that only affect characters and are in tune with what is around them in the environment, just to give them that extra pop and readability.

This simple workflow as a whole is born out of limitation (for both characters and environments) – we aren’t a huge studio, so we don’t have the resources to spend ages on each character, and since I’m the only person taking characters from model all the way to the final rig, we tried to embrace our limitations and work them into the visual style of the characters themselves by keeping things simple and readable.

The images I supplied hopefully help to paint a picture of what each part of the process looks like – I’ll have to apologise for the gormless looks and open mouths – this is because I sculpt and texture my characters in “rig pose” so that when I set up my facial rig I have a much easier time painting skin weights and have a nice, blank expression to work from.

Animation

Rob: Our characters are all rigged and animated using Maya. We find that Maya works well in our workflow, and James, our Technical/Character Artist, has created a handful of custom scripts to make my life a lot easier!

When working on a new gameplay animation, we find the best thing is to get something into the engine fast, rather than worrying about the initial quality of the animation. For designers and programmers, having something to work with is a much bigger concern than how it looks. Fighting my ego on this as an animator can be difficult at times; however, it cuts down on time wasted animating something that might get scrapped if the idea isn’t carried through or changes in some way.

For our cinematics, we block out a stepped version of the animation to an audio file with just dialogue, to see the timing and framing throughout the shot. This allows us to mock together a super-fast prototype of the cinematic and make various passes with different camera shots, giving us a feel for how it will play out in the end. It also means that making tweaks to timings or poses at this stage is much easier and a lot less painful than later in the process.

As a team, when we’re happy with the final camera pass, a video is rendered out from Unreal and used as reference when I’m working on the animation in Maya. This helps with knowing when the character is on screen, what needs to be animated, and what can be omitted to save time. As the animation progresses, it can be imported and polished over and over again to see how the shot is looking until it’s final.

The final polishing pass is facial animation. For this, the audio must be finalised and locked down by NJ, our audio designer, so that no edits are made once we have started animating lip-sync. This is created in much the same way as the full-body animation: a simple pass of the mouth opening and closing is produced, like when blocking out the character’s poses, and then the finer mouth shapes are worked in from the start to the end of the animation. When animating to audio files in Maya, you’ll find that you need to render out a video of the animation (a Playblast) after editing something in order to check that the animation is in time with the audio, as scrubbing through the timeline is always a little off.

This is the first game the studio has worked on that contains characters both in-game and during cinematics, and facial animation is a whole new beast in itself. The studio’s first game, Ether One, contained around 30,000 words of dialogue, and we are expecting at least 120,000 words in-game in The Occupation. This would take months to animate by hand for each of the characters, so we have decided to use FaceFX as our solution to speed things up and get consistent facial animation. It’s super simple to use and is a solution many studios rely on, having been used in hundreds of games like Watch Dogs, Deus Ex, and Dishonored.

For anyone unfamiliar with FaceFX, you take your character and create specific mouth shapes which FaceFX recognises as matches to phonemes in human speech – for example, an open mouth shape, a “W” shape, and “P/B/M” shapes. It even includes blinks and head rotations to bring more life to your characters.

FaceFX then analyses an audio file and breaks it down into its phonemes, which it matches up to the corresponding mouth shapes. This is so fast that we could import hundreds of lines of dialogue a day with very little editing.

If you want to tweak the results and have some more control or input into the final animation, then you do have access to edit them. There is a curve editor where you can tweak the curves and timings, as well as the ability to add or remove complete words or phonemes at any point in the animation.

Taking the final animations from FaceFX and implementing them into Unreal Engine 4 is really simple with the use of a plugin that FaceFX makes. You export a single file from FaceFX that contains all the animations and audio per character. Using an Animation Blueprint in UE4, you can then call the FaceFX actor, which will play the required animation with the corresponding line of dialogue through the FaceFX plugin.

Using narrative and dialogue is a great way to help flesh out the world we’re creating and bring life to our characters and the relationships they share with each other and the player. This has already saved us weeks of work and is a fantastic solution for producing lip-sync animation en masse for a studio of any size.

Starting out 18 months ago, we were considering using motion capture for our cinematic animations, and we had the opportunity to use some hardware called Perception Neuron. It’s a markerless type of motion capture, whereby the suit has lots of small accelerometers to track the movement and rotation of each joint and feeds directly into its own software, giving you a live preview of the data being captured. It’s a much more compact and cheaper alternative for smaller studios like ours versus optical tracking systems.

We did have some issues, however, with the suit being temperamental: certain parts of the body wouldn’t be recognised, and we’d end up with elbows bending backwards and characters floating away. It’s also obviously not a one-stop solution to great animation; the clean-up process is hugely important, as it would be when polishing hand-keyed animation.

If the data you capture isn’t very clean – which often it wasn’t for us – then the work required to get great results is daunting and quite complex to work through.

We’ve currently decided not to use any motion capture in the game. This is because the performance data we were capturing wasn’t up to the standard we were hoping for, and we were looking for something different stylistically.

Materials 

James: Most of the texture work for the environments is done in Photoshop. We tend to keep things very simple and infer most of the shapes through slight changes in colour and value. Colour-wise, we try to keep things grounded and never go with overly saturated colours for most textures, allowing the lighting to inject most of the saturation and colour into the world. The softer feel of the textures comes from our roughness maps, which are edited from the albedo maps and tend to be quite rough as a whole. We try to stay away from overly shiny surfaces, since that helps to give a softer, pastel feel to materials.

Scott: As an artist, it can be difficult to come onto a project after production has begun. At my previous studio, I had used a lot of high-to-low-poly baking techniques, as well as using Quixel Suite to texture and convert normal maps. This helped me to learn the fundamentals of creating believable materials within UE4, but it also disconnected me from my existing traditional art knowledge and skills. This was mostly due to my over-reliance on “task-specific” software instead of using Photoshop to put the emphasis on hand-painting textures. When I began work at White Paper, I found this disconnect especially evident in my hand-painted texturing skills, as you can see in this example.

This was one of the first textures I painted. I was used to not having much information in the albedo (colour and some surface details only), with the roughness playing the biggest role in determining how the material would react in the world.

The problem with this style is that it’s very “noisy”, in the sense that there is not a lot of texture definition in the albedo maps, and the brushes used to create variation and texture are realistic and full of high-frequency details. It took some time to get my textures to a consistent standard whilst working alongside OJ.

I worked beside him daily, and after 2 months of deconstructing assets and getting my textures to feel consistent, I felt ready to add my art into the game. For me this was the toughest task I’ve had to learn as an artist. It exposed some of my weaknesses and fundamentally changed how I approach rendering textures.

I achieved the painting technique by using the same square brushes OJ uses, with much broader and thicker strokes. We used these to add confidence and sharpness to the structure of the object (e.g. highlights and details that are painted into the albedo). This “shape first” way of thinking also came through in the way we treated our meshes: rather than creating highly detailed and realistic models, we focus on the use of more defined silhouettes.

Here you can see a more up-to-date texture that shows a fair deal of shape definition within the albedo texture. “Noise” is added when needed – for example, in metal, to add more surface depth. For the most part, we use pastel colours mixed with other tones to create interest and to give each object its own unique hand-painted strokes.

For example, here I have used some strong brush strokes in my roughness map that will be visible when light reflects off the surface, as if someone had painted it all by hand.

This was all to align with the vision of a world that is realistic in tone but seen through the prism of a hand-painted style. We also did this to develop the art style previously seen in Ether One. Although they are separate titles, they exist in the same universe, and we wanted an evolution of the style, taking advantage of UE4’s PBR renderer in a way that emphasises the hand-painted qualities of our work.

The Usage of Unreal Engine 4

Pete: I can’t recall ever having a conversation about which engine we were going to use. I don’t think we’d be able to achieve what we can in any other engine. We made the move from UE3 to UE4 with our previous game, Ether One, which allowed us to ship on PS4. Since then, we’ve been prototyping the new game – the things we’re able to achieve with such a small team using UE4 have definitely influenced the game design. I’m a firm believer in technical limitations influencing design decisions. We’re able to get complex gameplay systems and AI online incredibly quickly without the need for C++ (at least beyond the C++ that Blueprints already wrap). We currently only have 1 programmer on the team, so their time is incredibly valuable. The main tools that have helped achieve our game’s design in UE4 are: Blueprints (obviously!), Sequencer, AI Behaviour Trees, and the new audio engine.

We knew Sequencer was in the works before we started the game, and it influenced a lot of the structural design decisions of the game. Sequencer, for those who don’t know, is a new cinematic tool that replaces the old Matinee system. It allows you to set real-world lens types, gives you great camera control options, and overall brings a level of quality to the game which was unachievable before – our entire trailer was recorded with the Sequencer camera, to give you an idea.
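
For a sense of how cinematics authored in Sequencer can be driven from code, here is a minimal UE4 C++ sketch – our own illustrative example, not from the interview; the function name is an assumption – that plays a Level Sequence at runtime:

    #include "LevelSequence.h"
    #include "LevelSequencePlayer.h"
    #include "LevelSequenceActor.h"

    // Illustrative helper: spawn a runtime player for a Sequencer asset
    // and start playback, e.g. for an in-game set piece.
    void PlayCinematic(UObject* WorldContext, ULevelSequence* Sequence)
    {
        ALevelSequenceActor* SequenceActor = nullptr;
        ULevelSequencePlayer* Player = ULevelSequencePlayer::CreateLevelSequencePlayer(
            WorldContext, Sequence, FMovieSceneSequencePlaybackSettings(), SequenceActor);

        if (Player)
        {
            Player->Play();
        }
    }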

Blueprints are the powerful visual scripting tool in UE4, built on top of C++, which I’m sure everyone knows about by now. With the gameplay approach being systemic and ‘emergent’, it requires a wide selection of gameplay systems speaking to each other in a consistent way. Allowing designers to code these systems, at least from a prototype standpoint, has sped up our process incredibly. We think a lot of these systems will stay in Blueprints. Unreal also now has a Blueprint-to-C++ converter which cooks down your Blueprints into C++ classes to make them faster.
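
As a sketch of how that designer/programmer split can work, here is a minimal, hypothetical UE4 component – the class and function names are our own, not White Paper’s code – where C++ exposes an entry point to Blueprints and each object’s actual reaction lives in the Blueprint graph:

    #pragma once

    #include "CoreMinimal.h"
    #include "Components/ActorComponent.h"
    #include "InteractableComponent.generated.h"

    // Hypothetical component: C++ owns the interface, Blueprints own the behaviour.
    UCLASS(ClassGroup=(Custom), meta=(BlueprintSpawnableComponent))
    class UInteractableComponent : public UActorComponent
    {
        GENERATED_BODY()

    public:
        // Called from gameplay code or Blueprint graphs when the player interacts.
        UFUNCTION(BlueprintCallable, Category = "Interaction")
        void Interact(AActor* InstigatorActor) { OnInteracted(InstigatorActor); }

        // Implemented per-object in Blueprints, so designers can prototype
        // reactions (open, toggle, trigger dialogue) without touching C++.
        UFUNCTION(BlueprintImplementableEvent, Category = "Interaction")
        void OnInteracted(AActor* InstigatorActor);
    };

The BlueprintImplementableEvent is what lets designers iterate on each interaction in-editor, while any system that later needs performance can be moved down into C++ behind the same interface.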

AI Behaviour Trees are AWESOME! There’s a steep learning curve to the theory of how they work, but it’s more about understanding AI concepts in general than Unreal’s implementation. Jonny, our AI designer, and I went through all the Unreal livestreams on AI systems, and we also read books such as ‘Game AI Pro’ – once we felt we had the core principles in place, we set about creating AI. We wanted something a little different from our AI; our personal benchmark was to try and hit the level of quality of Elizabeth from BioShock Infinite. The GDC talks Irrational did on this were invaluable (see Shawn Robertson’s breakdown). We need our AIs to understand the world, respond to player actions, and make interesting decisions based on their motivations, stats, and current jobs. If an NPC is getting stressed out by certain events, for example, they’ll smoke a cigarette. If they’re tired from work, they’ll go and buy a coffee from a vending machine or sit down. Using Blueprints in Behaviour Trees allows you to code small tasks and then run them in a decision tree, along with running Environment Query Systems (EQS) – this is great because you can prototype your AI by moving and arranging these ‘task’ blocks of code in different ways to get varying results from your AI. I’m sure we could write a full article on this alone, so if anyone has any questions about our approach, please do ask!
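
To make that concrete, here is a minimal, hypothetical Behaviour Tree task in UE4 C++ – the class name, blackboard key, and stress threshold are our own illustrative assumptions, not White Paper’s code – in the spirit of the ‘stressed NPC smokes a cigarette’ example:

    #pragma once

    #include "CoreMinimal.h"
    #include "BehaviorTree/BTTaskNode.h"
    #include "BehaviorTree/BehaviorTreeComponent.h"
    #include "BehaviorTree/BlackboardComponent.h"
    #include "BTTask_SmokeCigarette.generated.h"

    // Hypothetical task node: succeeds only when the NPC is stressed enough
    // to smoke; otherwise fails, so the tree falls through to another task
    // (coffee, sitting down, carrying on with the current job).
    UCLASS()
    class UBTTask_SmokeCigarette : public UBTTaskNode
    {
        GENERATED_BODY()

    protected:
        // Blackboard key holding the NPC's current stress level (0..1).
        UPROPERTY(EditAnywhere, Category = "Blackboard")
        FBlackboardKeySelector StressKey;

        virtual EBTNodeResult::Type ExecuteTask(UBehaviorTreeComponent& OwnerComp,
                                                uint8* NodeMemory) override
        {
            const UBlackboardComponent* Blackboard = OwnerComp.GetBlackboardComponent();
            if (!Blackboard || Blackboard->GetValueAsFloat(StressKey.SelectedKeyName) < 0.7f)
            {
                return EBTNodeResult::Failed; // not stressed enough - try another task
            }

            // Kick off the smoking behaviour here (play a montage, set a state flag, ...).
            return EBTNodeResult::Succeeded;
        }
    };

Because each task is a self-contained block like this, rearranging them in the tree (or swapping in a Blueprint version) changes the NPC’s behaviour without rewriting any one task.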

The team of White Paper Games

Interview conducted by Kirill Tokarev
