
Developing SWIFT, Part 2: Characters, Environments & Animations

The developers of SWIFT have shared a massive, 2-part breakdown, discussing all the aspects of the game. In the second part, the team has talked about environments, characters, tools they used in production, and more.


Tools

80.lv: What tools did you use during production? How did you choose the right engine? What tools did you use for characters, environments, textures, effects?

Léo-Marambat Patinote: For our game engine, we chose Unity, mainly because Nicolas and I were more experienced with it than with Unreal. Our artists preferred Unreal, as it is considered friendlier to artists.

The choice was hard, but we prioritized the comfort of the programming team, as there were some big challenges ahead of us. It was our first competitive first-person game and, more importantly, our first multiplayer game. We had been working with Unity for around 5 years, and this familiarity allowed us to quickly establish the basics like the 3Cs (camera, character, controls) and free up more time to tackle more unfamiliar topics like networking and online features.

Pierre-Louis Mabire: Unity was quite a challenge for the art team. We had only experienced it through mobile game projects and were much more used to Unreal Engine 4. We wanted to be sure we had a good, efficient workflow before fully diving into production. That took around 2 months and included creating a proper master shader, making a few tools (an auto prefab creator, a prefab swapper…), and exploring Unity's features to define our pipeline and identify potential restrictions.
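For illustration, here is a hypothetical editor tool in the spirit of the prefab swapper mentioned above, not the team's actual implementation: it replaces the currently selected scene objects with instances of a chosen prefab while preserving their transforms.

```csharp
// Editor/PrefabSwapper.cs: a minimal sketch of a prefab-swapping editor tool.
using UnityEditor;
using UnityEngine;

public class PrefabSwapper : EditorWindow
{
    GameObject prefab; // prefab that will replace the current selection

    [MenuItem("Tools/Prefab Swapper")]
    static void Open() => GetWindow<PrefabSwapper>("Prefab Swapper");

    void OnGUI()
    {
        prefab = (GameObject)EditorGUILayout.ObjectField("Prefab", prefab, typeof(GameObject), false);

        if (GUILayout.Button("Swap Selected") && prefab != null)
        {
            foreach (var original in Selection.gameObjects)
            {
                // Instantiate the prefab as a linked instance and copy the transform.
                var instance = (GameObject)PrefabUtility.InstantiatePrefab(prefab, original.transform.parent);
                instance.transform.SetPositionAndRotation(original.transform.position, original.transform.rotation);
                instance.transform.localScale = original.transform.localScale;

                Undo.RegisterCreatedObjectUndo(instance, "Swap Prefab");
                Undo.DestroyObjectImmediate(original);
            }
        }
    }
}
```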

We were a bit scared of Unity's rendering, but with HDRP (the High Definition Render Pipeline) we had the features to achieve a realistic PBR render, and Unity surprised us on a few points. The GPU light baking was way faster than UE4's, and the result was pretty.

Once the settings were finely tuned, a standard bake took around 15 minutes, and roughly twice that for a "release quality" build, which let us iterate much more than expected. To shorten the baking time (and the lightmaps) even further, we used Light Probe Proxy Volumes on our larger meshes. A proxy volume creates a 3D grid of interpolated Light Probes and lights a mesh with multiple probes at once; with enough resolution, it fakes the baked result pretty well.
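The Light Probe Proxy Volume setup is normally done in the Inspector; a minimal script-side sketch of the same idea, with an illustrative resolution value:

```csharp
// A sketch of attaching a Light Probe Proxy Volume to a large mesh from code.
using UnityEngine;
using UnityEngine.Rendering;

[RequireComponent(typeof(MeshRenderer))]
public class LargeMeshProbeLighting : MonoBehaviour
{
    [SerializeField] int resolution = 4; // probes per axis; must be a power of two (1-32)

    void Awake()
    {
        var lppv = gameObject.AddComponent<LightProbeProxyVolume>();
        lppv.resolutionMode = LightProbeProxyVolume.ResolutionMode.Custom;
        lppv.gridResolutionX = resolution;
        lppv.gridResolutionY = resolution;
        lppv.gridResolutionZ = resolution;

        // Tell the renderer to sample the interpolated probe grid instead of a single probe.
        var meshRenderer = GetComponent<MeshRenderer>();
        meshRenderer.lightProbeUsage = LightProbeUsage.UseProxyVolume;
        meshRenderer.lightProbeProxyVolumeOverride = gameObject;
    }
}
```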

We chose to bake only the GI, to keep the build size down and keep dynamic shadows. So we also used light probes to light characters and small unbaked props, which really helps integrate the characters into each location. Dynamic shadows were one of our biggest performance issues; to optimize, we went through every mesh in the scene and deactivated its shadow casting if it wasn't worth the cost.
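The team did this shadow pass by hand; purely as a sketch, a first pass of the same idea could be scripted as an editor utility that turns off shadow casting on renderers below an assumed size threshold:

```csharp
// Editor/ShadowCullingPass.cs: a hypothetical helper, not the team's actual process.
// It flags small renderers as non shadow casters; the threshold is an assumption.
using UnityEditor;
using UnityEngine;
using UnityEngine.Rendering;

public static class ShadowCullingPass
{
    const float MinShadowSize = 0.5f; // assumed minimum bounds size, in metres

    [MenuItem("Tools/Disable Shadows On Small Meshes")]
    static void Run()
    {
        foreach (var meshRenderer in Object.FindObjectsOfType<MeshRenderer>())
        {
            // Keep shadows on renderers large enough for the shadow to matter.
            if (meshRenderer.bounds.size.magnitude >= MinShadowSize) continue;

            Undo.RecordObject(meshRenderer, "Disable Shadow Casting");
            meshRenderer.shadowCastingMode = ShadowCastingMode.Off;
        }
    }
}
```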

Another question was shader editing, since Unity's built-in options were quite limited, but with the Amplify add-on we had a nice node-based editor. It works much like Unreal's but feels more fluid. We adopted a function-oriented pipeline, which let us easily tweak features across multiple shaders at the same time.

We used Blender for all the modeling and also a lot for the concepts. Most of our assets were made with trim sheets, so we used the "Ultimate Trim UV" add-on a lot: you set your sheet's size first, and then a single button press unwraps your mesh perfectly onto the desired trim. We also used the DreamUV add-on to instantly texture generic meshes.

Most of our textures were done in Substance 3D Designer; we had (nearly) no unique painted textures for the environment. Everything used generic textures blended with vertex painting and/or masks.

Characters were sculpted in ZBrush, retopologized with 3D Coat, and textured in Substance 3D Painter. All the team-colored parts were recolored in-game through shader color tweaking. 
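One common way to do this kind of runtime recoloring in Unity is a MaterialPropertyBlock writing to a color property exposed by the shader. A minimal sketch, assuming a hypothetical "_TeamColor" property (the actual shader property is the team's own):

```csharp
// Applies a team color to every renderer on a character without duplicating materials.
using UnityEngine;

public class TeamColorApplier : MonoBehaviour
{
    static readonly int TeamColorId = Shader.PropertyToID("_TeamColor"); // assumed property name

    public void Apply(Color teamColor)
    {
        var block = new MaterialPropertyBlock();

        foreach (var rend in GetComponentsInChildren<Renderer>())
        {
            // Read, modify, and write back the per-renderer property block.
            rend.GetPropertyBlock(block);
            block.SetColor(TeamColorId, teamColor);
            rend.SetPropertyBlock(block);
        }
    }
}
```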

Environments

80.lv: Could you also discuss the game’s environments? How did you design levels and use verticality? Environments in this case don’t just tell a story but rather define the action and gameplay, right? How did you use different forms and elements?

Nils Nerson: While designing the levels, we kept 3 things in mind: they had to somehow challenge the player strategically, they had to be fun and intuitive to move in, and they had to limit the players' movements without being frustrating.

One of the first things we realized when we began testing the game was that players intuitively climbed every wall and always went for the roofs to move around, as having the high ground in SWIFT is a huge advantage in combat, but is also very useful when you’re trying to chase or catch someone. Players just felt safer when they were up on the roofs.

Our map's main axes have two vertical levels: roof and floor. This is a great tool for involving players in some nice chase scenes.

The problem is that designing levels only around rooftops raises a lot of problems, both gameplay-wise and environment-wise.

Gameplay-wise, if we wanted to design levels focused on rooftops, we had to ask ourselves where those roofs come from. From buildings? Then how tall are the buildings? What happens if a player falls from a roof? Do they just have to climb back up? Then what's the point of having these dead areas below the roofs? Maybe the roofs are just floating platforms? But then what happens if a player falls? Falling into a killing void can be very frustrating, especially when every second counts, like in SWIFT.

One of the indoor LD iterations, creating gameplay situations different from those on the rooftops.

To sum up: if we made the roofs the main gameplay areas, our levels would have had a lot of dead, useless areas, or been very frustrating because of the limits we would have had to add to keep players in bounds.

We had to try our hardest to make the floor matter strategically and keep the number of accessible roofs to a minimum, to control exactly where the players could reach a high place. So we tested a lot of different layouts, to see which ones were fun to play in and where players didn’t spend all their time running on the roofs. 

This resulted in our current map. A trick we used to balance the importance of height was to build the map on a cliff, so that the rooftop level at the lowest point of the cliff is still low compared to the highest point of the cliff.

By placing small buildings and platforms, we were able to create fun little platforming paths that would be super easy to move through and would create cool environments for dynamic fights.

Alexandre-Villiers Moriamé: As a Game Artist, you have to think about how to deliver the most beautiful assets for your game while staying devoted to the game you are working on. You're not an artist making beautiful pieces of art for yourself; you are a craftsperson participating in something bigger than your personal vision, a complete experience that only works if everybody works for the game more than for themselves.

Like the level design, the environment was an iterative process of design, production, and testing in the engine, again and again… OK, in our case we struggled a lot!

Why? Because during the first half of production, we were too focused on the question "What is the aesthetic of SWIFT?". We made a lot of concepts and a lot of early assets, but we were missing the real problem!

The real question wasn't what the game would look like in terms of colors or aesthetics; it was a completely new problem for us, one at the center of the game: the parkour.

For the first time, we had to make a complex map that is more like a playground, with interiors, exteriors, and verticality; nothing like a linear level or even a more open level from a single-player game.

We had to think about how to make the environment believable when there are no closed doors, lots of openings in buildings, and strange paths all over the map. As we started to produce the environment, we realized we had to build a completely new story for its art direction. It was like starting a new project, and thanks to lots of evening discussions and hard work with Pierre-Louis Mabire, we managed to rethink the environment and clarify this point.

So here is some advice from a young, freshly graduated game artist: try to identify the challenges of your art direction before digging into random, aesthetically driven concept art.

Thanks to our early work, we didn't struggle with readability that much, since it was something we were prepared to face.


Pierre-Louis Mabire: One of the biggest challenges was making something that looked good without deviating from the level design blocking. Indeed, we had more or less 20 cm of margin, which is quite small. This pushed us toward the really blocky, rigid aesthetic you find in other competitive games such as Counter-Strike, which was the complete opposite of what we were aiming for. We wanted a rich and immersive environment, as much as a fast-paced competitive game allows it.

Work on the surface and small volumes
 
We started by making our flat walls more interesting by working on ornamentation, patterns, and geometric bas-reliefs. We used a lot of patterned metal to indicate edges, doors, and platforms; it gave an architectural homogeneity across the map, and the high specular contrast with the stone made every path way more readable.

Sand Everywhere

To anchor our map in its world and give it the feel of an ancestral city, we put sand everywhere, even inside buildings. It helped us in a few other ways too. We could paint sand on any asset or floor section that was popping out too much, and with a few sand pile meshes we were able to dress every transition between floors and walls, which really helped break the blocky look.

Add movement and responsiveness to the environment
 
Completely static images tend to look fake or unnatural. To make our environment feel alive we needed movement; the first thing we added was fabric with physics. The wind movement brought a lot of life everywhere (yes, we put fabric everywhere, just like the sand), but the thing that really made a difference was the physics. Being able to interact with the environment really helps you feel a part of it.

With the same intention, we had some destructible jars and crates, which add another immersive touch: the map doesn't look the same at the end of the game, depending on your actions.
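A hypothetical version of such a destructible prop, not the team's code: when something hits it hard enough, the intact mesh is swapped for a pre-fractured prefab. The threshold and field names are illustrative, and a Rigidbody is needed on the prop or on whatever hits it for the collision callback to fire.

```csharp
// Swaps an intact prop for its broken version on a strong enough impact.
using UnityEngine;

[RequireComponent(typeof(Collider))]
public class BreakableProp : MonoBehaviour
{
    [SerializeField] GameObject fracturedPrefab; // pre-broken version of the jar or crate
    [SerializeField] float breakImpulse = 2f;    // minimum impact strength (illustrative)

    void OnCollisionEnter(Collision collision)
    {
        // Ignore light touches so only real hits break the prop.
        if (collision.impulse.magnitude < breakImpulse) return;

        Instantiate(fracturedPrefab, transform.position, transform.rotation);
        Destroy(gameObject);
    }
}
```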

We didn't have the time to produce a lot of assets, so we tried to find generic ways to make fabrics. We made a shader that let us achieve a realistic and varied look while reducing production time as much as possible.

By combining tileable maps, trim sheet RGB masks, and vertex painting, we were able to produce a lot of fabric pieces really quickly while using only generic textures. We also colored them at runtime to indicate the teams' colors, which let us give players the option to change fabric colors to their liking for accessibility purposes.

Enrich the non-playable area

Since we were not able to add much detail or volume to the playable area without breaking the level design or the readability, we turned to the rest: the high walls, roofs, and landscape.

Characters

80.lv: Please tell us about the characters here. How did you design your heroes?

Antoine Destailleurs: For the characters, we quickly established that we wanted three different classes. All of them had to be quick to recognize, since you have to adapt your playstyle to their passives.

We went for three archetypes. An agile one, the Raider, who gets movement-based boosts and is a natural fit for Orb Capture.

An aggressive one, the Hunter, who can immediately re-attack after a parry or a kill.

And finally a warden, the Guardian, who can share enemies' locations with his teammates once he spots them.

Our main problem there was that, due to a lack of time and animators, we could only create one common base body, meaning it would be hard to recognize silhouettes because all of our characters would have the same proportions.

We still did our best to break our silhouettes by using different elements.

First, visual language. We designed all our classes around primitive shapes: squares for the Guardian, giving him a stable and robust appearance; spiky triangles for the Hunter, since he's the most aggressive class; and finally circles for the Raider.

Secondly, the cosmic trails on our characters' heads. Those were really important and part of the gameplay: we wanted the player to identify any class simply by the look of its trail. For example, the Raider has the longest trail since he's the one who's supposed to capture the orb, and having this long trail means enemies can easily track him.

The Guardian is the opposite: he's supposed to defend the team's altar and give information to his teammates. We wanted this role to be more discreet, so we gave him the shortest trail.

Thirdly, to break their silhouettes, we also used shoulder pads. It's the type of equipment that can really give more presence to any character.

So we gave the Guardian two shoulder pads to emphasize his stability, one big spiky shoulder pad to the Hunter, and no shoulder pads at all to the Raider, making him look lighter and faster.

Fourthly, we used masks, playing with their shapes and the number of eyes on them. One eye on the Raider, to emphasize his tunnel vision for catching the enemy orb; three eyes on the Hunter, echoing the triangle shapes and the aggressiveness linked to them; and finally four diamond-shaped eyes on the Guardian, a reminder that he's the character who focuses on what he sees.

And finally, we used simulated capes to break their silhouettes even more and give more fluidity to their movement and animations.

Our characters ended up being recognizable in third person. But since SWIFT uses a first-person point of view, they also had to be recognizable in first person. For efficiency, we used the same first-person arm mesh for all three characters, so we had to find a different way to tell them apart.

To do so, we designed a unique sword for each character. Since the sword is always visible, it was enough to let players know which class they were playing. Of course, we also relied heavily on the interface to help with identification.

You might have noticed that the Guardian has the same jacket as the Raider and the same pants and boots as the Hunter. He was the last character we made: we knew we were short on time, so we had to quickly redesign the Guardian using existing meshes to get him integrated into the game on schedule.

So we focused only on our main visual elements to differentiate him, using a unique set of shoulder pads, mask, trail, and cape. And it worked. It wouldn't have cut it for a professional game, but for a student production it was enough.

Animations

80.lv: How did you work on animations? How did you find the right rhythm and create different additional effects? How did you code movement to provide a smooth experience?

Nicolas Ceriani: Animations play a crucial role in SWIFT. Obviously, they help sell our universe and the fantasy of our characters being these cool, agile fighters hopping around with style. But most importantly, they're there to serve the gameplay and let players make split-second decisions by delivering clear information on the characters' states.

The three most important pieces of information to communicate were:

  • What action is this character performing?
  • Where is he heading?
  • What is he looking at? 

To answer these, we made sure to create very distinctive, sometimes even exaggerated, poses for each action. We also created 8-directional variations for each movement animation, whether running, jumping, dodging, etc. Finally, some procedural animation work let us always orient the head toward the direction of the player's camera, which means that by combining all these cues you can more or less read a player's movement and anticipate their next moves to gain an advantage during a duel.
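A minimal sketch of those two cues in Unity terms, with assumed parameter names rather than the team's actual setup: a 2D directional blend tree fed from local velocity, plus an Animator IK head look.

```csharp
// Drives directional movement clips and a procedural head look for readability.
using UnityEngine;

[RequireComponent(typeof(Animator))]
public class CharacterAnimationCues : MonoBehaviour
{
    [SerializeField] Transform lookTarget; // e.g. a point along the player's camera direction
    Animator animator;

    void Awake() => animator = GetComponent<Animator>();

    // Called by the movement code with the character's world-space velocity.
    public void SetMovement(Vector3 worldVelocity)
    {
        // Convert to local space so the blend tree picks the right directional clip.
        Vector3 local = transform.InverseTransformDirection(worldVelocity.normalized);
        animator.SetFloat("MoveX", local.x); // assumed blend tree parameter names
        animator.SetFloat("MoveY", local.z);
    }

    // Requires the "IK Pass" option enabled on the animator layer.
    void OnAnimatorIK(int layerIndex)
    {
        if (lookTarget == null) return;
        animator.SetLookAtWeight(1f, 0.2f, 0.8f); // overall, body, and head weights
        animator.SetLookAtPosition(lookTarget.position);
    }
}
```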


Since I was in charge of both animation and 3C programming, it was much easier to iterate on both aspects to nail the feel and pacing we were looking for. Still, it was quite challenging to find the right balance between gameplay responsiveness and the smoothness and elegance of the characters' movement. The main trick I used was to land the main pose in the first few frames of the animation, followed by a slower phase where the character stabilizes smoothly, which can safely be canceled by another movement input since we've already communicated the action.
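A hedged sketch of that cancel-window idea, assuming the window is expressed as a normalized-time threshold on the current animator state; the exact timing is the team's own tuning.

```csharp
// Lets an action be interrupted only after its key pose has been shown.
using UnityEngine;

[RequireComponent(typeof(Animator))]
public class ActionCancelWindow : MonoBehaviour
{
    [SerializeField, Range(0f, 1f)] float cancelAfter = 0.25f; // normalized time of the key pose (illustrative)
    Animator animator;

    void Awake() => animator = GetComponent<Animator>();

    // The movement code asks this before starting a new action.
    public bool CanCancelCurrentAction()
    {
        AnimatorStateInfo state = animator.GetCurrentAnimatorStateInfo(0);

        // Past the key pose, the remaining settle animation is safe to interrupt.
        return state.normalizedTime >= cancelAfter;
    }
}
```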

Interface

80.lv: Another key point some developers tend to underestimate is the game's interface and how it affects the final perception. How did you work on this part? How did you make sure your players have all the needed info right in front of them?

Jules-Ismaël Tien: Since the very beginning of the project, we worked on our UI and UX. You cannot fully give your best and have a good time in a competitive game if you don't know the most crucial information about the state of the current session. We quickly faced a huge difficulty: knowing what to show, and how to show it, without breaking the game's immersion or being too oppressive.

First, we chose to structure our info. The top of the HUD is dedicated to the game's state: score, timer, the state of both teams' flags, both teams' characters and powers, etc.

With our game's pace, it wasn't reasonable to keep "tactical" info (like the enemy team's composition) hidden, as that would create too much cognitive noise when trying to work out which power an enemy has. We prefer the player to focus on the moment.

That's why, in the middle of the screen, we put ephemeral info that won't bother the player for too long: cooldowns and temporary effects from powers, and, between the middle and the top of the screen, announcements that concern the game's state: flag captured, flag retrieved, scores… You need to structure your info according to its relevance at the moment it appears.

Finally, at the bottom, there's the player's personal info, like their portrait and nickname (both useful for self-recognition, but also in Spectator Mode), or their power icon and its cooldown. Every element linked to the gameplay has some animation on it: if a parameter changes, like the activation of a power, the player needs to see the HUD evolve to reflect it. But we couldn't do this by changing small details, as the player cannot afford the time to find them.

However, movement is very easy to notice, even when you're focused on something else. So, through color, scale, and position animations, we tried to ensure the player would notice that "something's happening on that side of the screen". And since we keep a clear structure, with each corner and side of the screen dedicated to feedback for one type of mechanic only (game session state, rewards, player's state), players quickly learn what's happening where.

But it took a lot of steps to implement a UI that was aesthetically pleasing, not too large, and understandable even when you're not looking directly at it.
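As a sketch of the "something is moving over there" kind of feedback described above: a quick scale pop and color flash on a HUD element. Values are illustrative, not the team's implementation.

```csharp
// Pops the scale and tints a UI element so state changes register in peripheral vision.
using System.Collections;
using UnityEngine;
using UnityEngine.UI;

public class HudPulse : MonoBehaviour
{
    [SerializeField] Graphic target;                 // icon, counter, flag widget…
    [SerializeField] Color flashColor = Color.white;
    [SerializeField] float duration = 0.25f;
    [SerializeField] float popScale = 1.3f;

    Color baseColor;
    Vector3 baseScale;

    void Awake()
    {
        baseColor = target.color;
        baseScale = target.rectTransform.localScale;
    }

    public void Pulse() => StartCoroutine(PulseRoutine());

    IEnumerator PulseRoutine()
    {
        for (float t = 0f; t < duration; t += Time.unscaledDeltaTime)
        {
            // Ease back from the exaggerated pop to the resting look.
            float k = 1f - (t / duration);
            target.rectTransform.localScale = baseScale * Mathf.Lerp(1f, popScale, k);
            target.color = Color.Lerp(baseColor, flashColor, k);
            yield return null;
        }
        target.rectTransform.localScale = baseScale;
        target.color = baseColor;
    }
}
```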

Alexandre-Villiers Moriamé: This preparatory work gave us a lot of information and saved us time on the art side. While working on the placeholder UI, we began concept art to see more clearly what the HUD and other menus would look like. We put a lot of effort into updating the UI visuals as we found problems and ways to improve.

When you work on a game that relies on communication, strategy, and reflexes, you need to put a lot of resources into the user interface. We needed to keep a visually appealing interface without compromising readability.

So the final version of the interface wasn't just about embellishing the HUD and replacing the assets. It was a very iterative process where I had to communicate closely with Jules-Ismaël to balance elegantly embellished assets against instinctively communicative visual information. The colors and shapes of the UI changed a lot during this process as the game grew. And thanks to the soberness of the environment, we had some flexibility for all of this.


Jules-Ismaël Tien: And throughout the integration of Alexandre's assets, we updated our various UI prefabs, tweaking the animations, adding VFX to them… If something happens in SWIFT, be it trivial or essential, the UI needs to throw gazillions of bits of visual feedback at you, with only slight variations in intensity. Because you always underestimate the player's capacity for not noticing what you're showing them.

Of course, vision isn't the only sense that counts when it comes to feedback. We also worked carefully on the sound design, again respecting our direction of having themes linked to the different gameplay mechanics (concerning either the game rules, the player themselves, or pure game feel). Each sound has to feel different, with intensity and tonality varying with the gravity and importance of the event. As the game is quite demanding in terms of attention and skill, we noticed that playtesters were keen to try hard, and this "focus mode" quickly pushes them onto their reptilian brain, building reflexes so that a mere glimpse of an event's sight and sound is enough for them to react.
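A hedged sketch of how event audio can scale with importance, assuming a simple 0 to 1 importance value; the team's actual mix is certainly more nuanced.

```csharp
// Plays feedback sounds whose volume and pitch scale with the event's importance.
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class FeedbackAudio : MonoBehaviour
{
    AudioSource source;

    void Awake() => source = GetComponent<AudioSource>();

    // importance in [0, 1]: 0 = trivial game-feel blip, 1 = match-deciding event.
    public void Play(AudioClip clip, float importance)
    {
        source.pitch = Random.Range(0.97f, 1.03f) + importance * 0.1f; // slightly brighter when it matters
        source.PlayOneShot(clip, Mathf.Lerp(0.4f, 1f, importance));    // louder when it matters
    }
}
```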

Of course, making the game understandable and good-feeling required a lot of quality-of-life features. The top part of the HUD evolved throughout production to reflect those improvements. Take eliminating an enemy, for example: the player is rewarded with a killfeed entry showing everyone their character, the enemy's character, both of their names, and a small personal message (linked to our rating system, a personal score) that says "You eliminated xxx: +50".

By showing that, the player gets confirmation of a lot of important information: the nickname of the eliminated player, their role and power (their character)... So they know who's left on the enemy team, what kind of new challenges they'll face while the eliminated player respawns, etc. Whenever you show a piece of information, it needs to be both pleasant and interesting to look at, just like game mechanics in general. And Unity's UI capabilities are pretty decent, so it wasn't too bad to prototype and iterate on the HUD until we got enough juice for relevant actions.

Of course, depending on our playtesters' profiles, we got a broad range of different feedback. Some want to know everything, others don't care and want immersion above all, and most of them won't notice most of the information you're displaying.

We felt the UI should be a kind of brand image for any game: coherent in gameplay, visual, and narrative aesthetics. As our game is fast, we wanted the UI to feel dynamic. At some point, after a lot of tweaks and iterations to find the right balance between conveying the gameplay and not disturbing the player's immersion, we had to trust our sense of aesthetics and lock it in.

Optimization

80.lv: Let's also discuss the technical side of things. How did you optimize your project and make everything as smooth as possible?

Léo-Marambat Patinote: From the very beginning of the project, we made sure to regularly profile our builds to monitor any performance loss and be able to act on it before we committed to anything. Here are a few techniques we used to get the most out of Unity HDRP:

  • We tried to use the minimum number of shader variants to limit shader switching, which is hard on the CPU. The rendering philosophy changed with HDRP, and having many materials is no longer taboo; the expensive operation now is changing shaders.
  • We spent a long time testing different baked occlusion culling settings to cull the most meshes possible while limiting artifacts.
  • We also batched all our static geometry, but we didn't reuse materials enough for it to be really useful, and Unity doesn't atlas textures automatically, so we had a lot of draw calls for our environment, and those took up most of the rendering time.

To mitigate this last point, we should have created merged LODs for our environments, but without an external tool like Simplygon it would have taken too much time, and at the time Simplygon didn't support HDRP. We have LODs for our props, but they account for a very small part of our frame time.

Gameplay code doesn't impact frame time at all in our case, and the assets we used for things like cloth and dynamic bones were multithreaded and really performant, so most of our optimization effort went into rendering. As with most Unity games, we were bottlenecked on the CPU side by our draw calls.
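For context, a hedged sketch of the draw-call trade-off described above: Unity's runtime static batching only pays off when the combined renderers share materials, which is exactly the reuse the team says they were missing. Names are illustrative, and meshes may need Read/Write enabled in their import settings for runtime combining.

```csharp
// Combines non-moving environment meshes at load time so renderers that share a
// material can be drawn with far fewer draw calls.
using UnityEngine;

public class EnvironmentBatcher : MonoBehaviour
{
    [SerializeField] GameObject environmentRoot; // parent of all static environment meshes

    void Start()
    {
        StaticBatchingUtility.Combine(environmentRoot);
    }
}
```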

Conclusion

80.lv: One of the main challenges was the limited budget. What decisions did you make to deliver everything in time? What was the most time-consuming task? What would you do differently next time?

Nils Nerson: SWIFT is a graduation project, so we didn't really have a budget, but we did have limited time. We quickly realized that since there were only 7 of us, we didn't have a lot of room for error; we knew every step we took had to be forward and that we couldn't really afford to scrap anything. On the design and programming side of the project, this worked pretty well since we had a fairly clear vision of what we wanted to make from the start. We just made sure to test the game a lot and to have a stable version at the end of each week.

On the art side, though, I think we spent too much time drawing concepts of what we wanted SWIFT to look like. Each concept looked great but wasn't exactly what we visualized, so we just made more to try and nail that vision. I think we spent a few months like that, not really knowing what we were going to do. I definitely wouldn't do the same if I could go back in time; I would push for our artists to start producing assets for the engine after a few concepts, because that's what we did during the last 2 months of the project, and it allowed us to iterate a lot more than all the concepts we made before.

The SWIFT Team

Interview conducted by Arti Sergeev
