Rob Nally shared some tips on the production of cel-shaded action games with Unity.
Hey, I’m Rob Nally. I’m a freelance 2D/3D artist and teacher from Philadelphia. I teach 3D modeling, game design and now, game programming, at Montgomery County Community College just outside of Philly. I’m also working on a PC and VR game called The Come Up.
I’ve always been hooked on art, drawing and video games. When I was 6 I remember playing a rented version of Ghosts ‘N Goblins for 8 straight hours and scaring my grandmother. When I was in the seventh grade, my grandmother passed away and my grandfather, who taught and lived at Villanova, gave my dad his house in exchange for a retirement stipend. That transition took us out of Philly and led my mother to work at Villanova. She kind of hated it and I knew it, but it allowed me and my sibling to go there for free and it’s a good school so we didn’t pass it up. I learned how to program there and I worked for several years after graduation as an IT guy. I grew to hate the politics of my job and I had always kind of promised myself when I went to Villanova that I would go back to school one day to study art. I feel like my career kind of started when I left IT and went to study Game Art at Full Sail. Since then I’ve worked on whatever I can to learn and stay in this field. After Full Sail, I taught at ID Tech, worked on a multiplayer PC game, worked on a mobile game for a startup, freelanced for Wolfquest and helped with a number of medical applications and animations.
About the project
My goal for the project is to try to make a great game that could educate and move people. I also just want to use the project as a learning process so I would like to take the game to conferences, put it on Kickstarter, try to get a publisher for it and submit it for competitions and awards. I’d like to make something that could hopefully bring some recognition back to the great Philadelphia gaming community and Philly in general. We’re a city that is often looked down upon and I think that has forged a chip on the shoulder for some people here, myself included.
Early UDK experiment
Game Jam with QuadraTron Games screenshot
When I started the project I was just looking to make an environment for the 3D modeling class I teach. I started in UDK and experimented with scripting and cameras there. Then I did a game jam with QuadraTron Games where we experimented with procedural generation and some abilities. Later I got a VR headset and worked with Ilja Kivikangas, who made Windlands, fleshed out more of a level and did more experiments with VR. I tried out procedural generation again, this time for VR, with Mike Leach, but run-time procedural generation in VR was too intensive, at least in the earlier days of VR. It also didn’t allow for baked lighting or global illumination. Some of the environments in the game come from those procedural generation experiments, though; they are just customized and aren’t generated in real time. The game also used to be much more of a first-person experience, both because I had started in UDK, which was primarily an FPS engine then, and because I had been developing the game for VR, which was usually a first-person experience at that point as well. But the more I tested a third-person camera and controls, the more I fell in love with them. Recently, I’ve been experimenting with PuppetMaster and trying to get that to work.
Tiles for procedural generation
Early procedural generation test
Lighting and first person testing
Post process effect testing
Level derived from procedural generation experiments and customized
Early on in the project I knew I wanted something that was centered on exploration. I thought a game based on exploration, where new abilities let you explore further and faster, would be cool. I was definitely influenced by Minecraft early on and that’s what sparked a lot of the procedural generation experiments. Creating enemies, combat gameplay and story are some of the other big things on the radar now. I’m also trying to improve player learning in the game and I have a bunch of ideas for new gameplay-centric levels and new aesthetically driven levels.
Third person camera experiment
I’ve learned to try to get from A to B as fast as possible. Once a level or game is “figured out” it can always be iterated on. If you look closely at my textures or models there are lots of problems with things like seams. Post-processing effects and a proper color palette are a great, quick way to get close to the look you want without spending a ton of time on UV’ing, sculpting and texturing.
Early post-processing testing had a much stronger depth of field, but it didn’t help the player see into the distance, and exploration is a key part of the game
For post-processing, I’m using Unity’s Legacy Effects. The new Post Processing Stack doesn’t support everything I need out of the box, like Edge Detection, so I haven’t made the transition yet. The post-processing effects I am using are:
- edge detection (I love edge detection)
- tilt shift blurring (great way to fake depth of field at times without disrupting gameplay)
- vignette darkening (kept it subtle)
- screen space ambient occlusion
- crease shading
- some color correction to desaturate
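These legacy effects are MonoBehaviour components on the camera, so they can be toggled or tuned from a script. A minimal sketch, assuming the old Standard Assets Effects package (type names like `EdgeDetection` vary by Unity version and are not from the interview):

```csharp
using UnityEngine;
using UnityStandardAssets.ImageEffects; // legacy Standard Assets Effects namespace

public class PostFxTuner : MonoBehaviour
{
    void Start()
    {
        var cam = Camera.main;

        // Keep the vignette subtle rather than disabling it outright.
        var vignette = cam.GetComponent<VignetteAndChromaticAberration>();
        if (vignette != null)
            vignette.intensity = 0.15f; // placeholder value

        // Edge detection drives the cel-shaded outline look.
        var edges = cam.GetComponent<EdgeDetection>();
        if (edges != null)
            edges.enabled = true;
    }
}
```

Because each effect is just a component, turning one off for a performance test is a one-line `enabled = false` rather than a pipeline change.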
As far as color goes, I think having a good color palette that can incorporate 5 or so colors works well for characters and environments: 3 colors could be complementary, with 2 contrasting colors. My color palette will usually come from reference gathering. I like, favorite or save images I really love from around the web pretty much daily and I’m pretty strict with the curation.
Early on my friend Jon Criner impressed on me how important lighting is to making something look good. One key is to have your ambient color be within your color palette. For getting colors correct, I will also switch back and forth between 2D and 3D a lot. If I’m having trouble figuring out a color scheme in 3D I’ll switch to Photoshop to paint over a screenshot and try to “figure it out”.
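Keeping the ambient color inside the palette can be set straight through Unity’s `RenderSettings`. A small sketch, assuming a flat ambient mode; the color values here are placeholders, not the game’s actual palette:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class PaletteAmbient : MonoBehaviour
{
    // A palette color authored for this level (illustrative value only).
    public Color ambientFromPalette = new Color(0.35f, 0.40f, 0.55f);

    void Start()
    {
        // Use a single flat ambient color instead of a gradient or skybox ambient,
        // so the ambient term stays inside the level's palette.
        RenderSettings.ambientMode = AmbientMode.Flat;
        RenderSettings.ambientLight = ambientFromPalette;
    }
}
```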
Color development process for one of the levels
Open world experiments
When I first started making environments in school I put way too much time into modeling up front and not enough time into testing, texturing, lighting and effects. Over time I’ve learned to try and correct that.
With this project, I started making environments with visuals and compositions in mind. The feedback on the visuals has been great, but that came at the cost of gameplay and user flow. After designing a bunch of levels for aesthetics I started to design levels for gameplay. When I’m making a level that started in Maya I try to get it into Unity pretty quickly to test out scale and feel. This usually means importing a whole level as one FBX file with many different lower-level models and putting a mesh collider on everything to test it out. Once I’m happy with the feel and scale, I’ll start to replace the models in the giant FBX import with modular assets. Those modular assets can then be used later on to make more gameplay-centric levels in Unity. Throughout the whole process I’ll switch back and forth between Maya, Unity and Photoshop. If I run out of ideas in a concept sketch, they can often be answered in 3D and vice versa.
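The blockout step described above, importing one big FBX and putting a mesh collider on everything, can be scripted so it doesn’t have to be done by hand. A sketch, assuming the whole level import is parented under a single GameObject:

```csharp
using UnityEngine;

public class BlockoutColliders : MonoBehaviour
{
    void Awake()
    {
        // Walk every mesh in the imported FBX hierarchy and give it a
        // MeshCollider so scale and feel can be play-tested immediately.
        foreach (var filter in GetComponentsInChildren<MeshFilter>())
        {
            if (filter.GetComponent<MeshCollider>() == null)
            {
                var col = filter.gameObject.AddComponent<MeshCollider>();
                col.sharedMesh = filter.sharedMesh;
            }
        }
    }
}
```

Since the colliders are only for testing, they can be removed once the big import is replaced with modular assets that carry their own collision.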
When it comes to environment modeling I am usually thinking about composition. I’m thinking about pathways for the player and making nice compositions both along those pathways as well as from a distance. The Principles of Design are also a help, and along with my goal of making more vertical levels, they push me to play with scale and dramatically increase it.
Composition creation tips
First sketch completed for the game showing some pathways for the player and area ideas.
More advanced player pathway map later in development
When I was working on the levels I knew I wanted them to work with abilities that gave the player more height and I knew there would be a strong platforming element to the game. I really tried to avoid floating platforms, overuse of caves or anything that too closely resembled the ant farm kind of effect that you often see in 2D games where the rock is sliced in half and the player just sees a repeating pattern. I wanted the things that you would jump, swing, jetpack or glide on to make sense in the world. That led to some of the really tall trees, rocks and overhangs.
Swamp level in the game with tall trees, rocks and overhangs
In terms of the assets themselves, they are really modular to try and cut down on creation time. There are basically 6 trees in the game. I modeled them in Zbrush using Zspheres, and with any modular object I really tried to make it work both aesthetically and as a gameplay platforming object. I also knew that the game would be for VR, so I kept the individual polycount of the objects as low as possible, rather than giving the objects a higher polycount and trying to get more shape out of each object. That let me duplicate objects a bunch and fill more of the world.
Concept sketch for modular tree pieces, colored in Photoshop
Modular 3D trees modeled in Zbrush using Zspheres, optimized later in Maya
Working on skies
The clouds are a combination of 3D models and particle effects. There are only 3 3D meshes for the clouds. One of them was even repurposed to be a bush to keep things moving quickly and modular. The meshes were modeled in Zbrush using Zspheres again, which are a great tool for getting really organic, especially more spherical, shapes. The meshes are marked static in Unity and receive lighting and shadows, which I think helps the look a lot. They are also scaled up to a bunch of different sizes just like the trees, duplicated, and placed strategically for composition and light pools.
3 meshes within Unity were used to create the clouds here
The particle effects come from Unity’s Standard Assets dust storm prefab. The effect is tweaked a bit to change the size and color and it is also duplicated a couple of times. I also have a hand-painted skybox in the back with a shader on it that allows the skybox to not receive fog. Certain levels also have a volumetric fog on them in addition to Unity’s built-in fog. I’m using an older version of David Miranda’s Fog Volume, and I’m using the Exponential Squared mode for Unity’s built-in fog.
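Unity’s built-in fog in Exponential Squared mode is a per-scene `RenderSettings` toggle. A minimal sketch, with placeholder density and color values rather than the game’s actual settings:

```csharp
using UnityEngine;

public class LevelFog : MonoBehaviour
{
    void Start()
    {
        // Built-in fog; ExponentialSquared falls off faster with distance
        // than Linear or Exponential, which suits hazy, atmospheric levels.
        RenderSettings.fog = true;
        RenderSettings.fogMode = FogMode.ExponentialSquared;
        RenderSettings.fogDensity = 0.01f;                    // placeholder
        RenderSettings.fogColor = new Color(0.7f, 0.8f, 0.9f); // placeholder
    }
}
```

The hand-painted skybox sits behind all of this with a custom shader that simply ignores the fog, so the horizon stays readable.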
Unity’s Dust Storm particle emitter duplicated and tweaked
Over the last year or so I’ve been slowly testing printing some of the images I’ve made. The static cloud pictures have been painted up some in Photoshop for printing purposes.
Photoshopped screenshots for printing
The animations really come from a bunch of different places. I’m using a number of Unity Asset Store assets. The backbone of everything is really Opsive’s Third Person Controller or TPC. TPC provides camera control settings and an ability system setup. There are default abilities as well as abilities that can be added. The setup connects to Unity’s Input Manager, but Rewired, another Unity Asset Store asset, is being used to manage inputs. When an input is registered, an ability script runs and calls a defined state in Unity’s Animator. There are a lot of states, especially considering that each state has an upper- and lower-body layer to control the upper or lower half of the character, and there are often substates and scenarios for different abilities, but everything is well organized, so finding things and editing them is pretty easy once you learn it. The setup is also great in that it doesn’t require a ton of transition links between states and there are options for tweaking the speed of transitions.
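The flow described above, where a registered input runs an ability script that calls an Animator state on a body layer, could be sketched roughly like this. This is an illustrative stand-in, not Opsive’s actual API; the state name, button name and layer index are all made up:

```csharp
using UnityEngine;

public class GlideAbility : MonoBehaviour
{
    public Animator animator;
    public string stateName = "Glide"; // hypothetical Animator state
    public int upperBodyLayer = 1;     // layer controlling the upper half

    void Update()
    {
        // Stand-in for a Rewired action check.
        if (Input.GetButtonDown("Jump"))
        {
            // Cross-fade into the ability's state instead of wiring an
            // explicit transition link, which keeps the state graph sparse.
            animator.CrossFade(stateName, 0.15f, upperBodyLayer);
        }
    }
}
```

Calling states by name like this is also why few transition arrows are needed between states: the blend duration passed to `CrossFade` plays the role of a per-call transition.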
Unity’s Animator window showing the animation or ability states for the character
Ability settings in Unity
For the animations themselves, I’m using animations from Unity’s Standard Assets and third-party plug-ins like Opsive’s Third Person Controller and Blueisland A’s Dynamic Sword Animset, which is amazing. I did some animations myself in Maya, and Ryan McMahon did some as well. Ryan is a Drexel student who has been helping me with programming and setting things up and he’s been invaluable. I’ll usually test out a pose first to see how well it functions and looks in game. If a pose isn’t working then I don’t want to spend a bunch of time animating it, but if it is then it’s worth the time investment.
Pose I thought was interesting, but didn’t really work well in game.
Root Motion’s Final IK has also provided some awesome help in terms of adapting the character’s lower body to the curved surfaces of trees and other meshes in the game. Lastly, TPC’s camera settings and Ryan’s work to allow for all of our different camera states are also clutch. The player can do a lot with the camera, in and out of VR. They can pull the camera way out and get a nice, wide view of the environment, or view everything in VR as if they are gigantic and playing with 3.5’’ GI Joes, with a bunch of other options in between.
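Final IK handles this adaptation in the game, but the core idea of conforming a foot to a curved surface can be illustrated with a simple raycast against the surface’s collider. This is a simplified stand-in, not Final IK’s implementation:

```csharp
using UnityEngine;

public class SimpleFootAlign : MonoBehaviour
{
    public Transform foot;       // the foot bone to adjust
    public float rayHeight = 0.5f;

    void LateUpdate() // after animation so the pose is already applied
    {
        // Cast down from just above the animated foot position.
        var origin = foot.position + Vector3.up * rayHeight;
        if (Physics.Raycast(origin, Vector3.down, out RaycastHit hit, rayHeight * 2f))
        {
            // Snap the foot to the surface and rotate it so its up vector
            // matches the surface normal, e.g. a curved tree trunk.
            foot.position = hit.point;
            foot.rotation = Quaternion.FromToRotation(foot.up, hit.normal) * foot.rotation;
        }
    }
}
```

A full IK solution also bends the knee and hip chains so the leg reaches the adjusted foot naturally, which is the hard part Final IK takes care of.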
Animation pose in Maya
Root Motion’s Final IK adjusting that pose to meet a curved surface in Unity
Building Games in Unity
For the most part, Unity is amazing and I love it and prefer it to Unreal. The implementation is very clean and easy to understand, the community is great, the documentation is fantastic and the Asset Store is a great resource. In regards to the Asset Store, there is so much work and experience put into some assets that it sometimes wouldn’t make sense to make it yourself. I’m using a lot of plugins, some of which I mentioned before like Rewired, Third Person Controller, Dynamic Sword Animset, DunGen, Fog Volume 2, Final IK, and PuppetMaster. I’m using a ton more and I have plans to keep trying out new and different assets that I think can help. The Asset Store is not only a time and money saver for independent developers; it gives me access to things I probably never would have been able to get into the game on my own. This is a link to my Asset Store Wishlist for those who would like to see what I have, what I like and what’s on my radar.
I do have to say that Unity can be quite frustrating at times because of its instability in recent years. I’ve learned to try and stay on a single build if it is functioning alright, and updating, especially with all of the assets I am using, is something that needs to be planned out because some things are guaranteed to break. The lighting engine update from Beast to Enlighten, in particular, was a challenge and I think caused a few people to switch from Unity to Unreal. For me personally, lighting times have gone up significantly and the change came with detriments like lighting errors, noise, seams and an overall lack of quality.
Robert Nally, 3D Artist
Make sure to follow Robert on Twitter.
Interview conducted by Kirill Tokarev.