Simon Tremblay talked about the production of the massive environment, spanning over 65 square kilometres.
We’ve had a chance to talk with Simon Tremblay about his most recent Unreal Engine 4 scene. The peculiar thing about this project is the scope. Inspired by the Kite demo, Simon managed to build an incredibly big world filled with beautiful floral elements. It’s all made with the clever use of World Machine, Geoglyph and Substance Designer. Here’s how he did it.
My name is Simon Tremblay Gauthier. I have been creating 3D art on and off as a hobbyist for about 10 years. I was trained as a generalist, studying at Herzing College and Centre NAD in Montreal. I find all aspects of game development interesting, but I started specializing mostly in textures, shaders, lighting and FX. I only recently had my first professional experience, as I had the chance to work on the open world survival game “Rokh” as a technical artist for a few months. I am now looking for new challenges and opportunities.
Open World Project
Ever since Epic released their amazing Kite demo at GDC 2015, I’ve been fascinated by the idea of creating such a huge seamless open world. Working on “Rokh” was a great opportunity to learn more about the limitations and challenges of working on such a huge map in Unreal Engine 4. With my newly acquired knowledge, I set out to build my own open world demo as a portfolio piece. I am now about 4 weeks of work into the project. There are still a lot of things I want to add or change, but I am rather pleased with the results so far.

I have often strived for realism when creating art, but I recently started paying more attention to games opting for a more stylized approach, such as Firewatch, Journey, Overwatch, No Man’s Sky, The Legend of Zelda: Breath of the Wild, etc. I also realized early on that attempting to create such a huge world in a realistic style all by myself would be an almost insurmountable challenge. I figured going for a slightly more stylized approach would be more feasible yet still had the potential to yield impressive results.

I took a lot of cues from Firewatch’s awesome presentation at GDC 2016. Even though Firewatch was built in Unity, a lot of the concepts and ideas apply directly to a scene made in Unreal. I realized early on that I could get away with a very limited number of individual assets if I could nail the lighting, atmosphere and overall “feel” of the scene. Here are a few images from my inspiration folder:
I ended up with a terrain of 65km² split into 16 levels, using Unreal Engine’s world composition and level streaming to stitch everything together. I went through a few iterations of the terrain. I started out simply sculpting it using the Unreal Engine terrain tools, but I wasn’t satisfied with the results, so I decided I would instead use World Machine to generate the terrain’s height map. I experimented and created a few decent terrains, but I still wasn’t entirely satisfied. That’s when I started using the plugin called Geoglyph. It’s basically a collection of macros, generators and filters. It expands the already great set of procedural tools World Machine has to offer and allows for rapid creation of very believable landscape features. I found the “Neoflow” and “Reflow” macros particularly useful.

Although great, all these tools come at a cost: the final tiled build took over 8 hours to render. If I were to do it again, I would probably try to use less “expensive” generators and macros, as it slowed my workflow down quite a bit and didn’t allow me to do as many tests as I would have liked. Since World Machine’s preview only displays height maps at a maximum of 4096 x 4096px, while the final tiled build exported to Unreal Engine was going to be almost double that resolution, the results inside the engine were somewhat unpredictable.

Here is a screenshot of the final World Machine graph.

Here is the World Machine preview with flat colors representing each material.

Unreal Engine 4 is somewhat odd when it comes to terrain size, resolution and number of components. I won’t get into too many technical details, but I ended up exporting a 65.092km² terrain as 16 (4×4) height maps of 2017 x 2017px each, for a total of 8068 x 8068px. When importing terrain height maps in Unreal Engine 4, unless you resize the terrain, 1 pixel = 1 meter, giving me a terrain of 8.068km x 8.068km or… 65.092km², matching my World Machine terrain exactly.
I found this terrain resolution was sufficient for my needs, especially since I planned on using tessellation in the material. My maximum elevation in World Machine was set to 2048m, but Unreal terrains have a 512m elevation by default, so I had to scale the terrain by 400% on the Z axis on import. More information about Unreal Engine’s terrain specifications can be found here. I also exported 5 different masks in the same fashion (at 2048 x 2048px instead of 2017 x 2017px). These masks were later used as “blend weights” to determine where each texture would be applied. Exporting the terrain as a tiled landscape instead of a single texture has a few advantages:
- It allows you to import terrains with a total resolution higher than 8192 x 8192px (the maximum texture size in Unreal Engine).
- It makes it easy to set up level streaming.
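As a sanity check, the tile arithmetic described above works out in a few lines of Python (the numbers are taken straight from the text; this is just a verification sketch, not part of any pipeline):

```python
# Verify the terrain numbers: 16 tiles in a 4x4 grid, each 2017 x 2017 px,
# with 1 px = 1 m on import into Unreal Engine.
tiles_per_side = 4
tile_resolution = 2017                          # px per tile side
total_px = tiles_per_side * tile_resolution     # 8068 px per side
side_km = total_px / 1000.0                     # 1 px = 1 m -> 8.068 km
area_km2 = side_km ** 2                         # ~65.092 km^2

# World Machine's 2048 m max elevation vs Unreal's 512 m default
# explains the 400% Z scale applied on import.
wm_max_elevation = 2048.0
ue_default_elevation = 512.0
z_scale_percent = wm_max_elevation / ue_default_elevation * 100.0

print(total_px, round(area_km2, 3), z_scale_percent)
```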
Importing a tiled landscape into the engine will automatically separate each tile into a different level and place each of them in the correct position relative to the others in the world composition window. It is then possible to load and unload individual levels from memory, both in the editor and at runtime. What isn’t currently being rendered on screen doesn’t need to be calculated by the engine, granting massive performance improvements. This can be controlled either via a simple distance setting or set up manually using trigger volumes and blueprints.

Multiple “persistent levels” can also be created. I ended up with all actors that would affect the entire world inside their own separate “persistent level”: lights, fog, post process volume, sky, etc. I then duplicated this level and modified many settings to create a night time scene. I could then switch between the two lighting scenarios by simply loading the corresponding level. This also allows multiple people to work on different parts of the game at the same time instead of taking turns “checking out” the level from source control. The same level streaming technique was used in Firewatch, albeit in Unity, as explained in their GDC presentation.
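To illustrate the distance-based streaming idea, here is a toy sketch (not Unreal’s actual API; the tile size matches the terrain tiles, but the streaming range and function are invented for illustration) of deciding which tiles of a 4×4 grid should stay loaded for a given camera position:

```python
# Toy model of distance-based level streaming: keep a tile loaded when its
# centre lies within STREAM_DISTANCE of the camera. Illustrative only.
TILE_SIZE = 2017.0          # metres per tile (1 px = 1 m)
STREAM_DISTANCE = 3000.0    # assumed streaming range in metres

def tiles_to_load(camera_x, camera_y, grid=4):
    loaded = set()
    for ty in range(grid):
        for tx in range(grid):
            # centre of tile (tx, ty) in world space
            cx = (tx + 0.5) * TILE_SIZE
            cy = (ty + 0.5) * TILE_SIZE
            dist = ((camera_x - cx) ** 2 + (camera_y - cy) ** 2) ** 0.5
            if dist <= STREAM_DISTANCE:
                loaded.add((tx, ty))
    return loaded
```

Standing in a corner of the map keeps only one tile in memory, while the centre of the map needs four; everything else can be unloaded, which is where the performance win comes from.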
I also went through a few iterations when it came to the terrain’s material. The first version I created was a bit of a mess: one giant material where the placement of each texture was determined entirely inside the material editor using slope angles and height detection. The results were somewhat acceptable, and it had the advantage of automatically updating if I decided to modify the terrain geometry. The problem was, it quickly became very hard to work with, and the placement of the textures felt really “static” and unnatural.

After some research, I ended up creating a separate material function for each of the “layers” and then blended them together in a “main graph” using the blend weights exported from World Machine. This allowed me to use World Machine’s great masking options as a base while still being able to paint blend weights using the regular Unreal terrain tools. This setup also allows painting in additional layers over the initial 5 if needed. It also helped keep everything organized: each layer could be edited or swapped easily without having to deal with a single massive, overly confusing graph.

Here is one of the layers’ material function graphs:

Here is the main material’s graph so far:
A common problem with terrains is that texture tiling can be very noticeable. To alleviate this issue, I ended up modifying the textures’ coordinates based on camera distance. Textures are tiled more or less depending on the distance from the camera, allowing for bigger shapes with less apparent tiling in the distance while still maintaining appropriate texel density and detail when viewed up close. The transition is smooth and barely noticeable once set up correctly. Tiling and distance parameters are exposed for each texture and can be tweaked inside the material instance.

I also wanted to make use of tessellation and displacement, but I didn’t want to subdivide the entire terrain for obvious reasons. I ended up using the same camera distance trick to control the range at which tessellation is applied. This allowed for a highly detailed terrain in a small radius near the camera without my computer catching on fire.

A quick note regarding tessellation and terrains: by default, the tessellation multipliers of the “Flat tessellation” and “PN Triangles” shaders are capped to a certain value depending on which version of the engine you are using (capped to 15 in version 4.10+ and to 8 in older versions). Unreal terrains have a resolution of 1 pixel of the height map = 1 meter = 1 polygon. Even when using the max tessellation multiplier of 8 or 15, you might not get quite enough resolution to properly displace the mesh using a detailed height map. The usual solution would be to subdivide the mesh before importing it into the engine, but that’s not really possible with terrains. You can instead edit the shaders themselves to unlock the multiplier and get a higher number of subdivisions, as discussed in this thread. Be aware, though, that this apparently might cause issues on some AMD graphics cards. I personally own an Nvidia card and haven’t noticed any problems besides the expected performance drop caused by rendering a large number of polygons.
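The distance-based tiling trick can be sketched in plain Python (an assumed analogue of the material setup, not the author’s exact graph, with made-up fade distances): the same texture is sampled at two tiling frequencies, and the two samples are blended by a smoothstepped camera-distance factor:

```python
def smoothstep(edge0, edge1, x):
    """Standard smoothstep: 0 below edge0, 1 above edge1, smooth in between."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def distance_blend(near_sample, far_sample, camera_distance,
                   fade_start=500.0, fade_end=2000.0):
    """Blend a densely tiled sample (near) with a sparsely tiled sample
    (far) by camera distance. Fade distances are illustrative values."""
    f = smoothstep(fade_start, fade_end, camera_distance)
    return near_sample * (1.0 - f) + far_sample * f
```

Blending the two texture samples (rather than the UVs themselves) keeps the transition free of visible swimming as the blend factor changes.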
Most of the textures were created procedurally using Substance Designer. Various parameters were exposed, allowing me to generate and tweak multiple texture variations directly inside Unreal Engine. This makes tweaking textures a lot easier, since it is possible to see how the textures will render in game instantly as I make changes, instead of having to export and import between each modification. Substance Designer graph:
List of exposed parameters accessible inside Unreal Engine:
Variation examples, all generated from the same Substance graph:
The grass and flowers were baked in Substance Designer onto simple billboard planes from high poly geometry modelled in 3ds Max. The billboards were then arranged in various clumps back in 3ds Max. Since these clumps were all composed of billboard planes, they needed to have their vertex normals edited so they would react to lights properly; this is especially important for grass and frond billboards. I used a 3ds Max script called “Normal Thief” to transfer the vertex normals of a dome matching the shape of each clump. These edited clumps were then brought into SpeedTree before being exported to Unreal, simply to make use of SpeedTree’s wind and color variation features.

The trees are temporary placeholders. I took a SpeedTree example tree, lowered the poly count of each LOD for better performance and created a few variations before exporting them to Unreal. I then had to set the “cull distance” for each tree as well as the “screen size” of each LOD manually to ensure proper transitions and performance. Those trees will eventually be replaced, as I am working on creating my own from scratch, but it was a very quick and efficient way to get the scene up and running. 3ds Max screengrab of the high poly flowers ready to be baked:
Since the grass was going to be omnipresent in the scene, I went through many iterations trying to get it just right. I played with the scale, thickness and density of the grass blades/clumps for quite a long time until I was happy with the in game results.
Notice the light direction and how the billboards are not shaded properly before the vertex normals are edited. The difference might seem subtle, but once you start instancing the clumps hundreds of times around the level, the incorrect shading quickly becomes apparent.
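Conceptually, the dome-normal transfer works like this (a minimal sketch of the idea the Normal Thief script automates, not the script itself): each billboard vertex receives the normal a dome surrounding the clump would have at that point, i.e. the normalised direction away from the clump’s centre, so lighting wraps smoothly around the whole clump instead of following each flat plane:

```python
import math

def dome_normal(vertex, clump_center):
    """Return the normal a dome centred on the clump would have at this
    vertex: the normalised direction from the clump centre to the vertex."""
    dx = vertex[0] - clump_center[0]
    dy = vertex[1] - clump_center[1]
    dz = vertex[2] - clump_center[2]
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    if length == 0.0:
        return (0.0, 0.0, 1.0)  # degenerate case: point straight up
    return (dx / length, dy / length, dz / length)
```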
I used Unreal’s “grass” and “procedural foliage spawner” tools to populate the landscape. To set up the grass, I created a “Landscape Grass Type” actor and specified a few grass clump static meshes to be used. Then, in the landscape material, I sampled the grass material layer and plugged it into a “grass” node. In the node properties, I set the “Landscape Grass Type” created earlier as the “grass type”. This allows the blend weight of the grass layer to determine the placement and influence the density of the grass. The density, cull distance, scale and rotation of the grass clumps can then be set in the “Landscape Grass Type” actor.

The trees and flowers were a bit trickier. First, I created a “foliage type” actor for each asset and a “procedural foliage spawner” actor referencing each of them. Then 16 “procedural foliage volumes” (one per level) were created and positioned to cover the entire landscape. A lot of time was then spent setting parameters in each “foliage type”. Many options are available to determine how the assets will be spread around. I am still tweaking these settings, but so far I’ve found that using a very low “initial seed density” combined with a higher number of “steps” and “seeds per step” will yield a more natural looking placement of the assets. A lot of trial and error is required here to obtain satisfying results. More information about the procedural foliage spawner tool can be found here.
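The low-initial-density-plus-more-steps behaviour can be illustrated with a toy model (this is not the engine’s implementation; the function and all parameter values are invented): a handful of initial seeds each scatter new seeds nearby on every step, which produces organic-looking clusters rather than a uniform spray:

```python
import random

def spawn_foliage(area=8068.0, initial_seeds=5, steps=3, seeds_per_step=4,
                  spread=300.0, rng=None):
    """Toy foliage scatter: few initial seeds, each spawning nearby seeds
    over several steps, so growth clusters around the original points."""
    rng = rng or random.Random(42)  # fixed seed for reproducibility
    points = [(rng.uniform(0, area), rng.uniform(0, area))
              for _ in range(initial_seeds)]
    for _ in range(steps):
        new_points = []
        for (x, y) in points:
            for _ in range(seeds_per_step):
                # offspring land within `spread` metres of their parent
                new_points.append((x + rng.uniform(-spread, spread),
                                   y + rng.uniform(-spread, spread)))
        points.extend(new_points)
    return points
```

Each step multiplies the population by (1 + seeds_per_step), so a low initial count still fills out quickly while keeping the clumpy, natural distribution.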
Lighting is something I feel is often overlooked, yet it’s probably the most crucial aspect of any 3D scene. I often see aspiring artists post images with amazing potential that are absolutely killed by mediocre lighting. It simply doesn’t matter how good the modeling and texturing are if the lighting is bad; it can really make or break any scene. I didn’t have an exact concept art target when I started, but here is what I had in mind when it came to lighting: I wasn’t aiming for realism; I was more concerned about the overall “feel” of the scene. Taking cues from the inspiration images I had gathered, I knew I wanted very bright and saturated colors with a lot of contrast. I wanted the world to be inviting, somewhere you would instantly want to explore. Early on, I decided to go with fully dynamic lighting for a few reasons:
- I wanted to eventually implement a day/night cycle which simply isn’t possible with static lighting.
- I wanted to avoid dealing with light maps and long build times if possible.
- I was curious about “light propagation volume” global illumination and wanted to try using it.
Unreal Engine 4 doesn’t currently have any “production ready” dynamic global illumination solution, at least not publicly available. “Distance field” and “light propagation volume” GI are the closest things to that, but neither is officially supported, nor have they been updated in a while. Regardless, I read that LPV GI was pretty stable for outdoor scenes and decided to give it a shot. A few files and parameters need to be edited to “unlock” the feature, as it is deemed “experimental”; more information can be found here. I was rather pleased with the results. The bounce light and color bleeding it creates can yield impressive results, and many parameters can be tweaked simply by using a post process volume, granting a lot of control over the effect.

Fog is another aspect of outdoor scenes that I feel is rarely discussed and often overlooked, yet it can make a huge difference. I ended up combining Unreal’s “atmospheric fog” and “exponential height fog” to get the effect I was after. I wanted thick distance fog to clearly separate the foreground elements from the background and give a sense of scale to the scene. I also wanted to avoid the “default Unreal white fog look”. I found that by default, once you start adding quite a bit of fog, colors quickly become “washed out” and it’s hard to maintain any kind of saturation. To alleviate this issue, I used rather saturated color settings for the directional light, skylight and atmospheric fog. I also boosted the contrast and saturation of the entire scene with a post process volume. This allowed me to retain the bright, saturated colors I was after while also separating the foreground and background clearly. Thick fog also allowed me to hide the fact that most of the foliage and trees are culled in the distance for performance reasons. Post process volumes are also a great way to tweak and enhance the lighting; I spent a lot of time playing with the various options available.
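For reference, exponential height fog generally follows the textbook formula sketched below (a conceptual sketch with made-up density values, not Unreal’s exact implementation): the fog density decays exponentially with height, and the resulting fog factor grows with distance:

```python
import math

def fog_factor(distance, camera_height,
               density=0.0008, height_falloff=0.001):
    """Fraction of the fog color mixed into a fragment at a given
    distance, for a camera at a given height. Parameters are invented."""
    # density thins out exponentially the higher the camera sits
    d = density * math.exp(-height_falloff * camera_height)
    # classic exponential fog: approaches 1.0 as distance grows
    return 1.0 - math.exp(-d * distance)
```

This is why distant geometry dissolves completely into the fog color (the factor approaches 1), and why climbing a hill lets you see further: the effective density at the camera drops with height.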
Lighting only:

Detail lighting:

Final day time lighting:

The night scene is just something I put together rather quickly for fun. I really liked the “layered” look of the key art for the game Firewatch and tried to do something similar. It is still a work in progress. I intend to eventually implement an actual dynamic day/night cycle.
One of the main concerns when building such a huge world is obviously performance and optimization. Since this is only a personal project and not meant to be commercially released, I probably will not spend the time to fully optimize it, but it is still a major concern. Here are a few things I think are important to keep in mind:
- Correctly setting up LODs and culling distances on everything is crucial.
- Vegetation is particularly costly; try to keep everything as low poly as possible. Consider turning off dynamic shadows on some of the static meshes. Also try to avoid “empty transparent space” on billboard planes as much as possible, as this can cause a lot of overdraw.
- Separating the world into multiple levels and using level streaming is a must. It allows for a better workflow and better performance.
- Avoid long distance sight lines as much as possible; this will allow you to unload huge chunks of the world from memory, and geometry culled in the distance won’t be as apparent. This is something I didn’t do and wish I had. If I were to do it again, I would make sure to have more clearly defined zones with blocked lines of sight.
- Calculating dynamic shadows across a vast full resolution terrain is very costly. Consider creating a low poly proxy mesh of the terrain to cast the shadows instead. (Something I haven’t done yet, but I am considering it.)
- Don’t get bogged down in details. When working on something so big, you have to accept that you cannot paint every little part of the terrain by hand or place every little asset manually like you would in a regular scene. You have to relinquish some control to procedural tools, and it can be hard to “let go” at first. The overall “feel” of the scene is what matters most.
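The LOD and cull-distance transitions mentioned in the first point can be sketched as a simple threshold lookup (the thresholds here are illustrative values, not engine defaults): the bigger an object appears on screen, the more detailed the LOD it gets, and below a final threshold it is culled entirely:

```python
def pick_lod(screen_size, thresholds=(0.5, 0.25, 0.1), cull_below=0.02):
    """Select an LOD index from an object's normalised screen size.
    Returns None when the object falls below the cull threshold."""
    if screen_size < cull_below:
        return None                    # culled: not rendered at all
    for lod, threshold in enumerate(thresholds):
        if screen_size >= threshold:
            return lod                 # highest-detail LOD it qualifies for
    return len(thresholds)             # lowest-detail LOD before culling
```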
It was a pleasure to share a little bit of my process and workflow. You can follow the development of the project on my Artstation where I will be posting updates.