How to Build a Look For the Game

Ramon Schauer showed how photogrammetry and clever materials helped him to build an amazing scene from a non-existing game.

Introduction

Hello everyone!

My name is Ramon Schauer. I am a student currently studying Animation & Game at Darmstadt University of Applied Sciences in Germany, which I will graduate from in a couple of weeks.

My main focus is on 3D environment art and lighting; however, with my last couple of projects, I have become more interested in character art as well.

During my studies, I had the chance to work at Deck13 Interactive on “The Surge” as both a 3D environment art intern and a junior environment artist, which was an awesome experience.

As this was my bachelor project, I knew that I only had a limited time frame of a little less than 3.5 months, which is why I had to choose my scope right from the beginning (looking back at it now, it was still way too much).

Knowing that creating a full game at the level of quality I wanted was not feasible, but still wanting to work on a project with more context than just an individual environment or a selection of props, I decided to go for an intro cinematic/teaser trailer for a non-existing game.

This also was a great excuse to get more insight into the cinematic capabilities of Unreal Engine 4 as I am very interested in real-time rendering and how it will affect traditional film workflows.

Two awesome sound students joined me to help out with sound and music, which really got the project to a higher level of quality. Originally I wanted to focus mainly on the environment, but as time went by the character became more and more important for what I wanted to convey.

As I am not very good at writing, I knew from the beginning that I did not want to tell a full story, but rather tease and hint at individual story elements which suggest the larger context of a full game and get the viewer interested in finding out more.

Overall, my approach to the project was based heavily on the advantages of real-time rendering. Everything was blocked out in Unreal at the beginning, where the Sequencer allowed me to fly through my scene and “find” the right shots directly in the final camera view. Using this approach, I created several animatics which I iterated on, slowly replacing all the blockout elements with final assets until I achieved the final result.

Inspiration & References 

As for the setting, I wanted to go for a fairly realistic medieval world, similar to The Witcher or Game of Thrones, which were my two main references. Other projects I looked into for inspiration were Hellblade, Kingdom Come: Deliverance, Uncharted 4, and Hunt: Showdown.

The setting was supposed to look similar to the Scottish Highlands, with the main focus on nature and only a few man-made-looking assets, indicating that the story takes place in an area which is sometimes frequented by humans but still lies outside of a village or larger settlement.

In terms of atmosphere, I wanted to go for the contrast between yellow/orange vegetation and an overcast, slightly blue sky and some light snow.

I actually tried to create an environment with a similar mood three years ago, at the beginning of my studies, but felt that I could not properly capture the atmosphere I imagined. So overall, this was also a nice little test for myself to see if I have improved in this area over the last years.

For the overall mood and colors, the work of Piotr Jabłoński was a huge inspiration, especially this one here:

The composition was supposed to have its main focus on the crooked tree with the hanging body and to read like a linear, easily readable in-game level which makes it instantly clear to the player where to go.

Since I wanted to implement photogrammetry in this project, the final composition was also influenced by the availability of assets to scan.

Environment production

Since the goal of the project was to only create an intro cinematic and the timeframe was very limited, the approach to environment production was a little different than for a usual game environment. At first, a blockout of the final scene was created along with a basic lighting setup. This blockout was then fleshed out into the final in-game shot by slowly adding assets, working from large structures like the rocks and trees down to smaller elements while constantly adjusting the lighting.

In parallel to this, the main scene was duplicated for every shot and adjusted based on what is visible on camera, creating a unique Unreal scene for every shot.

This approach gave me the possibility to adjust the lighting individually for each shot, which was necessary since skylights unfortunately do not support lighting channels right now, and some shots, such as the close-up of the face, required a totally different skylight brightness in order to look good.

Since I was working with photogrammetry, I paid close attention to the placement of the assets, especially in order to hide transitions and make assets blend together.

Capturing process

One of my personal goals for this project was to figure out an efficient pipeline for working with photogrammetry and see how much I could realise with this technique.

While I tested several different kinds of cameras, ultimately the majority of the assets were captured using a regular smartphone (Samsung Galaxy S5), as the results were good enough and it was much more flexible and more often at hand than a full DSLR camera.

For reconstruction I ultimately used RealityCapture. Agisoft Photoscan was tested as well, and both gave good results; however, I felt that the meshes created by RealityCapture were slightly more detailed and, most importantly, the processing was way faster than in Agisoft Photoscan. While the commercial license of RealityCapture is ridiculously expensive, there is a Steam version available which costs 30€/month and was well suited for this project.

Since processing still took a lot of time and I could not work on my PC in the meantime, I set up my older desktop PC with remote control through AnyDesk and used it for processing.

Using AnyDesk allowed me to remote control the PC even from a smartphone, which made it possible to constantly process assets, check the progress, and restart if necessary, even while I was working at the university.

For further processing of the scans I tried to work out a pipeline which would minimize the time needed to bring a scan into the engine as a final asset. For this, ZBrush played a major role.

At first, the final scan was brought into ZBrush for clean-up, which usually meant masking out and deleting the unwanted areas. In order to quickly get rid of the small floating pieces of geometry which sometimes appear in a scan, I used the Auto Groups and Mask By Polygroups features in ZBrush.

Once this was done, the cleaned scan was reduced to the desired polycount using ZRemesher in order to quickly get a lowpoly mesh.

Unfortunately, ZRemesher often loses the sharpness of edges and results in a blobby silhouette. In order to counter this, the remeshed mesh was subdivided again and the original scan was reprojected onto the subdivided lowpoly. Using this approach, the initial shape of the scan carries through to the lowest subdivision of the lowpoly and helps achieve a better silhouette.

The reason why ZRemesher was used instead of Decimation Master was that I wanted to make use of displacement and parallax occlusion mapping if needed, both of which work much better on an all-quad mesh. UV Master was used for the UVs in most cases, as it deals very well with organic shapes.

Baking was mostly done in Marmoset Toolbag – a common set of baked maps usually consists of the initial base color of the scan, a tangent space and an object space normal map, a height map and an ambient occlusion map.

For delighting the albedo texture, I ended up using a very simple yet effective workflow in Photoshop in combination with the baked maps.

First, I inverted the baked ambient occlusion map and used it as a mask for a levels adjustment layer in order to quickly get rid of the majority of the indirect shadowing.

After that, the individual red, green and blue channels of the object space normal map were used once again as mask for levels adjustment layers.

Since every channel corresponds to a lighting direction, this makes it possible to remove directional shadows.

The final step (which was not always necessary) was to create a cavity map and use it in the same way in order to get rid of very small cavities, resulting in a good, de-lit albedo map.
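As an illustration of this masking idea outside Photoshop, here is a minimal numpy sketch of the same steps. The file names and strength values are placeholders, and the masked_brighten function merely stands in for a masked levels adjustment layer:

```python
import numpy as np
from PIL import Image

def load_gray(path):
    """Load an image as a float32 grayscale array in [0, 1]."""
    return np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0

def masked_brighten(rgb, mask, strength):
    """Rough stand-in for a masked levels adjustment layer:
    brighten each pixel proportionally to the mask value."""
    return np.clip(rgb * (1.0 + strength * mask[..., None]), 0.0, 1.0)

# Placeholder file names for the baked maps.
albedo = np.asarray(Image.open("albedo.png").convert("RGB"), dtype=np.float32) / 255.0
ao = load_gray("ao.png")
os_normal = np.asarray(Image.open("normal_os.png").convert("RGB"), dtype=np.float32) / 255.0

# Step 1: the inverted AO map as a mask lifts the occluded areas,
# removing the majority of the indirect shadowing.
delit = masked_brighten(albedo, 1.0 - ao, strength=0.35)

# Step 2: each object-space normal channel corresponds to a lighting
# direction, so the inverted channels can lift directional shadows.
for channel in range(3):
    delit = masked_brighten(delit, 1.0 - os_normal[..., channel], strength=0.15)

Image.fromarray((delit * 255).astype(np.uint8)).save("albedo_delit.png")
```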

The roughness was created by inverting and desaturating the albedo texture and adjusting the overall brightness to match a value referenced from one of the available PBR texture charts.

In order to save memory, all grayscale textures were channel-packed, combining the roughness, height, and AO maps in one texture.
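The last two steps can be sketched in a similar way; the 0.5 target brightness below is a placeholder for whatever value the PBR chart suggests, and the file names are assumed:

```python
import numpy as np
from PIL import Image

def to_float(img):
    return np.asarray(img, dtype=np.float32) / 255.0

albedo = to_float(Image.open("albedo_delit.png").convert("RGB"))

# Roughness: desaturate and invert the albedo, then shift its mean
# brightness toward a reference value taken from a PBR chart.
rough = 1.0 - albedo.mean(axis=-1)
target = 0.5  # placeholder reference value
rough = np.clip(rough + (target - rough.mean()), 0.0, 1.0)

# Channel packing: roughness -> R, height -> G, AO -> B.
height = to_float(Image.open("height.png").convert("L"))
ao = to_float(Image.open("ao.png").convert("L"))
packed = np.stack([rough, height, ao], axis=-1)
Image.fromarray((packed * 255).astype(np.uint8)).save("packed_rha.png")
```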

For this project, I also produced several tileable ground textures.

While the main approach for this was similar, the final scan was baked onto a plane instead, and the resulting textures were taken into Substance Designer, where the “Make It Tile” node was used to create a seamless version of the texture.

In order to create the trees, a base trunk was scanned and a tileable version of the bark was created from it.

The scanned trunk was then used in combination with branches created using Speedtree, which gave me the possibility to quickly produce variations of the branches.

Unfortunately, some assets could not be created using photogrammetry, for example, the large tree whose roots had to fit exactly onto the rock.

In order to still get some value out of the previous scans, a set of alphas for sculpting in ZBrush was created from them, making it easier to sculpt assets while still matching the scanned ones as closely as possible.

Overall, using photogrammetry requires more planning and a structured breakdown of the scene beforehand, as it is very tempting to scan every cool-looking object available even though it is not needed. Instead, it is better to focus on assets which do not look too unique and can therefore be reused, such as the rocks, which worked well for creating further cliff variations from just a few scans.

Also, scanning patches of ground instead of individual small assets turned out to work very well and saved a lot of time.

Photogrammetry does not work well with very small or thin objects, such as foliage.

When I was researching this topic, I stumbled across a technique called photometric stereo capture which was a perfect fit for the project as it produced good results and is fast to set up. Photometric stereo is a technique which is able to reconstruct a base color, normal and height map from an object photographed from the same position under different lighting conditions.

This means only a set of 4-8 images needs to be captured in order to get a good, scanned result.

For this, it is important to have a light with a strong direction (e.g. a flashlight) and an otherwise completely dark room.

Furthermore, the position of the camera and object has to be exactly the same for every image, while only the light gets rotated roughly 45° for every shot.
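For anyone curious about the underlying math, the classic photometric stereo reconstruction fits into a few lines of numpy. This sketch assumes a roughly Lambertian surface and known light directions; the direction vectors and file names are purely illustrative, not the ones used in the project:

```python
import numpy as np
from PIL import Image

# Illustrative unit vectors pointing toward the light for four shots,
# with the light rotated roughly 45 degrees around the object each time.
L = np.array([[ 0.7,  0.0, 0.7],
              [ 0.0,  0.7, 0.7],
              [-0.7,  0.0, 0.7],
              [ 0.0, -0.7, 0.7]])
L /= np.linalg.norm(L, axis=1, keepdims=True)

# Grayscale shots taken from the exact same camera/object position.
imgs = [np.asarray(Image.open(f"shot_{i}.png").convert("L"), dtype=np.float64) / 255.0
        for i in range(len(L))]
I = np.stack([im.ravel() for im in imgs])        # (n_lights, n_pixels)

# Lambertian model: I = L @ (albedo * n); solve per pixel by least squares.
G, *_ = np.linalg.lstsq(L, I, rcond=None)        # (3, n_pixels)
albedo = np.linalg.norm(G, axis=0)
normals = G / np.maximum(albedo, 1e-8)           # unit surface normals

h, w = imgs[0].shape
normal_map = normals.T.reshape(h, w, 3) * 0.5 + 0.5   # remap [-1, 1] to [0, 1]
Image.fromarray((normal_map * 255).astype(np.uint8)).save("normal.png")
Image.fromarray((np.clip(albedo.reshape(h, w), 0, 1) * 255).astype(np.uint8)).save("albedo.png")
```

Tools like the ones mentioned below essentially automate this solve and add refinements on top of it.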

In order to be able to scan translucency as well, I built a simple (and extremely improvised) scanbox, which allows me to place a strong light below the object, resulting in an easy translucency map.

For processing this, I used a tool called Dabarti Capture.

Dabarti Capture is a very simple tool, which has a few good extra features such as the use of a light probe (in my case a simple lightbulb) or the ability to output a true, scanned heightmap.

Substance Designer is able to process photometric stereo as well through its “Multi-Angle to Normal” and “Multi-Angle to Albedo” nodes, but it does not provide a height map.

While this technique worked well for the project, there are still a couple of artifacts visible in the normal map due to my improvised capture setup.

I definitely plan to experiment more with this technique in the future.

Character production

The character was one of the aspects I probably spent the most time on, as I wanted to do my best to make her look as natural as possible. While I knew from the beginning that I would not be able to achieve full photorealism, I am still very happy with the final result.

In order to save as much time as possible, I used an already unwrapped basemesh created using MakeHuman, which was a great base for further sculpting and detailing.

The armor was created using a combination of Marvelous Designer, Maya and ZBrush.

For parts like the padded clothing, I made use of NoiseMaker in ZBrush in combination with a tileable heightmap I created in Substance Designer to quickly build a base for the padded cloth.

Everything else in terms of the clothing was just regular modeling and sculpting.

The hair cards were all placed by hand, and the texture for the hair strips was created using XGen and rendered out in Arnold as a base for further work.

Due to time constraints, I used a slightly more unusual approach for the skin pores.

Instead of manually sculpting them using alphas or projecting Texturing.xyz displacements, I decided to leave them out of the highpoly sculpt.

Instead, I made use of the resources provided with the Wikihuman/Digital Emily project, which include a scanned model along with displacement and albedo textures.

I used WrapX in order to wrap my unwrapped lowpoly onto the Digital Emily model.

Since the meshes now shared the same vertex positions, I could simply rebake the textures from the Digital Emily model onto my new UV layout. This gave me an excellent base for texturing, which was further adjusted in Photoshop.

Probably the most time was spent tweaking the shaders.

For this I used both the Photorealistic Character Content Example as well as the shader examples created by Sungwoo Lee as reference.

For the skin, I decided to mimic real skin by working in several layers.

The first layer consists of the baked normal map combined with the skin pores transferred from the Emily model.

On top of that, a tileable detail normal map is added to provide further detailing in closeups.

As a final layer, I added a small, tileable micro normal texture, which essentially is just a noise run through nDo. While this is barely visible, it helps break up the specular highlights and adds very subtle variation.

I also added the possibility to adjust dirt and blood on the face by using the layered material system in combination with a mask created in Quixel Suite.

Rigging the face was probably the hardest part for me, as I had never created a face rig before.

For the body I made use of the auto-rigging provided by Mixamo.

The face rig is based on the book “Stop Staring” by Jason Osipa and uses 41 blendshapes which were sculpted in ZBrush.

Other features, such as the automatic following of the eyelids, were realized using joints, once again following the techniques outlined in Stop Staring.

To further increase the details, I created a system for wrinkle maps in Unreal.

In order to create these maps, I sculpted two combinations of expressions: one for compressed and one for stretched shapes.

These were then baked to a normal and AO map.

In order to blend these accordingly, a mask was created which separates the mesh into individual zones – the forehead, mouth and nose area in my case.

These zones were then split into a left and a right side, and the intensity of each morph target was connected to a parameter which drives the blending intensity of the corresponding wrinkle map.
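The blending logic itself can be sketched outside Unreal as follows; the zone names, the sign convention for compressed versus stretched shapes, and the way the masks are combined are assumptions for illustration:

```python
import numpy as np

def blend_wrinkle_maps(base_n, compressed_n, stretched_n, zone_masks, zone_weights):
    """Blend wrinkle normal maps over the base normal map.

    base_n, compressed_n, stretched_n: (H, W, 3) normals in [-1, 1].
    zone_masks:   e.g. {"forehead_l": (H, W) mask, "mouth_r": ...} (assumed names).
    zone_weights: morph-driven weight per zone in [-1, 1];
                  positive = compressed, negative = stretched (assumed convention).
    """
    out = base_n.copy()
    for zone, mask in zone_masks.items():
        weight = zone_weights.get(zone, 0.0)
        target = compressed_n if weight >= 0.0 else stretched_n
        alpha = (abs(weight) * mask)[..., None]   # per-pixel blend factor
        out = out * (1.0 - alpha) + target * alpha
    # Renormalize so the result is still a valid normal map.
    return out / np.maximum(np.linalg.norm(out, axis=-1, keepdims=True), 1e-8)
```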

The hanging body was created in a similar way, but was much faster and simpler as a lot of the materials were already available, no face was visible and the whole character did not need any kind of rig or animation.

Final shot

In the final shot, the whole environment was supposed to get unveiled to the viewer.

Most of the assets were created through photogrammetry using the workflow described above; however, some individual assets such as the main tree or the fence were created manually in Maya/ZBrush and textured using Quixel Suite and Substance Designer.

As I was working with scanned assets, blending them together to get a cohesive look was important, especially for assets captured in different locations.

This was done mostly through two things.

The first was color correction to make all albedo textures match each other, and the second was the use of moss blended on top of the assets.

For this, I created a master material which was used for all assets and contains a number of features. Aside from setting up all the regular textures, I created a switch to turn detail texturing and displacement on and off.

To get a bit more detail, the master shader also calculates a simple specular map on its own from the AO map and a base value of 0.5.

The most complex part is the ability to automatically blend moss and snow on each asset.

Initially, the moss is layered based on the up vector of the object, always staying on top of the asset no matter how it is rotated. To be more flexible, it is also possible to paint additional moss onto the asset through vertex colors.

The moss itself was quickly created in Substance Designer by blending together a bunch of different noises and uses the FuzzyShading node in order to achieve a more “fluffy” look.

The snow works exactly the same as the moss and can be added on top of the moss layer.
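Put together, the per-pixel behavior of the master material can be roughly sketched like this. This is a simplified Python illustration rather than the actual Unreal material graph; the threshold and softness parameters are made up, and the specular line follows the AO-based trick mentioned above:

```python
import numpy as np

def saturate(x):
    return np.clip(x, 0.0, 1.0)

def layer_weight(world_normal, painted, threshold=0.4, softness=0.2):
    """Coverage from the world-space up vector plus painted vertex color."""
    up_amount = world_normal[..., 2]                  # Z is up in Unreal
    weight = saturate((up_amount - threshold) / softness)
    return saturate(weight + painted)                 # painting adds coverage

def shade(base_color, ao, world_normal, moss_color, snow_color,
          moss_paint, snow_paint):
    # Simple specular derived from the AO map and a base value of 0.5.
    specular = ao * 0.5
    # Moss first, then snow stacked on top of the moss layer.
    moss_w = layer_weight(world_normal, moss_paint)[..., None]
    snow_w = layer_weight(world_normal, snow_paint)[..., None]
    color = base_color * (1.0 - moss_w) + moss_color * moss_w
    color = color * (1.0 - snow_w) + snow_color * snow_w
    return color, specular
```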

Lighting 

Lighting is probably my favorite step of every project and involved a lot of experimentation.

As the mood I was going for was fairly overcast and even, it was hard to create a light setup which does not look flat and boring.

The core of the setup was an HDRI of an overcast sky from NoEmotion HDRs in combination with a stationary skylight at a medium intensity.

A directional sunlight was added at a low intensity to add some visual interest to the scene.

In previous projects I have always struggled with finding a good way to work with HDRIs in Unreal as they usually get downscaled by default and do not offer much control.

To work around these problems, I imported the HDRI as a .TGA with the compression setting set to “VectorDisplacementMap”, as these (at least from my understanding) do not get scaled down during the import. I then used a node called DeriveHDRfromLDR in a shader, which converts the imported image into a kind of fake HDRI but allows control over aspects such as the overall intensity and sky exposure.
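Without speaking for the exact internals of DeriveHDRfromLDR, the general idea can be approximated as follows; the gamma, intensity, and highlight parameters are illustrative knobs only:

```python
import numpy as np
from PIL import Image

def derive_hdr_from_ldr(ldr, sky_intensity=4.0, highlight_power=3.0):
    """Approximate a fake HDRI from an LDR sky: linearize the image,
    then push the brightest pixels well beyond 1.0 so the sky can act
    as a light source. Both parameters are illustrative knobs."""
    linear = ldr ** 2.2                               # undo display gamma
    boost = 1.0 + sky_intensity * linear ** highlight_power
    return linear * boost                             # bright areas exceed 1.0

ldr = np.asarray(Image.open("sky_overcast.tga").convert("RGB"), dtype=np.float32) / 255.0
hdr = derive_hdr_from_ldr(ldr)                        # float data for further use
```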

To fake some bounce light, I made use of the Lower Hemisphere Color setting in the skylight settings. When this is set to a color similar to the ground (in my case the orange of the grass) it is a great way to fake some cheap bounce light.

Since the reflections on the metal parts were still fairly boring, I used additional spotlights in a separate lighting channel to bring out more reflections.

These were adjusted for every shot.

In order to blend the environment into the sky and create more depth, I made use of the volumetric fog at a low intensity.

To enhance smaller details, a simple sharpen post process material was used in the scene.

The final step to bring out more contrast and create a more interesting color scheme was the use of a look-up table (LUT). For this, I took a screenshot of my scene and brought it into Photoshop to do some color grading.

The grading can then be exported as a LUT and used in the post process volume.
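For reference, Unreal expects the LUT as a 256x16 texture, i.e. a 16x16x16 color cube unwrapped into 16 tiles. A neutral version of this strip, which then receives the same adjustment layers as the screenshot before being exported, can be generated with a few lines of Python:

```python
import numpy as np
from PIL import Image

# Build the neutral 256x16 LUT strip: 16 tiles of 16x16 pixels,
# red along x within a tile, green along y, blue increasing per tile.
size = 16
lut = np.zeros((size, size * size, 3), dtype=np.uint8)
for b in range(size):
    for g in range(size):
        for r in range(size):
            lut[g, b * size + r] = (r * 17, g * 17, b * 17)  # 17 == 255 / 15

Image.fromarray(lut, mode="RGB").save("neutral_lut.png")
```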

Adding interface elements 

The interface was quickly put together in Photoshop and then added in After Effects later.

Since the goal was only to produce a small trailer/intro, it was not necessary to set up a fully functional UI in engine.

However, adding pieces of UI such as the skip button or the in-game HUD helped a lot in making the project appear like a full game.

Adapting quality for the real-time game

Obviously, the specs used for this project are not suitable for an actual real-time game.

I do believe that it would just be a matter of spending more time on optimization in order to make it run in real-time.

Most meshes are slightly more high poly than usual game assets, but not to the point where they would not be usable.

The main point of optimization would be on the texture side, as almost all assets use unique 4K textures. A lot could be optimized here by simply packing multiple textures together, introducing more tileable textures, and decreasing the texture sizes without losing much detail.

Currently, the scene runs at around 20 fps on a GTX 860M graphics card without any kind of optimization, so it would definitely be feasible to create a game of this quality, given enough time.

Making a game

Currently, I am not planning to create a playable game out of this.

Overall the aim of this project for me was to get deeper into the use of photogrammetry and practice my character art skills, so aside from the setting, there are no ideas for gameplay or a deeper story available right now.

While it obviously would be tempting to see a full game coming out of this, I do not think it would be feasible for me to create a full game at this quality as a single person and to commit to a single project for such a long time.

All in all this project has been by far the most challenging one I have done so far.

On one hand, I was experimenting with a lot of new workflows and techniques, but the main problem was the scope which was just way too large for 3.5 months.

As I was too stubborn to reduce the overall scope, this basically ended up in a 3-month crunch. However, I am fairly happy with the final result, and while it was challenging at times, I learned a lot of new things, especially regarding photogrammetry and photometric stereo, which I will definitely get deeper into in future projects.

In case you are interested in more breakdown material, you can take a look at my Instagram, where I documented most of the progress.

And in case you want to see more of my work, feel free to take a look at my Artstation.

Interview conducted by Kirill Tokarev.
