
Creating Facial Expressions in Wrap 3 for Games

Darya Chainikova from RainStyle prepared an illustrated step-by-step guide to creating facial expressions with face scanning technology and Wrap 3.3.

 

Introduction

Hello! My name is Darya Chainikova, and I’m a senior 3D artist at RainStyle production studio. We are currently developing a sci-fi horror game called On Air. It tells the story of the mysterious events that happen to the main character, who by chance ends up staying at the American hotel ALGOL. This incident will completely change his life and force him to face the unknown.

In our project, we try to make the characters as realistic as possible and convey their emotions and personality, which is why we decided to use face scanning technology to create them. The project requires two levels of character detail: a high-detail version rendered for cinematics and a low-poly version used in the game itself.

The game is being developed in Unreal Engine 4. We’re planning to write several articles explaining the stages of character creation in detail. In this article, I will describe the process of creating the main character’s face. Let’s begin!

Face Scanning

The image below shows some of the expressions we received after processing the face scanning results:

They are mostly exaggerated and do not look realistic. This approach to building a face is based on the Blend Shape feature, which means that each of these expressions will be used not on its own but in combination with the others.

Check out this video by Jason Baskin reviewing the feature in Maya:

First, we need to figure out what to do with the scanned head and how we can use it. Obviously, the topology of the scan does not allow us to use it directly as a model, but we can extract the basic features of the face, wrinkles, scars and, of course, the texture. Wrap 3.3, a program widely used for processing scanned models, will help us project our future head model onto the scans.

Basemesh

First, we need to create a base mesh: a neutral facial expression that will serve as the basis for all the other expressions. Our scan with the neutral expression looks like this:

Wrap 3.3 has a node system where all functions are represented as nodes connected to each other in a chain. A node is added by pressing Tab. The scan with the neutral expression and its corresponding texture are loaded with the LoadGeom and LoadImage nodes; for each of them, the path should be specified in the File Name panel. In the LoadGeom node settings, you can select different geometry display modes (wireframe is disabled in the image below). You can also change the geometry color (it has no effect if a texture is attached), coordinates, and scale.

Next, you need to load the basic mesh that will be projected onto the scan. You can find it in the Gallery tab, where many ready-made models are available; I use Basemesh in this case.

The Basemesh and the scan should be roughly aligned with each other. Use the coordinate axes to do this.

I use the Wrapping node to project the base mesh onto the scan. When you hover the mouse over a node, tooltips are shown. The first input (Floating geometry) is for the geometry being projected, and the second (Fixed geometry) is for the fixed object onto which the geometry is projected (in this case, the scan with the neutral expression).

To run the calculation, you need to mark corresponding points on both models. To do this, use the Select points node, which connects to the respective model nodes. With this node selected, go to the Visual Editor tab. The Sync function synchronizes the camera views of the left and right panels.

Selecting Polygons

If the projected model contains polygons that must stay in place, you need to select them using the Select Polygons node. Otherwise, artifacts may appear on the inward-facing details of the mesh.

The Select Polygons node is also used from the Visual Editor tab. For models from the gallery, the polygons are divided into subgroups, which simplifies the task: whole groups can be selected or deselected from the top panel.

I’m highlighting the inner parts of the throat, nostrils, eyelids and the lower part of the neck:

After that, the model will be projected correctly:

For better detail, you can also use the Subdivide node. It connects to the base mesh, and all nodes previously associated with the base mesh connect to its Output instead.

Texture Baking

Next, we need to bake the texture from the scan onto the base mesh. The Transfer Texture node is used for this purpose. Its Source geometry input connects to the model the texture will be baked from (in our case, the scan). The Target geometry input connects to the base mesh, but this time not to the LoadGeom node: it connects to the Wrapping node, since we want to use the modified, projected geometry.

In the settings of this node, we can choose the quality of the baked texture. After that, we add the Extrapolate Image node to fill in the transparent areas and save the generated texture using a Save Image node and the Compute current frame button. The geometry is saved in the same way with a Save Geom node.
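Conceptually, the transfer comes down to asking, for every point of the wrapped base mesh, which point of the scan surface it now sits on and what colour the scan's texture has there. Below is a very rough Python sketch of that idea; the trimesh/SciPy/Pillow libraries and the file names are assumptions for illustration, and it samples the nearest scan vertex per base-mesh vertex rather than baking a full texture per texel the way Wrap does:

```python
# Simplified illustration of texture transfer: copy, for every vertex of the
# wrapped basemesh, the colour of the nearest scan vertex. This is only a
# nearest-vertex approximation of what a proper per-texel bake does.
import numpy as np
import trimesh
from scipy.spatial import cKDTree
from PIL import Image

scan = trimesh.load("scan_neutral.obj", process=False)        # hypothetical paths
base = trimesh.load("basemesh_wrapped.obj", process=False)
scan_tex = np.asarray(Image.open("scan_neutral_texture.png"))

# Sample the scan texture at each scan vertex through its UV coordinates
h, w = scan_tex.shape[:2]
uv = np.asarray(scan.visual.uv)                                # (n, 2), values in [0, 1]
px = np.clip((uv[:, 0] * (w - 1)).astype(int), 0, w - 1)
py = np.clip(((1.0 - uv[:, 1]) * (h - 1)).astype(int), 0, h - 1)
scan_vertex_colors = scan_tex[py, px]

# For every basemesh vertex, find the nearest scan vertex and copy its colour
tree = cKDTree(scan.vertices)
_, nearest = tree.query(base.vertices)
base_vertex_colors = scan_vertex_colors[nearest]
```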

Working on Different Emotions

Once we’ve got the neutral expression and its texture, we can move on to producing geometry for the other expressions. Load the necessary scan and the Basemesh along with their textures.

Now align the Basemesh with the scan. To start with, you can position the models approximately using the coordinate axes.

Rigid Alignment is a node that lets you align the models with each other more accurately. Its Floating geometry input connects to the geometry you want to align, and its Fixed geometry input connects to the static geometry (the scan in our case).

When using the Rigid Alignment node, you also need a Select points node:

My model has control markers that make it easy to select the points. To align the models, the points should be placed on the parts of the face that change only slightly for the particular expression; for this one, they are the forehead, the top of the nose, the neck, and so on. It is also advisable to place points on all sides of the model (for example, not only on the face but also on the back of the head and the ears).

As a result, we obtain models aligned with each other:

If you need to scale the model, enable the Match scale option in the Rigid Alignment settings.
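For those curious about the math behind it: from the pairs of selected points, a rigid alignment finds the rotation, translation and (with Match scale) uniform scale that best map the floating geometry onto the fixed one, which is the classic Kabsch/Procrustes problem. A minimal NumPy sketch of that idea follows; it is only an illustration, not Wrap's actual implementation:

```python
import numpy as np

def rigid_align(src, dst, match_scale=False):
    """Best-fit rotation R, translation t and optional uniform scale s
    such that dst ~ s * R @ src + t.
    src, dst: (n, 3) arrays of corresponding points (the selected point pairs)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - src_c, dst - dst_c                  # centered point sets
    U, S, Vt = np.linalg.svd(A.T @ B)                # SVD of the covariance
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    s = (S * np.array([1.0, 1.0, d])).sum() / (A ** 2).sum() if match_scale else 1.0
    t = dst_c - s * (R @ src_c)
    return R, t, s

# Applying the transform to all vertices of the floating geometry:
# aligned = s * (R @ vertices.T).T + t
```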

Combining Basemesh & Scans

The next step is projecting the base mesh onto the scan. To do this, we use the Optical Flow Wrapping node. It aligns the geometry using the scan texture, so the texture of the new expression ends up matching the texture of the base mesh everywhere except for folds, wrinkles, and other genuine changes. All uneven skin areas and pores stay in place with this method.
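As background, optical flow itself is a 2D technique: for every pixel of one image it estimates where the corresponding feature (a pore, a wrinkle) has moved in another image. The sketch below computes such a dense flow field between two hypothetical renders with OpenCV, purely to illustrate the principle the node builds on; it is not Wrap's actual pipeline, which works through its own set of camera renders:

```python
# Dense 2D optical flow between a neutral render and an emotion render of the
# face. The file names are placeholders for illustration only.
import cv2

neutral = cv2.imread("render_neutral_cam01.png", cv2.IMREAD_GRAYSCALE)
emotion = cv2.imread("render_emotion_cam01.png", cv2.IMREAD_GRAYSCALE)

# Farneback flow; positional arguments are: previous image, next image,
# initial flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
flow = cv2.calcOpticalFlowFarneback(neutral, emotion, None,
                                    0.5, 4, 21, 3, 7, 1.5, 0)

# flow[y, x] = (dx, dy): where the skin feature under pixel (x, y) moved to.
# Per-pixel correspondences like these are what let the wrap keep every pore
# of the basemesh anchored to the same spot of the skin.
```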

Before the Optical Flow Wrapping node, the Select Polygons node is used again:

This way, the fixed parts of the head stay in place.

You can also use the Select points node; here, the points are placed on the mobile parts of the face:

Next, we move on to the Optical Flow Wrapping settings. In the Visual Editor window, adjustable cameras appear around the model.

At the top, there is a menu that lets you switch to any camera’s view. There are 13 cameras by default, and they need to be arranged properly so that the whole face is covered.

I leave almost all settings at their default values except for Resolution final and Optical flow smoothness final:

Finally, you can click Compute and see the result:

Once the mesh is projected evenly onto the scan, you need to bake the texture. This is done the same way as described in the first part of the article:

As a result, if you overlay the generated texture on the texture of the base mesh, new details are added, but the control points remain in the same places:

After that, the texture and the geometry can be saved.

 

As a result, we have a new expression mesh with its texture.

Blend Shapes

The Blend shapes method allows expressions to be mixed together. To do this, load the generated meshes into the Blend shapes node.

As the first input, I use the neutral expression. By experimenting with the node settings, you can choose how much of each expression is mixed in. For this method to work, the models must share the same topology.
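The arithmetic behind blend shapes is straightforward: each expression is stored as a per-vertex offset from the neutral mesh, and the final face is the neutral mesh plus a weighted sum of those offsets. A tiny NumPy sketch of that idea (the mesh variables are placeholders for the vertex arrays exported from Wrap):

```python
import numpy as np

def blend(neutral, shapes, weights):
    """neutral + sum_i w_i * (shape_i - neutral); all meshes must share
    the same topology, i.e. identical (n, 3) vertex ordering."""
    result = neutral.astype(float).copy()
    for shape, w in zip(shapes, weights):
        result += w * (shape - neutral)
    return result

# Example: 70% of the 'smile' expression mixed with 30% of 'frown'
# blended = blend(neutral_vertices, [smile_vertices, frown_vertices], [0.7, 0.3])
```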

Thank you for your attention! The topic will be continued in the next article.

Darya Chainikova, 3D Artist at RainStyle
