Alexander Richter allowed us to repost his recent study of generating realistic digital actors using dynamic microstructure skin.
Our skin is the biggest and most important organ of our body: it provides defense, hydration, protection from UV radiation, and heat regulation. It is also significant for communication and the first thing we see when we look at each other. The face especially is the source of our emotional expressions. We are programmed to focus on it first to read the mood of the person opposite us, noticing the slightest changes in the deformation of the skin and translating them into information. Like tree rings, the skin tells us about someone’s age, health, and emotional state.
The skin also plays an essential role in the believability of digital characters. Digital actors are becoming more and more important in entertainment, education, and simulation. In recent years, research mainly focused on the representation of the skin and its different layers (i.e. subsurface scattering), while the deformation was mostly stuck at the macro level. The result: characters that are believable only from afar.
The scientific publication „Skin Microstructure Deformation with Displacement Map Convolution“ presents a method to synthesize fine skin deformations at the micro level (< 0.1 mm). The results reproduce the anisotropic surface differences in the specular highlights even at a distance and are a valid alternative to frame-by-frame actor scanning. The core idea of the paper is that the micro deformation of the skin shows stretching as blurring and compression as sharpening of the texture map. For that we need
- a static micro displacement map which highlights the fine pores,
- a tension texture map which shows the deformation and changes of the skin,
- and an application which combines both texture maps with the empirical data of the paper to create a dynamic microstructure displacement map.
The static micro displacement map is a detailed greyscale map which can be piped into the displacement, specular or bump channel. The fine bumps create small shadows and break up the reflections of the oily parts of the skin. To create the definitive version of such a map you usually scan a face, which results in a high-resolution texture map. This is the best approach for giving digital doubles of real, living people more detail. It is not an option for deceased persons, since every microstructure is individual and cannot be transferred from one face to another. In such cases you can download scanned (tileable) texture maps and apply them procedurally to the face. This option saves a lot of time and the creation of a UV map, but loses a lot of individual detail.
The project “Einstein” used a 16K texture map for the face. It was created with stamps of downloaded scans while the details were recreated from reference photos of Albert Einstein. This process is a compromise between the scanned version (which wasn’t possible) and a generic result.
Our skin structure can change dramatically depending on stretching or compression. Using polarized light it is possible to measure the specular reflectance during stretching, which can change by up to 75% in specific regions. The reason for this immense reserve lies in the skin’s stiffness, which creates elasticity through microstructure reserves and allows it to react to changes without tearing. These differences in consistency are especially clear on the elastic forehead and around the eyes, while the tip of the nose has fewer reserves and less structure. A balloon illustrates the effect: if the balloon is inflated and under tension, its surface is smooth and reflective; if it loses volume, its surface becomes rough, porous and light-absorbing. Our skin reacts comparably, just across many cycles of stretching and compression and with an anisotropic effect.
As a result, our skin has three states: stretched, at rest, or compressed.
Most of the time, stretching in one direction correlates with compression in the orthogonal direction to preserve volume. This means that both states coexist at the same place, orthogonal to each other. If we take the neutral model as our rest state, every animation is a stretching or compression of the 3D model. To calculate the tension map we take the difference between the rest pose and the animation pose. To use the same input as the publication, we created our own tension node in Maya (Tension Plus) which generates the needed information from the rest and animation meshes. With the help of the singular value decomposition (SVD) we get the values for stretching and compression and their directions, and convert them from 3D world space into 2D tangent space. The resulting data is written into the vertex colors of the animated mesh and saved as a tension map for every frame. We tried external plugins like SOuP to create tension maps, but they averaged their results, which made the data worthless for our purpose.
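The per-triangle tension computation described above can be sketched with NumPy: build the 2x2 deformation gradient between the rest and animated edges in tangent space, then read the principal stretch and compression values off its singular value decomposition. This is a minimal illustration, not the actual Tension Plus node; all function and variable names here are ours.

```python
import numpy as np

def triangle_tension(rest_pts, anim_pts):
    """Principal stretch values and 2D direction for one triangle.

    rest_pts, anim_pts: (3, 3) arrays of vertex positions.
    Returns (s_max, s_min, direction), where s > 1 means stretching
    and s < 1 means compression along that principal axis.
    """
    def tangent_frame(p):
        e1 = p[1] - p[0]
        e2 = p[2] - p[0]
        n = np.cross(e1, e2)
        u = e1 / np.linalg.norm(e1)
        v = np.cross(n, e1)
        v /= np.linalg.norm(v)
        # both edges expressed in the triangle's own 2D tangent basis
        return np.column_stack([[u @ e1, v @ e1], [u @ e2, v @ e2]])

    R = tangent_frame(rest_pts)   # 2x2 rest-state edges
    A = tangent_frame(anim_pts)   # 2x2 animated edges
    F = A @ np.linalg.inv(R)      # deformation gradient
    U, S, Vt = np.linalg.svd(F)   # singular values = principal stretches
    direction = Vt[0]             # rest-space direction of largest stretch
    return S[0], S[1], direction

# Uniform 2x stretch along one edge:
rest = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
anim = np.array([[0.0, 0, 0], [2, 0, 0], [0, 1, 0]])
s_max, s_min, d = triangle_tension(rest, anim)
# s_max ≈ 2.0 (stretching), s_min ≈ 1.0 (unchanged)
```

In a production version these values would be baked into the vertex colors per frame, as described above.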
After collecting the static rest state and the animation deformation (tension), we can use the empirical data from the paper to combine both into a dynamic map with our own application. At the start, our script loads the micro displacement map and the per-frame tension map into a buffer. The scalar value of the static micro displacement map gives us the strength of the deformation at a specific pixel compared to the rest state. The red channel of the tension map encodes the strength of the stretching or compression, while the green and blue channels define the U and V direction.
We map the red channel to a ratio:
ratio = 0.7 + 0.6 * r / 255.0, where ratio > 1 means stretching and ratio < 1 means compression.
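A minimal sketch of decoding one 8-bit tension-map pixel, following the channel layout and the mapping above. The 0.7/0.6 constants come from the formula; the [-1, 1] encoding of the direction channels is our assumption for illustration.

```python
def decode_tension_pixel(r, g, b):
    """Map 8-bit RGB tension values to a stretch ratio and UV direction."""
    ratio = 0.7 + 0.6 * r / 255.0    # > 1 = stretching, < 1 = compression
    du = g / 255.0 * 2.0 - 1.0       # assumed [-1, 1] direction encoding
    dv = b / 255.0 * 2.0 - 1.0
    return ratio, (du, dv)

ratio, (du, dv) = decode_tension_pixel(255, 255, 128)
# ratio = 1.3 → strong stretching, roughly along the +U axis
```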
[img: Adding unidirectional stretching]
Using the lookup kernel we apply the appropriate Gaussian function. We start by weighting the original static scalar value with the new stretch factor and blurring it (Gaussian kernel). Then we move one pixel in the stretching direction, do the same calculation, and average this new value with that of the original pixel. Crucially, the weight decreases with distance. We repeat this process until we cross the threshold where a new pixel would no longer influence the original value significantly. After reaching that limit, we repeat the process in the other direction. Arriving at this threshold, we have calculated the stretching for this pixel. Then we process the compression by turning the direction 180 degrees and running the same calculation as for the stretching. It is important to use the results of the stretching as input for the compression, which also means these steps cannot run in parallel.
Once done, we write the value for the original pixel and move the pointer to the next one, until we have calculated the influence of the deformation on every pixel of the static micro displacement map. Finally, the array of values is saved as a dynamic micro displacement texture map. The resulting image sequence can then be used in the shader as a bump, reflection or displacement texture map.
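The per-pixel walk described above can be sketched in Python. To keep the idea visible, this version works on a 1D row and walks symmetrically instead of following the 2D direction from the tension map; the kernel width and threshold are illustrative, and compression is approximated with a simple unsharp mask rather than the paper's exact procedure.

```python
import math

def dynamic_displacement(row, ratio, threshold=1e-3):
    """row: static displacement values; ratio: per-pixel stretch ratio."""
    out = []
    for i, center in enumerate(row):
        # Kernel width grows with the amount of deformation; the real
        # implementation walks along the 2D direction from the tension map.
        sigma = abs(ratio[i] - 1.0) + 1e-3
        total, weight, step = 0.0, 0.0, 0
        while True:
            w = math.exp(-(step * step) / (2.0 * sigma * sigma))
            if w < threshold:          # contribution no longer significant
                break
            for s in ([step, -step] if step else [0]):
                j = min(max(i + s, 0), len(row) - 1)  # clamp at borders
                total += w * row[j]
                weight += w
            step += 1
        blurred = total / weight
        if ratio[i] < 1.0:             # compression: sharpen (unsharp mask)
            out.append(center + (center - blurred))
        else:                          # stretching: blur
            out.append(blurred)
    return out
```

With a ratio of 1.0 everywhere the map passes through unchanged; ratios above 1 smear a pore across its neighbors, ratios below 1 deepen it.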
[img: static adds small bumps while dynamic controls them and adds lines]
Even at the beginning of this project it was clear that the scripting language Python would be a bottleneck for performance and texture computation time. The reason is that Python interprets rather than compiles the code, which takes longer, especially at a texture size of 16K, where a huge number of calculations has to happen.
As a result, a 1K static texture with just one direction could take more than 6 minutes to turn into a dynamic map. Let’s not even think about the time to go through a 16K texture with hundreds of direction changes. A solution was needed that cut the processing time significantly. C++ has a reputation of being fast but also frighteningly hard to learn and master, but since the code was already written in Python, converting it to C++ seemed like a worthy investment. Another benefit of using C++ was the ease of multithreading with the OpenMP library, which allows the per-pixel calculations to run in parallel. Compared to the heavy-handed Python approach, the splitting in C++ takes just one line of code. One line that cuts the creation time of a 1K texture map from 55 seconds to 0.5 seconds, with most of that time spent loading the texture into the buffer.
#pragma omp parallel for schedule (dynamic, 4) private (strength, gauss_width, k_s, tmp_distort, posX, posY, L, tension_color)
The recreation of realistic skin has gained importance in 3D in recent years with the success of digital actors, and realistic appearance lies in the detail and the dynamic behavior of the skin. The face especially plays an essential part in the expression of emotions. By adding the microstructure details, the character not only gains finesse but also a fine-pored appearance which follows the different expressions and enhances them. The limitations of this process begin at distances where the details of the microstructure fade, although they can still appear as changes in the reflections.
Since this process is synthesized, the realism depends on the data used and its individual distribution, and in doubt it can produce a more realistic yet averaged appearance. The dynamic microstructure allows a more realistic closeup of the face, and combined with the established algorithm it makes it possible to interact with a digital Me and eases immersion into the digital world. VR, here we come!
[img: 1 – static, 2 – dynamic.]
[img: The study with 93 participants shows that the dynamic skin is more believable]
*Thanks to Johannes Schurer (PhD) for the support, and to the Research Department of the Animationsinstitut (Filmakademie Baden-Wuerttemberg) and its current “Einstein” project, which allowed us to create these realistic renderings.