Pavel Ksenofontov, an independent UE developer, told us the story of how he used neural networks in the Unreal Engine to recognize the user's steps in virtual reality.
How the Idea Came
When I tried VR for the first time, one of the most vivid impressions was the absence of legs. You may remember that strange feeling too. Later, having tried various VR projects, I realized that the inability to track the legs' position in VR meant that, unlike in classic non-VR games, the user's movement was completely silent.
After realizing this, I wondered whether it was possible to recognize, from the HMD device's movement alone, the very moment when the user takes a step. I thought about it for a while and realized that I first needed to collect data on speed and acceleration in space, and then analyze that data in detail to work out the right step-recognition algorithm. But of course, there's the other side of the coin – I am very, very lazy, so I didn't want to analyze all that data. I was about to abandon the idea, but…
Around that time, I kept coming across articles on using neural networks to solve various problems. From what I understood, neural networks work especially well on classification tasks. So I thought – hey, recognizing the moment a step is taken is exactly such a task!
First Investigations & Data Gathering
First of all, I decided to find out whether the idea would work at all. A search on the Internet led me to a program called MemBrain, which lets you build neural networks, train them, and examine how they work. The program turned out to be quite easy to use – after reading the manual, I was able to get started.
Right off the bat, I asked myself: where do I get the data for training the neural network? The solution was simple. I used the logging capabilities of the Unreal Engine so that, when I clicked one of the Motion Controller buttons, a string with the linear and angular velocities and accelerations of the HMD device was written to the log. An additional parameter was appended at the end of the string: it recorded whether a step was actually made (depending on which Motion Controller button was pressed).
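The labeling scheme can be sketched roughly as follows. This is a hypothetical illustration: the field order, CSV layout, and function names are my assumptions, not the author's actual Unreal Engine log format.

```python
# Sketch of the labeled logging described above: each line holds the HMD's
# linear/angular velocities and accelerations plus a 0/1 label that records
# which Motion Controller button was pressed (step vs. no step).

def make_log_line(lin_vel, ang_vel, lin_acc, ang_acc, step_taken):
    """Serialize one training sample: 12 motion values plus a 0/1 step label."""
    values = list(lin_vel) + list(ang_vel) + list(lin_acc) + list(ang_acc)
    label = 1 if step_taken else 0
    return ",".join(f"{v:.4f}" for v in values) + f",{label}"

def parse_log_line(line):
    """Split a logged line back into (features, label) for training."""
    parts = line.split(",")
    return [float(p) for p in parts[:-1]], int(parts[-1])

line = make_log_line((0.1, 0.0, 0.2), (0.0, 0.5, 0.0),
                     (1.2, 0.0, -0.3), (0.0, 0.1, 0.0), step_taken=True)
features, label = parse_log_line(line)
```

Keeping the label as the last field of the same line makes each log entry a self-contained training sample, which matches how the data was later imported into MemBrain.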
How did I collect the data? I ran the project in VR and performed different actions – I twisted my head, leaned over, ducked (without steps), or did the same things with steps added. While doing so, I pressed one of the Motion Controller buttons so that all the data was recorded in the log.
Consequently, I gathered a lot of data, which I imported into MemBrain after some simple processing in Excel. The input data of a neural network should be normalized to 1, and at first I handled this during the Excel processing: for each parameter I found its maximum value and divided all of that parameter's values by it, so that even the largest value came out close to 1. Later, I added the corresponding coefficients to the Blueprint script so that I no longer had to do this manually.
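The normalization step above can be sketched in a few lines. This is a minimal illustration of dividing each feature by its maximum absolute value; the key point, as the author notes, is that the same factors must be reused at inference time (he baked them into the Blueprint as fixed coefficients).

```python
# Per-feature max-based normalization, as described in the text:
# compute 1/max(|x|) for each column from the training data, then
# reuse those same factors for every input at inference time.

def fit_scale_factors(samples):
    """Compute a 1/max(|x|) factor for each feature column."""
    n = len(samples[0])
    maxima = [max(abs(row[i]) for row in samples) or 1.0 for i in range(n)]
    return [1.0 / m for m in maxima]

def normalize(row, factors):
    """Scale one sample so every feature lies roughly within [-1, 1]."""
    return [v * f for v, f in zip(row, factors)]

data = [[2.0, -10.0], [1.0, 5.0], [-4.0, 2.5]]
factors = fit_scale_factors(data)       # maxima are 4.0 and 10.0
scaled = normalize([2.0, -10.0], factors)
```

Using the absolute maximum (rather than the signed maximum) keeps negative velocities and accelerations inside the same range, which is one plausible reading of "normalized to 1".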
Building a Neural Network
The next step was to build a neural network that would receive information about the HMD device's movement and produce only two possible output values – 0 (no step) or 1 (step). At that point I figured that if the experiment succeeded, I would transfer the neural network to the Unreal Engine as a Blueprint script, so I decided to make it as simple as possible. Since I am not an expert in neural networks, I used the simplest structure – the one most often shown in diagrams on the Internet.
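The "simplest structure" the author describes is a small feed-forward network. Here is a sketch of what the forward pass of such a network looks like: one hidden layer, sigmoid activations, and a single output neuron that should approach 1 for "step" and 0 otherwise. The layer sizes and weights are purely illustrative; the real network was built and trained in MemBrain.

```python
import math

def sigmoid(x):
    """Standard logistic activation, squashing any value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One dense layer: per-neuron weighted sum plus bias, then sigmoid."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(inputs, w_hidden, b_hidden, w_out, b_out):
    """Forward pass: inputs -> hidden layer -> single 0..1 output."""
    hidden = layer(inputs, w_hidden, b_hidden)
    return layer(hidden, w_out, b_out)[0]

# Toy example: 3 inputs -> 2 hidden neurons -> 1 output (weights invented).
y = forward([0.5, -0.2, 0.8],
            w_hidden=[[0.4, -0.6, 0.9], [0.1, 0.3, -0.2]],
            b_hidden=[0.0, 0.1],
            w_out=[[1.5, -1.0]],
            b_out=[-0.2])
```

This is also exactly the math that later had to be reproduced neuron by neuron in Blueprints: each node is nothing more than a weighted sum, a bias, and an activation function.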
So, I created a simple neural network in the MemBrain application, loaded the training set into it and, with bated breath, clicked the Start button to set the network learning.
I do not know what exactly I was expecting. Perhaps I expected a huge window to pop up saying “What a fool you are! Close this program immediately and never run it again!”. Yet, nothing happened. Instead, I saw that the neural network started learning. A few minutes passed (after all, the neural network was very simple) and I realized that my idea worked! The neural network can really recognize whether the user has taken a step or not!
Duplicating the Network in UE
I was so encouraged by the success that I immediately tried to duplicate the MemBrain-derived neural network as a Blueprint script in the Unreal Engine. I transferred the trained network's data (those strange strings of digits describing the individual neurons and the links between them) to the Unreal Engine by simply copying them via the clipboard.
Once the network was duplicated in the Unreal Engine, I tied the sound of footsteps to the "Step" event. As the chart in MemBrain showed, the neural network never produced the exact values 0 and 1, only values approximating them. I took this into account in the appropriate place in the Blueprint script.
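Because the output is only approximately 0 or 1, the Blueprint has to threshold it before firing the "Step" event. A minimal sketch of that check; the 0.5 cutoff is my assumption, since the article does not state the actual threshold used:

```python
# The network's output lands near, but never exactly at, 0 or 1.
# A simple threshold decides whether to fire the "Step" event.

STEP_THRESHOLD = 0.5  # assumed cutoff; tune against real network outputs

def is_step(network_output):
    """True when the network's 0..1 output is close enough to 1."""
    return network_output > STEP_THRESHOLD

fire_a = is_step(0.93)  # close to 1: fire the footstep sound
fire_b = is_step(0.07)  # close to 0: treat as "no step"
```

In practice the threshold can be tuned: raising it trades missed steps for fewer false footstep sounds, which matters for exactly the false-alarm problem described later.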
So, I assembled the Unreal Engine project, launched it in VR and… I realized that nothing was working. The neural network produced some data at the output, but it was just noise.
It turned out that I did not understand the basic mathematics of neural networks well enough. I had to read a little more about how neural networks are structured and run a few tests on the simplest neurons. In these tests, I fed the same data to the inputs of the neural networks in both MemBrain and the Unreal Engine and compared the outputs. This let me accurately reproduce the neuron math I needed in the Unreal Engine.
In the end, after several hours of experiments, I compiled a new neural network in the Unreal Engine. I assembled a VR project, launched it… and realized that the neural network was working. I was walking – and heard my steps in the virtual reality! That moment was unforgettable!
Of course, when the euphoria had passed, I realized that the neural network made mistakes too often: it did not recognize some of my steps, but sometimes detected my head movements as steps.
Improving the Network
The next few days saw lots of experiments: I varied the number of layers in the neural network, changed its structure, and added more and more training data. I walked, twisted my head back and forth (simultaneously clicking the required Motion Controller buttons), trained the neural network, and transferred the network data to the Unreal Engine (copy, paste, copy, paste – ugh!). I tested the network, identified the movements on which it made mistakes, logged training data for those movements, added it to the training set, and trained again. It was a really painstaking and long process. However, the neural network worked better every time.
It is difficult to tell the number of times I retrained the neural network and how many times I made changes to its structure. At some point, however, I realized that it practically stopped making mistakes (as long as I moved naturally and did not try to “deceive” it intentionally). The component was ready. Later, when my colleagues tested its performance, I realized that people liked the way it worked.
After I added the component to the Unreal Engine Marketplace, one of the customers contacted me and pointed out that the neural network consistently made mistakes when the user's head made one particular motion. The most interesting part was that this motion was practically the only one the neural network failed to recognize. I wanted to solve the problem, but I was very afraid of spoiling an already finished project. And then I thought – hey, what if I added a special (very, very simple!) neural network that would recognize only those cases in which the basic neural network made mistakes, and correct the result?
At first, the idea seemed strange to me. But the more I thought, the more I liked it. Just a quick reminder – I am a lazy person and do not like to think a lot and do extra work!
I reworked the Blueprint in the Unreal Engine so that the headset's movement parameters could be written to the log after an event. In other words, I could test the main neural network, and whenever I noticed it had made a mistake, I pressed a controller button so that the parameters that triggered the error appeared in the log. By building a simple additional neural network, training it on those cases, and running it side by side with the first one, I achieved a significant reduction in the component's false alarms.
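The two-network cascade can be sketched like this. Both stand-in networks, the veto rule, and the thresholds here are illustrative assumptions; the article only says that a small corrector network, trained on the main network's known error cases, adjusts the final result.

```python
# Sketch of the cascade: the main network proposes "step", and a small
# corrector network, trained only on the motions that fool the main one,
# can veto that decision.

def cascade_step_detector(features, main_net, corrector_net,
                          step_thresh=0.5, veto_thresh=0.5):
    """True only if the main net fires AND the corrector does not veto."""
    if main_net(features) <= step_thresh:
        return False            # main network saw no step
    if corrector_net(features) > veto_thresh:
        return False            # corrector recognized a known error pattern
    return True

# Toy stand-in networks, purely for illustration:
main_net = lambda f: 0.9 if f[0] > 0.2 else 0.1    # fires on strong motion
corrector = lambda f: 0.8 if f[1] > 0.5 else 0.1   # vetoes one head-turn pattern

walk = cascade_step_detector([0.6, 0.1], main_net, corrector)   # step kept
turn = cascade_step_detector([0.6, 0.9], main_net, corrector)   # step vetoed
```

The appeal of this design, as the text suggests, is that the finished main network stays untouched: the corrector only needs to learn the handful of motions where the main network is known to fail.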
As you can see, working with neural networks is easy and fun (even if you have never done it professionally), especially in combination with the Unreal Engine. I hope this article has got your creativity flowing and inspired you to experiment with neural networks. Just remember that a tool like this could find a place in your toolbox.