@Tristan: I studied computer graphics for 5 years. I've been making 3D art full-time for about half a year now, but I had some experience before that. It's hard to focus on one thing; it took me half a year to understand most of the vegetation creation pipelines. To speed up your workflow, maybe spend a bit of time with the Megascans library. Making 3D vegetation spans everything from going outside for photo scans to profiling your assets. Start with one thing and master it.

@Maxime: My technique is quite similar to Z-passing on distant objects, apart from the higher vertex count. I would start using it at about 10-15m+. Inside that radius you are mostly using high cascaded shadows, and the lower the shader complexity in those areas, the fewer the shader instructions. When I started this project, the polycount was a bit too high. Now I've found the best balance between a "low-poly" mesh and the least possible overdraw. The idea behind this technique is simply to use a slightly higher vertex count on the mesh to reduce quad overdraw and shader complexity. In terms of visual quality, a "high-poly" plant will always look better than a blade of grass on a plane.
Is this not like Gear VR or anything else?
DeepMotion developers gave a little talk on creating dynamic environments for character simulations (Gravity, Ice, Wind, Water) using their technologies.
Lone Echo was released by Ready at Dawn and Oculus in 2017 to widespread acclaim as a new “high point” for VR. In addition to a rich story and expansive world, the experience was widely praised for its masterful leveraging of a zero-gravity environment and “floating” character locomotion. By implementing an advanced IK system for upper-body tracking and allowing the lower body to passively follow, the game circumvented the breaks in immersion that many feel when teleporting around a world, or when tethered to an avatar with unnatural animation or strange IK artifacts in limb rotation.
While not all VR titles should take place in “Zero-G”, the Lone Echo team proved how environment-deterministic character motion can bring an experience to life, as well as provide useful constraints for developers. At DeepMotion, we believe character interaction will advance games, immersive experiences, and robotics; to this end, we focus on creating solutions for complex character simulation, natural full body avatars, and mechanical simulation. Part of this means bringing AI motion to games, but it also entails creating dynamic environments that work in concert with physics-based character simulation.
In most of today’s games, environment artists are required to work in parallel with tech and animation teams. They are discouraged from adding any arbitrary surfaces that require specific character animation and/or programming efforts. Environment artists and character artists typically need to converge on solutions to illustrate the effects of a slick surface, a body of water, a breezy corridor, or zero-g. A physics-based method has the potential to not only make architecting these environments simple but also to automatically trigger natural character motions in real-time simulation.
One of our engineers, Topu Reza, explains below how we created this demo for character simulation in changing (and parameterizable!) environments using our articulated physics engine and smart avatars. Using these methods, devs can add surfaces with distinct physical properties to the environment and our physically simulated characters (who are well aware of the surfaces they are on) will react dynamically. We hope that in providing tools for physically coherent worlds and characters, artists will be able to focus more on imaginative game design, apps, and experiences!
This demo was created in Unity for AR using the DeepMotion Avatar plugin and character authoring tool for physicalizing 3D rigs.
Some of the features shown in this blog are yet to be released. But, if you’re interested in being an early tester of physics-based character simulation, you can apply to join DeepMotion’s closed alpha here!
Gravity is a force of attraction that exists between any two masses—any two bodies or particles in the universe. For example, Earth draws objects toward its center at an acceleration of 9.8 m/s². On the Moon the rate is 1.6 m/s², and, in space, the rate is close to zero.
Implementation of gravity usually resides at the core of any physics engine. DeepMotion’s articulated physics engine is no exception.
The value of gravity can be controlled by making a simple adjustment in the plugin editor. You can play with different values to see a range of effects on a DeepMotion character.
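To build intuition for what changing that one number does, here is a minimal sketch (ours, not DeepMotion's plugin API) that integrates a falling body under different gravity values:

```python
# Hypothetical sketch: integrating a body in free fall under different
# gravity values, analogous to adjusting the gravity setting in a physics
# engine. Uses simple semi-implicit Euler integration.
def fall_time(height, g, dt=0.001):
    """Seconds for a body to fall `height` meters under gravity `g` (m/s^2)."""
    y, v, t = height, 0.0, 0.0
    while y > 0.0:
        v += g * dt   # accelerate downward
        y -= v * dt   # advance position
        t += dt
    return t

earth = fall_time(10.0, 9.8)  # roughly 1.4 s
moon = fall_time(10.0, 1.6)   # roughly 3.5 s: same drop takes much longer
```

The same 10 m drop takes more than twice as long on the Moon, which is the kind of visible behavioral change a single parameter tweak produces on a simulated character.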
Slick materials such as ice can also be created by adjusting a few numbers. To generate this frozen environment, we just needed to create a physics material with low friction values.
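The effect of a low friction value can be seen in a back-of-the-envelope calculation. Under simple Coulomb friction, a body sliding at speed v stops after v² / (2μg) meters, so a small friction coefficient μ (our assumed values below) makes surfaces feel slick:

```python
# Hypothetical sketch: how a low friction coefficient (ice-like) changes
# sliding behavior under a simple Coulomb friction model.
def stopping_distance(speed, mu, g=9.8):
    """Meters traveled before a body sliding at `speed` m/s stops."""
    return speed ** 2 / (2 * mu * g)

ice = stopping_distance(5.0, 0.05)      # roughly 25 m: keeps on sliding
concrete = stopping_distance(5.0, 0.8)  # under 2 m: stops almost at once
```

A character stepping onto the low-friction material therefore slides far before friction bleeds off its momentum, which is exactly the behavior the frozen environment exhibits.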
One aspect that makes our simulation technology exciting is the character’s optimization for balance. In the same way that humans and animals hold posture to keep balanced (even in the face of unstable environments) DeepMotion characters try to stay upright when hindered by perturbations. This means the character maintains balance when forces such as a strong wind are applied to the environment!
Applying force to single or multiple rigidbodies is common in any physics tech, but alternatives fall short when trying to balance a set of rigidbodies and joints (i.e., a character!) in the face of external forces. DeepMotion characters balance naturally when external forces are applied.
Implementing wind with DeepMotion physics is straightforward. For this demo, we just defined the direction, magnitude, and wave pattern to simulate different kinds of wind: breeze, gusts, or squalls.
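A wind source built from those three parameters can be sketched as a steady force plus a sinusoidal gust component. The function below is our illustration of the idea, not DeepMotion's API; all names and default values are assumptions:

```python
import math

# Hypothetical sketch: a wind force defined by direction, magnitude, and a
# wave pattern, mirroring the parameters described above.
def wind_force(t, direction=(1.0, 0.0, 0.0), magnitude=30.0,
               gust_amplitude=15.0, gust_frequency=0.5):
    """Force vector at time t: steady wind plus a sinusoidal gust wave."""
    strength = magnitude + gust_amplitude * math.sin(
        2 * math.pi * gust_frequency * t)
    return tuple(strength * d for d in direction)

# A steady breeze would use gust_amplitude=0; a squall, a short burst of
# high magnitude. The resulting vector is applied to the character each
# frame, and the balance controller reacts to it.
```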
When thinking about the effects of underwater character physics, what comes to mind is the extra drag or resistance in a fluid environment. This resistance is the result of the motion of a body through a fluid, where the motion action is in opposition to the direction of the oncoming flow velocity.
Characters simulated using DeepMotion have a variety of properties and parameters for each joint. One such property, ‘drag’, can be used to simulate this resistance across every joint. Different fluids will have different levels of resistance (water, mud, molasses). For water, we found a few drag value sets that work pretty well. Heavier joints (higher mass), such as the torso area, need a drag value of around 100000. We set the arms and hands within the 5000-10000 range. The lower-body limbs, like the hips and legs, require ranges from 10000-20000. By comparison, in atmospheric environments the drag is 0.
The advantage of having this dynamic drag feature is that we can load varying value sets into the character’s body joints depending on which environment they are in. When the character is on dry land, the values stay at their default settings; when it enters the water, its joints are impacted by the new drag values. Without this technology, an animation team would have to generate variations of the same canned animations for each environment, which is costly, inflexible, and ultimately less realistic. We’re hoping other developer teams can implement this solution to create more dynamic environments, interactions, and character motion, all while saving time and working from a unified system.
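The per-joint value sets described above can be pictured as named profiles that are swapped in when the character crosses an environment boundary. This is a minimal sketch under our own assumptions (joint names, a linear velocity-proportional drag model); the actual plugin exposes these as joint parameters:

```python
# Hypothetical sketch: per-joint drag profiles loaded depending on the
# character's current environment, using the value ranges quoted above.
DRAG_PROFILES = {
    "air": {"torso": 0.0, "arm": 0.0, "hand": 0.0, "hip": 0.0, "leg": 0.0},
    "water": {"torso": 100000.0, "arm": 7500.0, "hand": 5000.0,
              "hip": 15000.0, "leg": 20000.0},
}

def drag_force(joint, velocity, environment):
    """Resistive force opposing the joint's velocity in this environment."""
    drag = DRAG_PROFILES[environment][joint]
    return -drag * velocity
```

Switching environments is then just a matter of selecting a different profile; the same simulated character immediately moves as if through thicker or thinner fluid, with no per-environment animation authoring.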