We were fortunate enough to talk to Raymond Yun Fei, a third-year PhD student specializing in physics simulation, about his work «A Multi-Scale Model for Simulating Liquid-Hair Interactions». It is a novel multi-component simulation framework that treats many of the key physical mechanisms governing the dynamics of wet hair. To put it simply, it is a breakthrough model that can simulate wet hair realistically. How can it be used, and what are its limitations?
I am a third-year PhD student specializing in physics simulation. I completed my undergraduate studies at Tsinghua University and received my M.Sc. degree from Columbia University. I began doing research in computer graphics as a sophomore. Attracted by the game industry, I started my first project in real-time rendering and GPU acceleration. Before the wet hair project I worked on projects including fast voxelization on the GPU, real-time rendering, image stylization, high-quality offline rendering, interactive sound synthesis, and 3D-printed metallophones.
In this project, we have Henrique Maia, another PhD student in our lab, focusing on hair collision; Christopher Batty, assistant professor at the University of Waterloo and an expert in fluid simulation; Changxi Zheng, assistant professor at Columbia University, my co-adviser and an expert in physics simulation and numerical analysis; and Eitan Grinspun, associate professor at Columbia University, my adviser and an expert in physics simulation and geometry processing.
A Multi-Scale Model for Simulating Liquid-Hair Interactions
I started this project as my first project during my PhD. The general idea came from Prof. Eitan Grinspun after he met with technical directors at VFX and animation studios such as Weta Digital, Disney, and Pixar. For some time these studios had wanted a workflow that could simulate wet hair realistically for their new movies, but they didn't know how to do it with existing methods, so they approached us to see if there was any hope of doing it. I was interested in the project, though at the beginning I had no idea how to do it.
We began by investigating the various models that researchers in physics and chemical engineering had developed for very simple cases, such as liquid flowing on infinitely long cylinders or solid particles dragging fluid. But since the goal was unclear (and, what's worse, we didn't realize our goal was unclear) and there is too much literature, each piece of which focuses on a different specific case, we didn't find anything unified that works for simulating wet hair at a practical scale.
The project was stuck after 6 months of building fancy models, so I decided to take a detour and returned to the wet hair project 9 months later.
Left: a very early version of a brute-force SPH simulation of the liquid-hair interaction; right: a very early version of a brute-force simulation of the cross-section of six hairs stuck together by liquid. Red circles are the cores of the hair strands and the blue region indicates the liquid. We optimize the positions of the blue and black dots to minimize surface energy.
I restarted by modeling everything in the simplest, most naive, most costly, brute-force way as a first trial (for example, using smoothed particle hydrodynamics to model the liquid between hairs, which is only a few micrometers wide, or explicitly modeling the liquid-air interface with a dense triangle mesh and performing huge nonlinear optimizations to minimize the surface energy). We never expected this model to succeed; it was just a way to learn which goals we should focus on. With support from experiments, we identified several pieces that together could produce a promising wet-hair effect: liquid flowing along hairs, dripping/capturing, and cohesion between hairs. We noticed that previous work treats wet hair as a sponge and simulates slow, non-inertial flow inside it, which contradicts our experiment of pulling hair out of water, where liquid flows down rapidly within seconds and may undergo drastic acceleration during a hair flip. The previous methods have no way to simulate such effects.
The major effects we try to simulate in liquid-hair interaction: flow along hairs, liquid dripping, and hair cohesion. The actor in this photo is Raymond Yun Fei, the first author.
Based on these observations and some of the literature, we decided to focus only on the effects we had picked, and to model liquid (which may undergo strong accelerations and surface tension) directly on the hair surface, which differs from all of the previous work. We started there, progressively reached all the other goals with further simplified models, and finally our framework came out.
We used two different representations for the liquid: a 1D height field on the hairs, and liquid particles for the bulk simulation; hence our method is called "multi-scale". The 1D height field models the liquid around the hairs, while the liquid particles handle the simulation at the regular scale. Four major components make up our framework: a liquid height field on the hair surface; a capturing/dripping scheme to convert between the height-field and particle forms; a hair cohesion model; and a two-way coupling method that conserves momentum during the transfer between the liquid height field and the particles.
The four major components in our system, ordered from left to right: a liquid-on-hair model, a liquid capturing model, a liquid dripping model, and a hair cohesion model
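To make the two representations concrete, here is a minimal Python sketch of the data layout. All names, defaults, and the cylindrical volume formula are illustrative assumptions for this article, not the authors' actual code:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class HairStrand:
    """One hair strand: vertex positions plus a 1D liquid height field.

    height[i] is the thickness of the liquid film coating the strand
    around vertex i, measured radially from the hair surface.
    """
    vertices: np.ndarray       # (n, 3) vertex positions along the strand
    height: np.ndarray         # (n,) liquid film thickness per vertex
    hair_radius: float = 5e-5  # ~50 micrometers, a typical hair radius

    def liquid_volume(self) -> float:
        # Treat the wet strand as a cylinder of radius (hair_radius + height)
        # minus the dry hair cylinder, integrated over each segment.
        seg_len = np.linalg.norm(np.diff(self.vertices, axis=0), axis=1)
        r_wet = self.hair_radius + 0.5 * (self.height[:-1] + self.height[1:])
        return float(np.sum(np.pi * (r_wet**2 - self.hair_radius**2) * seg_len))


@dataclass
class BulkLiquid:
    """Bulk liquid at the regular scale, represented as particles."""
    positions: np.ndarray   # (m, 3)
    velocities: np.ndarray  # (m, 3)
    particle_mass: float = 1e-6
```

The capturing/dripping scheme then amounts to moving liquid volume between `HairStrand.height` and `BulkLiquid` particles while keeping the total volume and momentum accounted for.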
The interaction between hair and liquid particles includes two parts: velocity computation and surface tracking. For velocity computation we treat the liquid height field on the hairs as an infinite number of liquid particles and combine its velocity with that of the regular liquid particles by integrating along the hairs. The surface tracking is handled by our capturing/dripping method.
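At a single grid node, a momentum-conserving blend of the two velocity sources could look like the sketch below. The function name and the per-node mass bookkeeping are assumptions made for illustration, not the paper's exact formulation:

```python
import numpy as np


def combine_velocities(m_hair, v_hair, m_bulk, v_bulk):
    """Momentum-conserving blend of the on-hair liquid velocity and the
    bulk particle velocity on a shared grid node.

    m_hair / m_bulk: liquid masses each representation contributed to
    this node; v_hair / v_bulk: the corresponding velocity 3-vectors.
    """
    total = m_hair + m_bulk
    if total <= 0.0:
        # Empty node: no liquid from either representation.
        return np.zeros(3)
    # Weighted average preserves the node's total momentum.
    return (m_hair * np.asarray(v_hair) + m_bulk * np.asarray(v_bulk)) / total
```

Because the blend is a mass-weighted average, nodes dominated by the thin on-hair film follow the height field, while nodes inside the bulk follow the particle simulation.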
Left: particle representation of the hair flip example; red particles have just dripped from the hairs, while blue ones are older. Right: the rendered result with the liquid surface reconstructed.
Our framework can be easily integrated with popular fluid simulation frameworks. In our pipeline, we first solve the hair dynamics, simulate the liquid height field, transfer the height-field velocity onto a grid, combine it with the velocity from the bulk liquid simulator, and perform liquid capturing and dripping. Finally, the fluid pressure gradient is applied to the hairs in the next time step.
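The ordering of those pipeline stages can be sketched as follows, with each solver stage replaced by a stub that only records its name. The stage names are illustrative labels for this article, not the authors' API:

```python
class WetHairStep:
    """Illustrative ordering of one simulation step; each stage is a stub
    that records its name, standing in for the real solver."""

    def __init__(self):
        self.trace = []

    def _stage(self, name):
        self.trace.append(name)

    def step(self, dt):
        self._stage("hair_dynamics")      # elasticity + collision resolution
        self._stage("height_field_flow")  # 1D liquid flow along the hairs
        self._stage("transfer_to_grid")   # rasterize on-hair liquid velocity
        self._stage("combine_with_bulk")  # merge with the bulk liquid solver
        self._stage("capture_and_drip")   # convert particles <-> height field
        self._stage("pressure_to_hairs")  # feedback applied at the next step
```

Note that the pressure feedback closes the two-way coupling loop: it acts on the hairs at the start of the following step rather than within the current one.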
Performance of hair collision resolution. For 8K hairs our method is 120x faster than a diagonally preconditioned conjugate gradient solver initialized with local solves, 827x faster than a regular conjugate gradient solver, and 14,383x faster than a direct LDLT solver.
Performance analysis: more than 70% of the computational cost is in the hair dynamics, especially collision detection and resolution.
There are two types of parameters that can be tuned: physical parameters, such as the surface tension coefficient, liquid viscosity, and elastic modulus of the hairs, which come from real-world physics; and artistic parameters, such as the multipliers applied to the cohesion/collision effects. For a moderate number of hairs our method can be interactive. In our tests we found that about 70 percent of the computational time is spent on resolving hair collisions, a bottleneck our framework shares with state-of-the-art hair simulation frameworks. We currently implement our method on a full hair model, parallelized for multi-core CPUs, mainly targeting high-quality offline simulation for the VFX industry. In the future it may be possible to generalize our method to reduced/guide-hair settings (currently the most popular hair simulation scheme in the video game industry) and parallelize it on the GPU to reach real-time performance.
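The two parameter families could be grouped as in this hypothetical sketch. The names are placeholders; the physical defaults are textbook values for water and an order-of-magnitude figure for hair keratin, not values taken from the paper:

```python
from dataclasses import dataclass


@dataclass
class WetHairParams:
    """Tunable parameters, split the same way as described in the text."""
    # Physical parameters, taken from real-world measurements
    surface_tension: float = 0.0728  # N/m, water at ~20 C
    viscosity: float = 8.9e-4        # Pa*s, water at ~25 C
    elastic_modulus: float = 5e9     # Pa, rough order of magnitude for hair

    # Artistic parameters: dimensionless multipliers on simulated effects
    cohesion_multiplier: float = 1.0
    collision_multiplier: float = 1.0
```

Keeping the physically measured values separate from the artistic multipliers makes it clear which knobs preserve realism and which trade it away for directability.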
How close is it to real life?
We are still distant from a world where the simulation is indistinguishable from real life. We did have to sacrifice some effects to complete the simulation within the time budget, given the computational power available today. For example, we model the liquid height field as a cylinder around the centerline of each strand, but in real life the water on hairs forms liquid bridges as the hairs approach each other, which may have different dynamics than our cylindrical approximation. Another example is handling collisions between hairs, the holy grail problem of hair simulation. Limited by the available computational power, we used a spring model for collision handling: it is a simple model, but we cannot guarantee that no penetration occurs between hairs.
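A penalty-spring collision response of the kind described can be sketched as follows. The name and signature are hypothetical, and a real system would also need damping and careful stiffness/time-step tuning:

```python
import numpy as np


def collision_spring_force(x_a, x_b, rest_dist, stiffness):
    """Penalty spring between two nearby hair vertices: pushes them apart
    when they are closer than rest_dist, and does nothing otherwise.

    Returns the force acting on vertex a. Penetration can still occur if
    the spring is too soft for the time step, which is exactly the
    non-guarantee mentioned in the text.
    """
    d = np.asarray(x_b) - np.asarray(x_a)
    dist = np.linalg.norm(d)
    if dist >= rest_dist or dist == 0.0:
        return np.zeros(3)
    # Force on a points away from b, proportional to penetration depth.
    return -stiffness * (rest_dist - dist) * (d / dist)
```

The appeal of this model is that it adds only local pairwise forces, avoiding the global constraint solve that an exact non-penetration guarantee would require.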
The major difficulty in simulating wet hair effects lies in the fact that the liquid around hairs can be very thin (moving at a microscopic scale), yet with millions of hairs these thin liquid films determine the overall appearance and dynamics of the wet hair. As a consequence of this challenge, we proposed a multi-scale model for liquid simulation.
A hairy cylinder with 32K strands mimicking a mammal's shaking behavior
For example, there may be other visually significant hair-liquid effects that our model does not yet capture.
Besides, we have not validated the accuracy of all of the components against real-world physical experiments, so we cannot claim to reproduce every liquid-hair phenomenon found in real life.
Furthermore, our two-way coupling scheme has a time-step limitation. Because we use an implicit-explicit scheme for the coupling, the simulation can become unstable when a large time step is used. This choice is a balance between performance and numerical robustness, since a fully implicit coupling scheme would be very costly because of the large number of collisions between hairs.
It is also difficult to reconstruct the liquid surface robustly, since the height field of liquid around hairs can be very thin compared with the bulk liquid, which challenges the currently popular surface reconstruction algorithms for liquid (we use Houdini, commercial software widely adopted in the VFX industry, for surface reconstruction and rendering).
Finally, as mentioned, our method needs to be adapted before it can be applied to reduced or guide-hair settings. Hence for real-time applications with large numbers of hairs, especially video games, there are still unknowns to explore.
As mentioned, although we have achieved a great performance improvement for hair-hair collision, our system currently still targets high-quality offline simulation in the VFX industry rather than real-time applications. It could also be useful to video game studios for simulating liquid-hair effects in their CG movies or trailers. We also try to offer some thoughts for future work on simulating massive liquid-hair interactions in real time. High-quality simulation of liquid or hair is still expensive and rarely seen even in AAA games, so there is still a long way to go for researchers and engineers in the video game industry to recreate this physics.
Currently, several major VFX and animation studios are interested in our technique and plan to integrate it into their simulation engines for future blockbuster films. One of them has invited me to help with the implementation. It would be interesting to add more parameters to make it more controllable for artists, should there be such a need.
The field of computer graphics has been developing for almost half a century. Nowadays people watch films with awesome visual effects and play games with plausible rendering. However, we can still feel the difference between real life and these films or games. Those differences mostly exist in the details. Taking a slower pace and a closer look, human beings can easily spot those small differences and realize that the world in a game or movie is artificial. My personal feeling is that recreating those details more efficiently will be a trend in future computer graphics research: although we have done well so far at the ordinary scale, going deep into microscopic details can be complicated and will require new ideas.
A closer look at the hair flip example shown above, where many details are resolved
Our simulation of wet hair is only a tiny part of the preliminary steps people have taken in this direction. There are still plenty of ideas to draw from: for example, the intricately patterned fibers of your towel, the exudation of sweat from your skin pores, or the sap in the muscle fibers of your steak. Recreating the dynamics of these details efficiently is the future task for researchers and engineers. The next revolution of the industry would be creating a world that people can enjoy as much as, or more than, real life, where our minds are digitized and physically liberated from our bodies. We will go beyond virtual reality, and the words "virtual" and "real" will lose their meanings as the difference between them is eliminated.