Destroying an Ivy Tower with Max and Maya

21 December, 2017

A little overview of some of the amazing stuff you can blow up in 3D with two great plugins: FractureFX and PhoenixFD.


My name is 昱緯 江 (Yu-Wei Chiang), or Albert. I’m from Taiwan – the most beautiful country! I’m an Effects TD. I studied Computer Science before I came to Gnomon. I think it’s the long hours I spent in front of the monitor watching random stuff on YouTube that got me into the VFX world. I started out by teaching myself compositing. From modeling to compositing, I do a little bit of everything, but effects is now the part I enjoy the most.



For me, FractureFX and PhoenixFD are similar in terms of iterating on simulations. For example, in the early stage of this project, I only ran fairly low-res simulations in both of them. In FractureFX, once my event network is set up and everything is working perfectly, I can simply fracture the mesh into more pieces and everything works just the same. The same rule applies to PhoenixFD: once I have the shape, speed, and motion I want, I just hit resume and get a nice high-resolution fluid sim.

FractureFX really is an awesome tool because it’s extremely customizable. I can set up secondary breakups and generate particles for fluids based on speed, mass, or a region that I define.


For me, there are two major mistakes that I often make. The first has to do with getting simulations to look right: I sometimes start a simulation without looking at references. As a result, the simulation comes out wrong in some way and I waste a lot of time.

The second is spending way too much time on small stuff and not completing the shot. Sometimes I get so caught up in details that people might never see in the final comp that I spend hours getting them right and never start focusing on the hero effect.

Working efficiently

In order to work efficiently, I always break the shot into small steps and set priorities. I didn’t even think about how I was going to approach the lightning-like beam in the center before I finished the ground fracturing. Basically, I don’t want to lose focus or stress myself out.

I almost never jump straight into my shots. Instead, I usually start at a smaller scale. For example, before I did the ground shattering, I played around with the settings on just one simple sphere. I made sure I understood the tool first: how to set up events for secondary breakups, interior details, getting UVs in and out, particles for the following fluid sim, etc. I didn’t want to run into any major problems once there were thousands of pieces in the scene. Then I blasted through the whole shot with a fairly low-res simulation and did a fast comp, so I could get a good picture of what the final result might look like. After that, I was pretty much ready to go, and because of all that preparation, everything was a lot easier from then on.

Ivy shot

For the ivy shot, the core nodes I would say are the Solver and IsoOffset.

The ivy starts with a single point. The Solver node lets me advect the point forward each frame while keeping the old points in place, and basically you get yourself a growth pattern.

In order to make the points stay on the surface of the mesh, I convert the mesh into an SDF (signed distance field) – like a hollow shell. Every frame, if a point leaves or penetrates the surface of the mesh, I slowly bend it back to the surface.


First things first: get my tower in.

Since the whole project is based on the growth simulation, I have to start on that.

First of all, I need to turn the tower into a shell for the ivy to grow on. I wrap the tower with a sphere using a Ray SOP. And instead of moving and scaling the sphere into position myself, I just have a Match Size SOP do it for me.

I simply use IsoOffset to turn the whole geo into an SDF, and I’m ready to go.

This is the setup for the ivy without branches. Basically, the Solver runs my setup every single frame. It all starts out with one “active” point. I adjust the active point’s normal with a noise pattern and advect it forward along that direction (Change_N, advect). At the same time, I leave the exact same point behind where it was, changing its status to “passive” (attributewrangle_passive). In the end, I simply merge the two points together and output them (merge2). This process repeats over and over after the starting frame; the only difference is that I must delete the “passive” points first (blast_passive).
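The active/passive loop can be sketched outside of Houdini. Below is a minimal 2D Python toy, not the actual Solver network; the node names in the comments refer back to the nodes mentioned above, and the step size and noise range are arbitrary.

```python
import math
import random

def grow(steps, step_size=1.0, seed=0):
    """Toy growth solver: one 'active' tip advects forward along a noisy
    heading each frame, leaving a 'passive' copy behind; the trail of
    passive points forms the vine."""
    rng = random.Random(seed)
    active = (0.0, 0.0)      # current tip of the vine
    heading = 0.0            # direction the tip is facing (radians)
    passive = []             # points left in place each frame
    for _ in range(steps):
        # leave the exact same point behind (attributewrangle_passive)
        passive.append(active)
        # jitter the direction with noise (Change_N)
        heading += rng.uniform(-0.5, 0.5)
        # advect the active point forward along that direction (advect)
        active = (active[0] + step_size * math.cos(heading),
                  active[1] + step_size * math.sin(heading))
    # merge passive trail and active tip (merge2)
    return passive + [active]

trail = grow(50)
```

Running it for 50 steps yields 51 points: the 50-point passive trail plus the live tip, which is exactly the growth pattern the Solver accumulates frame by frame.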

This is a simple example of how I managed to make the points stay on the surface.

volumesample (red) tells me whether a point is inside or outside of the mesh.

volumegradient (yellow) gives me the vector between the point and the mesh.

After combining this information (blue), what I have is a vector that always points toward the surface of the mesh, which I can use to bend the points back to the surface.
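The same sample/gradient/bend idea can be sketched in plain Python (not VEX), using a hardcoded sphere SDF in place of the converted tower; `volumesample` and `volumegradient` behave analogously in Houdini, and the `strength` value here is arbitrary.

```python
import math

def sdf_sphere(p, radius=1.0):
    """Signed distance to a sphere: negative inside, positive outside
    (the role volumesample plays)."""
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - radius

def sdf_gradient(p, eps=1e-4):
    """Finite-difference gradient of the SDF: a vector pointing away
    from the surface (the role volumegradient plays)."""
    g = []
    for i in range(3):
        q1 = list(p); q1[i] += eps
        q2 = list(p); q2[i] -= eps
        g.append((sdf_sphere(q1) - sdf_sphere(q2)) / (2 * eps))
    return g

def bend_to_surface(p, strength=0.5):
    """Combine both: move the point part of the way back toward the
    surface each frame, P -= strength * distance * gradient."""
    d = sdf_sphere(p)
    g = sdf_gradient(p)
    return [p[i] - strength * d * g[i] for i in range(3)]

p = [2.0, 0.0, 0.0]
for _ in range(20):
    p = bend_to_surface(p)
```

Applied every frame, the point slides back onto the shell: starting at distance 1.0 from the sphere, twenty iterations leave it essentially on the surface.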

Now I can start fracturing the tower. The following image illustrates the points I scattered for fracturing. Since I’m going to fracture along the vine, I need more points around it. I simply use a PointsFromVolume SOP to generate a bunch of points from the vine geo. The reason for using PointsFromVolume instead of IsoOffset is that I find it hard to do an IsoOffset on thinner objects.
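The idea of concentrating fracture points near the vine can be illustrated with a toy density-driven scatter (plain Python; the region names and point counts are made up for illustration, not taken from the scene).

```python
import random

def scatter(region_density, base=10, boost=50, seed=0):
    """Toy density-driven scatter: regions near the vine (density 1.0)
    get many more fracture seed points than the rest of the tower."""
    rng = random.Random(seed)
    points = []
    for region, near_vine in region_density.items():
        n = base + int(boost * near_vine)          # more seeds where density is high
        points += [(region, rng.random()) for _ in range(n)]
    return points

# hypothetical regions: right next to the vine vs. a far wall
pts = scatter({"near_vine": 1.0, "far_wall": 0.0})
```

With these illustrative numbers, the vine region gets 60 seed points against the far wall's 10, so the Voronoi fracture ends up much finer along the growth path.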

After fracturing, I have to make sure the pieces fly in the direction that I want (which is pointing outward, but not too uniformly).

I start out with just a noise pattern on the velocity. Then I blend it with a simple sphere with all of its velocities pointing outward.

This is the result.

Then I use an AttributeTransfer to transfer the velocity back onto the tower. For the velocity transfer to work, the point count and order must be the same.
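A rough Python stand-in for that velocity setup: a radial “sphere” velocity blended with noise so the directions point outward but not too uniformly. The `blend` weight and noise amplitude are arbitrary illustrative values, not taken from the actual scene.

```python
import math
import random

def outward_v(p, speed=1.0):
    """Radial velocity: the point's position normalized, like a sphere
    whose velocities all point outward from its center."""
    mag = math.sqrt(sum(c * c for c in p)) or 1.0
    return [speed * c / mag for c in p]

def noisy_v(rng, amp=0.3):
    """Stand-in for a noise pattern on the velocity."""
    return [rng.uniform(-amp, amp) for _ in range(3)]

def blended_v(p, rng, blend=0.7):
    """Mostly outward, with some noise mixed in so the directions
    are not too uniform."""
    out, n = outward_v(p), noisy_v(rng)
    return [blend * out[i] + (1 - blend) * n[i] for i in range(3)]

rng = random.Random(0)
v = blended_v([0.0, 5.0, 0.0], rng)
```

Transferring this back onto the tower by matching point number is then just `tower_v[i] = sphere_v[i]`, which is exactly why the point count and order have to line up.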

In my setup, everything is inactive at the beginning. In the SOP Solver under the DOP network, I bring in the vine as a group bounding object. While the vine is growing, it adds new pieces to the group (highlighted in green).

Then I update the pieces in the group to “active” and add the velocity that I predefined (red).
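That activation logic can be sketched like this (a 1D Python toy, not the actual SOP Solver; piece positions, the `reach` radius, and the predefined velocities are all made up for illustration).

```python
def step_activation(pieces, vine_front, reach, predefined_v):
    """Toy version of the per-frame logic: once the growing vine front
    reaches a piece, move it into the active group and give it its
    predefined velocity. Everything starts inactive."""
    for piece in pieces:
        if not piece["active"] and abs(piece["pos"] - vine_front) <= reach:
            piece["active"] = True                    # add to the group (green)
            piece["v"] = predefined_v[piece["id"]]    # assign predefined v (red)
    return pieces

# five pieces spaced one unit apart, all inactive, with a velocity lookup
pieces = [{"id": i, "pos": float(i), "active": False, "v": 0.0} for i in range(5)]
vel = {i: 2.0 + i for i in range(5)}

# the vine front advances one unit per frame for three frames
for frame in range(3):
    step_activation(pieces, vine_front=float(frame), reach=0.5, predefined_v=vel)
```

After three frames, only the three pieces the front has passed are active and carrying their stored velocities; the rest of the tower still sits untouched, which is what keeps the destruction synced to the growth.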

Finally, after caching out the simulation, I use a Transform Pieces SOP to transfer the pieces with UVs back onto the sim.
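What Transform Pieces does per piece, reduced to 2D (a Python sketch; in Houdini the translation and rotation come from each sim point's position and orientation, and the values here are made up).

```python
import math

def transform_piece(points, translate, angle_rad):
    """Apply one sim point's rigid transform (rotation + translation)
    to the matching high-res piece's points."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return [(c * x - s * y + translate[0],
             s * x + c * y + translate[1]) for x, y in points]

# a two-point "piece", moved to (2, 3) and rotated 90 degrees by the sim
piece = [(0.0, 0.0), (1.0, 0.0)]
moved = transform_piece(piece, translate=(2.0, 3.0), angle_rad=math.pi / 2)
```

Because only one transform per piece is stored, the low-res sim can drive the full-detail, UV'd geometry without ever simulating it.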

I know this is kind of a cliché, but always look at your references. They don’t necessarily have to be from real life; they can also be from big movies and games. It is very important to do so because it will save you a lot of time, like A LOT!!

Physical correctness is one of the most important things of all. Basically, that’s the key to telling your audience the scale of your story. For example, if I have my fractured pieces falling too fast, it’s going to feel like a small tower, which would feel a lot less dramatic.
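To put a number on that scale cue: free-fall time grows with the square root of height, so how long debris takes to land tells the audience how tall the structure is. This is simple kinematics, not tied to the actual shot; the 50 m tower height is an arbitrary example.

```python
import math

def fall_time(height_m, g=9.81):
    """Time for a piece to free-fall a given height: t = sqrt(2h/g)."""
    return math.sqrt(2 * height_m / g)

def implied_height(fall_seconds, g=9.81):
    """The inverse: if debris lands after t seconds, the audience reads
    the drop height as h = g * t^2 / 2."""
    return g * fall_seconds**2 / 2
```

Debris from a 50 m tower takes about 3.2 s to land, while debris that lands in just 1 s reads as roughly a 5 m drop, which is exactly why pieces that fall too fast make the tower feel small.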

For me, I approach effects thinking like a compositor. Before I render anything, I always think about what could be done in comp, and how and what passes I should set up for comp to save me time. For example, lights interacting with volumetric renders are the most painful. I always try using simple Maya default point lights and spotlights with pure RGB colors to render my fluids, and then simply shuffle them out and grade them into any color, intensity, and direction I want. Another example: I render the GI lighting from the fire or explosion with only a simple Lambert shader on my objects (right) to save render time, and use the diffuse pass on my geo pass to fake the shading back in (left). POWER OF COMPOSITING!!!
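The RGB-light trick works because each light is rendered pure red, green, or blue, so each channel of the render isolates one light's contribution, and the comp can regrade each light freely afterward. A per-pixel Python sketch (the contributions and colors here are illustrative numbers, not from the project):

```python
def regrade(light_contribs, new_colors):
    """Recombine per-light contributions that were rendered into
    separate channels: out = sum(contribution_i * chosen_color_i)."""
    out = [0.0, 0.0, 0.0]
    for contrib, color in zip(light_contribs, new_colors):
        for c in range(3):
            out[c] += contrib * color[c]
    return out

# one pixel: the red-lit pass contributes 0.8, the green-lit pass 0.2;
# in comp we regrade them to a warm orange and a cool blue light
pixel = regrade((0.8, 0.2), [(1.0, 0.5, 0.1), (0.2, 0.3, 1.0)])
```

Since the recombination is just a weighted sum, color, intensity, and even which light dominates can all be changed in comp without re-simulating or re-rendering the fluid.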

I didn’t figure all of this out by myself. I must thank all my instructors from Gnomon, especially Wayne, my demo reel teacher; Peter, my Houdini 3 teacher; and Berkstein, my Dynamics 4 teacher. It is because of their help and the mindset they instilled in me that I was able to complete all these shots.

Albert Chiang, a VFX Artist

Interview conducted by Kirill Tokarev
