Love your stuff! Thanks for the info. You achieve surprising graphics with Unity, which is great news.
Are those images related to C&C Generals 2 or Zero Hour?
@Tristan: I studied computer graphics for 5 years. I have been making 3D art full time for about half a year now, but I had some experience before that. It's hard to focus on one thing; it took me half a year to understand most of the vegetation creation pipelines. To speed up your workflow, maybe spend a bit of time with the Megascans library. Making 3D vegetation ranges from going outside for photoscans to profiling your assets. Start with one thing and master it.

@Maxime: My technique and Z-passing on distant objects are quite similar (apart from the higher vertex count). I would start using this at about 10-15m+. Within this inner radius you are using (mostly high) cascaded shadows; the lower the shader complexity in these areas, the fewer the shader instructions. When I started this project, the polycount was a bit too high. Now I have found the best balance between a "lowpoly" mesh and the least possible overdraw. The idea behind this technique is simply to accept a slightly higher vertex count on the mesh in order to reduce quad overdraw and shader complexity. In terms of visual quality, a "high poly" plant will always look better than a blade of grass on a plane.
We talked with Michał Piątek about how he created his entry for The Great Transmutator VFX contest.
Initially, I did not want to make VFX at all. I always wanted to become a film director. When I was 15 or so I started making silly short movies. My idea was that I should make a lot of them to gain experience and increase my chances of passing the exams for a film school. I started with the comedy genre and slowly drifted into the action genre with each movie I made.

Then I discovered four things at roughly the same time: the Freddie Wong and Corridor Digital channels on YouTube, the “Escape from City 17” fan movie, and Andrew Kramer’s videocopilot.net. I was shocked by what these people were capable of. The Corridor guys and Freddie Wong were doing exactly what I wanted: short action movies packed with low-budget but high-quality VFX. Andrew Kramer made me realize that great effects can be made by a single person and that I could learn to do that for free. And then Escape from City 17: it was ten times better than anything Freddie or Corridor could do, and it was still made by just two people. And they spent zero dollars on it. I was blown away. I thought that if I wanted to impress people, if I wanted to make it into a film school, I needed great VFX in my movies.

A few years later I finished my first music video. It looks very corny and cringeworthy these days, but I put all the knowledge I had into making it. I tried to cram in as much VFX as I possibly could. This was very eye-opening, and I remember enjoying creating the special effects more than anything else.

After finishing high school I tried my strength at the film directing department of one of the Polish film schools. I failed, so the next year I tried the animation and VFX department at a different school, and I failed again. This made me think that maybe this was too hard at that point in my life and I should try something else. I found a job offer in Warsaw at a new company called CreativeForge Games. They were looking for a VFX artist with any amount of experience. I thought that if I could do effects for movies, maybe I could do the same for games.
I sent in an application, they liked what they saw in my portfolio, and so my gamedev journey began.
I think the general idea of creating effects is the same. Both in films and in games, what matters at the end of the day is a great-looking effect that is both artistically and technologically groundbreaking. And what can be done in either medium is limited by the technology available and by time constraints. But there are a lot of differences too. Effects in games are not as restricted by budget. A lot of effects in movies are practical, and each practical effect is a major expense that needs to be considered. If you don’t own your render farm, you need to rent one, which costs money. You need to buy very high-spec machines to run simulations for you. This all adds up to a lot of money. In games, by contrast, there is almost nothing stopping us from creating the effect we want except time itself. Tools are so cheap these days that I don’t think cost is a concern anymore. You rarely have to simulate anything, and most of these things can be done on a single high-end home computer.
But the biggest difference, in my opinion, lies in the workflow and in the role of effects. In film, an effect does not have to be functional. You don’t have to make it match specific gameplay requirements. It does not have to reflect any design ideas or be prepared with code inputs in mind. In most cases it needs to serve artistic needs, not functional ones.

In games, that is exactly what VFX artists very often have to do. They not only make effects look good; they also have to incorporate certain design elements into them. There can be a gun with different gameplay states: charging up, ready to fire, overheated, out of ammo, etc. All of these need to be communicated to the player, and very often this is the VFX artist’s responsibility. Think of aura effects in genres such as MOBAs or hack & slash games. They need to tell you visually whether they are offensive, defensive, passive, or active, whether they are healing you or damaging you, and so on. The amount of health your spaceship has left can be communicated by the intensity of the damage effects playing; if it’s a flying fireball, you know you are about to die. In movies you just make an effect that looks great and feels right in that particular scene. It still needs to communicate emotions or ideas, but it does not have to be systemic.

Then again, all the animation principles that apply to film VFX can also be applied to realtime VFX. Anticipation, readable shapes and motion, squash & stretch: all these things still work in games.
When the competition was announced, I started googling for inspiration. I found some great macro shots of acrylic paints. They were creating awesome shapes and colours, and very often they had small air bubbles in them. They really caught my attention, and I started experimenting with ways of replicating these bubbles on a mesh. There are a few possible ways of doing this type of mesh displacement that I am aware of. One is a baked Houdini/Alembic animation, but I had never done that before. I tried blend shapes, but they were very limited in many ways. Then I thought about creating custom textures for vertex displacement, similar to the rain system made by Sebastien Lagarde, in which each channel of a texture represents some piece of data that generates procedural, UV- and mesh-independent rain ripples. That didn’t work very well either and was too time-consuming. With each approach I tried something simpler and simpler. I ended up with a shader which was just bulging the mesh. This can be done with one line of code:
v.vertex.xyz += v.normal * _Amount;
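In context, that line sits in the vertex-modification function of a Unity shader. A minimal sketch (a surface shader with a hypothetical `_Amount` property; not the actual contest shader):

```hlsl
// Minimal sketch: inflate a mesh along its normals.
// _Amount is an exposed material property; the rest is standard
// Unity surface-shader boilerplate.
Shader "Custom/BulgeSketch"
{
    Properties
    {
        _Amount ("Bulge Amount", Range(0, 1)) = 0.0
    }
    SubShader
    {
        CGPROGRAM
        #pragma surface surf Lambert vertex:vert
        float _Amount;

        struct Input { float3 worldPos; };

        void vert (inout appdata_full v)
        {
            // Push every vertex outward along its normal.
            v.vertex.xyz += v.normal * _Amount;
        }

        void surf (Input IN, inout SurfaceOutput o)
        {
            o.Albedo = 1;
        }
        ENDCG
    }
}
```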
This gave me an idea. I thought that maybe I could use Shuriken particles as reference points and bulge the mesh around those points. I also needed a falloff and a strength for each bulge region. Shuriken was perfect for this task, as you can read any particle parameter from code and just send it to a shader. And I did just that: I sent each particle’s size, position and color into a mesh shader to drive my displacement. I limited the number of particles that could do the bulging to 8 for optimization purposes and simplicity. I made a shader function which takes this particle data and bulges the mesh around each particle’s position, with the radius equal to the particle’s size and the strength equal to the particle’s alpha. A by-product of this approach is that it is fully procedural: it works on any mesh that has my shader applied to it. This came in handy later on, as I could just swap the mesh from an ugly cube to a nice torus knot without any problems. You can see how the particles are set up here:
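On the shader side, that function might look something like the following sketch (the uniform names and the exact falloff curve are my guesses; the article only describes the idea). A C# script would copy each particle’s position, size and alpha into these arrays every frame, e.g. via `Material.SetVectorArray` and `Material.SetFloatArray`:

```hlsl
#define MAX_BULGES 8

// Hypothetical uniforms, filled from a C# script each frame.
float4 _BulgePositions[MAX_BULGES]; // xyz = particle world position
float  _BulgeSizes[MAX_BULGES];     // radius of influence (particle size)
float  _BulgeStrengths[MAX_BULGES]; // bulge strength (particle alpha)

void vert (inout appdata_full v)
{
    float3 worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;

    float bulge = 0;
    for (int i = 0; i < MAX_BULGES; i++)
    {
        float dist = distance(worldPos, _BulgePositions[i].xyz);
        // Linear falloff: 1 at the particle's center, 0 at its radius.
        float falloff = saturate(1 - dist / max(_BulgeSizes[i], 0.0001));
        bulge += falloff * _BulgeStrengths[i];
    }

    // Same one-line trick as before, now scaled per vertex.
    v.vertex.xyz += v.normal * bulge;
}
```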
The dissolve texture is used in the most classical way possible: a grayscale texture hides or shows the mesh in a more organic way. My effect can be broken into three stages: the torus knot explodes into bubbles, the bubbles travel to a new location, and the bubbles turn into a teapot. I wanted to make this effect as seamless as possible, and sometimes simple is better than sophisticated. So, in order to turn the torus into a bunch of bubbles, I used the dissolve texture to hide the torus mesh, and I used the same texture to reveal the bubbles. Then I did it again with the bubbles and the teapot. The texture itself is not that great, actually; it’s just Photoshop’s Filter > Render > Clouds with some contrast adjustments.
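The dissolve itself is the standard `clip()` pattern in the fragment stage; roughly like this (parameter names are mine, not from the article):

```hlsl
sampler2D _DissolveTex; // grayscale noise (e.g. the Photoshop clouds texture)
float _Cutout;          // animated from 0 to 1 by the particle system

void surf (Input IN, inout SurfaceOutput o)
{
    float noise = tex2D(_DissolveTex, IN.uv_DissolveTex).r;
    // Discard fragments whose noise value is below the threshold:
    // at _Cutout = 0 the whole mesh is visible, at 1 it is fully hidden.
    clip(noise - _Cutout);
    o.Albedo = 1;
}
```

Because the threshold sweeps through the noise, dark regions of the texture disappear first, which gives the organic, patchy transition instead of a uniform fade.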
I made a shader which I used for both the torus and the teapot. It works on any mesh out there, as long as it has proper, tiled UVs onto which I can apply a noise texture. Here are all my exposed settings as seen in the material inspector:
Basically, all of these parameters are driven by a particle system. In the material itself I am only setting default values or multipliers for the particle-driven parameters. I won’t go through every single parameter, but I made a short video showing some of them in action.
I can explode a mesh and apply a fake gravity force to it. The shader squashes the mesh the closer it gets to the ground. This uses a very naive implementation which assumes that the ground is flat. I could improve the shader by using the terrain heightmap as the ground level, etc., but this was not needed for the contest.
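A naive flat-ground squash along those lines could be sketched like this (my reconstruction, assuming a hypothetical ground plane at `_GroundY`):

```hlsl
float _GroundY;   // world-space height of the (assumed flat) ground
float _Collapse;  // 0 = no gravity, 1 = fully flattened onto the ground

void vert (inout appdata_full v)
{
    float3 worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;

    // Scale down the height above the ground plane; vertices are never
    // pushed below it, so the mesh flattens as it approaches the ground.
    float height = max(worldPos.y - _GroundY, 0);
    worldPos.y = _GroundY + height * (1 - _Collapse);

    v.vertex = mul(unity_WorldToObject, float4(worldPos, 1));
}
```

Replacing the constant `_GroundY` with a sample from a terrain heightmap would be the improvement mentioned above.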
The other parts of the shader shown in the video are very simple: a texture-based dissolve and a noise texture sampled in the vertex shader.
One thing you probably noticed is that the distorted mesh is not very smooth. I tried implementing tessellation, but Unity has a bug which prevented me from using tessellation while passing data from the vertex to the fragment program. I would have had to rewrite the whole shader and implement deferred rendering from the ground up to get it running. Obviously I skipped this step and simply subdivided the mesh in 3ds Max.
Now we come to the particle-driven data. I am using Size and Color to drive some shader parameters. I also made a custom module with a few curves; it drives the mesh cutout, a light emission strength multiplier, and the collapse strength (gravity), and it allows me to reference the specific mesh I want to modify.
For the final version I disabled particle rendering, as the particles do not do anything except pass data to the shader. But for breakdown purposes I enabled rendering so you can see what’s happening:
In terms of the motion itself, I wanted to resemble chemical reactions: very explosive and rapid, but smooth, liquidy and soft. I decided to use a mixture of classic particle explosions and a bit of shader work. I used the one-line code trick shown before to explode the mesh, and added particles on top of that for some secondary motion. For the bubbles I used a sine wave to move the vertices and add wobbliness to them. They also squash when they are close to the ground, just like the main mesh. The teapot uses exactly the same shader and setup as the knot, but in reverse: instead of exploding it is imploding, instead of disappearing it is appearing, etc.
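The sine-wave wobble can be as simple as offsetting vertices along their normals with a time- and position-dependent phase (again a sketch with made-up parameter names, using Unity’s built-in `_Time`):

```hlsl
float _WobbleAmp;  // displacement amplitude
float _WobbleFreq; // oscillation speed

void vert (inout appdata_full v)
{
    // Offsetting the phase by the vertex position makes each part of a
    // bubble (and each bubble) wobble out of sync instead of pulsing uniformly.
    float phase = v.vertex.x + v.vertex.y + v.vertex.z;
    v.vertex.xyz += v.normal * sin(_Time.y * _WobbleFreq + phase) * _WobbleAmp;
}
```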
It is a love-hate relationship which gets biased more towards love with each new version of Unity. I started using the engine back in 2012, and at that time it was very basic and limited. It has evolved a lot since then, and by a lot I mean a lot. It has some unique features which I really enjoy. It is fast and very flexible, and creating custom modules is very straightforward and well documented. I don’t think I could have done the same effect in any other engine. Unity has its flaws, but there are no perfect tools; the key is to understand the limitations and know how to work around them, and I think Unity makes that easy.