The Process of Mastering Unity for VFX & Simulation

Mirza Beig showed us his cool effects and simulations, recalled his journey with Unity and its Shuriken particle system, and shared some tips for VFX artists.

Introduction

Hello! My name is Mirza. First, I'd like to say: thanks for having me.

It may surprise some, but my work experience and history are less in video games (for entertainment) and more in healthcare and science communication via interactive media.

I say ‘interactive media’ because my projects and roles have been diverse, though most have centered on being a programmer.

The vast majority of my professional projects have involved some kind of "special" hardware in one way or another, and most, due to their nature, are behind NDAs.

These have been related to biofeedback, rehabilitation (stroke patients, children with disabilities...), training for virtual surgery, and/or a range of XR and display devices (Kinect, Rift, Quest, Leap Motion, Looking Glass, NeXus, NeuroSky, etc.).

Unity

I've been using Unity regularly for over a decade now. I still have a post from around the time I started using it!

10 years ago, 2 in the morning

We can rewind to autumn 2013: a project deadline was approaching.

My original plan was to recycle a small framework I had already written for interactive 3D apps using OpenGL, but I quickly realized there wouldn't be enough time.

I had heard of Unity before (during the 2.x and 3.x era), and had worked in teams that used it, but was not directly involved with the engine. My only 'hands-on' experience up to that point with Unity was helping port libraries via DLL interfaces.

It was a surprising discovery, seeing how Unity seemed to handle all the aspects of game development I was used to having to assemble 'from scratch'.

Physics, audio, particles, animation, lighting... it was all just there, it worked, and I could extend the tools as I wished. I watched the Roll-A-Ball tutorial to get started. C# was easy to pick up coming from C++, and within hours, I was finished, maybe to my own surprise.

If it doesn’t exist, you can make it yourself in Unity, like this stylized volumetric fog for URP:

The project delivery was well received, but I was already hungry for more. I saw massive creative and career potential with this tool.

Of special note to me was Unity's built-in particle system, Shuriken.

Love at first sight

Shuriken is Unity’s non-GPU particle system. Because it’s simulated on the CPU, it doesn’t have the raw particle throughput of VFX Graph, but it brings its own unique advantages. For example, the built-in particle system offers an intuitive module-based workflow (vs. VFX Graph’s node-based editor) and excels at detailed control over individual particles through custom scripting.
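As a minimal sketch of that per-particle control (my own illustrative class and field names, not code from any of the projects shown here), a MonoBehaviour can copy Shuriken’s live particle buffer, modify each particle, and write it back — here, tinting particles by their speed:

```csharp
using UnityEngine;

// Hypothetical sketch: per-particle control with Unity's built-in (Shuriken)
// particle system. Each frame, the live particle buffer is read, every
// particle is recoloured based on its speed, and the buffer is written back.
[RequireComponent(typeof(ParticleSystem))]
public class SpeedTint : MonoBehaviour
{
    public float maxSpeed = 10.0f;         // Speed mapped to the end of the gradient.
    public Gradient colourBySpeed;         // Colour ramp, slow -> fast.

    private ParticleSystem ps;
    private ParticleSystem.Particle[] particles;

    void Awake()
    {
        ps = GetComponent<ParticleSystem>();

        // Buffer sized to the system's particle limit.
        particles = new ParticleSystem.Particle[ps.main.maxParticles];
    }

    void LateUpdate()
    {
        // Copy the currently-alive particles into the managed buffer.
        int count = ps.GetParticles(particles);

        for (int i = 0; i < count; i++)
        {
            float t = Mathf.Clamp01(particles[i].velocity.magnitude / maxSpeed);
            particles[i].startColor = colourBySpeed.Evaluate(t);
        }

        // Write the modified particles back into the system.
        ps.SetParticles(particles, count);
    }
}
```

`GetParticles`/`SetParticles` is the standard read-modify-write pattern for scripting Shuriken; the same loop can just as easily adjust positions, velocities, or lifetimes.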

A lightning effect created with multiple ‘Shuriken’ particle systems (WebGL demo, from my Lightning VFX asset).

The same lightning prefab instantiated in a real 3D game environment.

It's interesting to note that Shuriken is Unity's replacement for its legacy particle system, and it was still new(ish) when I entered mainstream Unity development with 4.x. I had experience writing particle systems for games, but I had never used anything as easy and versatile.

An entire galaxy – animated and made of particles:

Comets and fireworks

Visual effects have interested me for as long as I can remember. In film, animation, motion graphics, video games... all of it!

Several colourful cinematic explosions with particle lighting, smoke, and physics (free download).

And particle systems are just so 'fun' to play with. Instant, physics-based feedback, bright lights, colours, and more. All of these things together, with the right timing and composition, can spark a range of emotions in audiences, from quiet tranquillity to sheer exhilaration.

Creepy particle tentacles, ‘feeling’ the black orb using particle force fields.

At first read, it may seem superficial, but this is not the true beginning of my journey.

For that, we’ll rewind even further…

In the beginning, I knew nothing

I remember wondering where I'd go next in the video games I played in my childhood. Books and movies had the same effect (as they do on people), but they lacked interactivity.

A still from the digitization scene in Tron (1982) – for reference.

Tron-like pixelated dissolve effect, fully procedural and controlled via shader:

The next screen, page, or shot. Island, house, or any other place... where would this adventure take me?

Naturally, I gravitated towards adventure games, which involved the resolution of an interesting story enriched with memorable characters, interactions, and fantastic settings/locales.

Over 20 years ago, I had a memorable dream where I pictured the unseen areas of a game, past the spot where I was stuck (fringe logic was a frequent challenge of classic adventure games).

It was a beautiful map, with several imagined islands from Monkey Island 2 (1991)

…all wrong, of course, as I don't seem to have ESP in these matters.

It highlights a point: this dream was a product of my curiosity about the real world.

What would the next place I went to look like? Would it be a far-away, exotic land? How do I get in through this locked door, and who or what could be waiting for me on the other side? 

Wait, why is it so difficult to get in, anyway?

…and what do all of these words mean? (I did not understand English at the time...)

Child-me was very much interested in these questions. Games like Kingdom Hearts (2002), in full 3D and with exceptionally stunning art direction, added a (literal) new dimension to this deep curiosity.

Kingdom Hearts was a strong influence on my VFX journey. This post is from 2015.

In fact, prior to Kingdom Hearts, I didn't even know how far 3D video game graphics had come.

Early exposure to other games like Final Fantasy X (2001) and Prince of Persia: The Sands of Time (2003) unveiled the apex of real-time visual technology of that era. Features like over-the-top summon animations, spectacular special attacks and visual effects, advanced real-time/pre-rendered cutscenes, and prominent bloom post-processing that cast a soft, dreamlike feel over the entire game (looking at you, Prince of Persia) held my fascination.

Various stages of lighting and post-processing in a Unity scene.

"But how does it work? What makes it tick? What else is there?"

Even in the real world, around this same time, I had an obsession with customizing Beyblades with miscellaneous items and taking apart machine-like things so I could see how it all worked.

Toys, watches, cameras, batteries, motors... I needed to understand.

Scenic views of a quiet early 19th-century London from high above at night during a full moon, and the sweeping, far-reaching landscapes of Africa glimpsed from Deep Jungle's treehouse in Kingdom Hearts, were compelling attractions, despite the game's abundant invisible barriers.

This mirrored my initial ignorance of the technical details of how video games are made. I would imagine how the developers would need to account for animations through 'volumetric' space, where every possible state is 'programmed in' (whatever that might mean to a child).

Sure, my understanding at that time was practically nonsense, but the curiosity was there.

I’ve learned a lot about gamedev by reconstructing existing mechanics from games (and building on them).

Then I started learning…

Sometime later, I figured out that I could edit the plain text *.config files for Civilization II to create custom scenarios, factions, and (im)balancing tweaks. Then, I found the Halo PC demo, and from there, I moved on to the full version of Combat Evolved and Custom Edition.

This is where I picked up some modding experience and finally started to grasp how 3D games were made. Around this same time, I also picked up some Flash (MX?) skills and tried ActionScript and HTML 4 (these were actually required by the school program).

I’d stay in the computer lab during lunch, immediately after my media arts class, eyes glued to the screen, mesmerized by what I could do with Photoshop in image editing and graphic design.

These skills came in handy for my earliest VFX work (largely made up of textures and flipbook animations).

You can do a lot with Unity’s built-in particle system, without ever having to touch (custom) shaders.

On the opposite end, you can have particles that are entirely rendered in-shader, their shapes defined by math.

You can keep going until the whole world is made of particles (this one is VFX Graph).

If there was the opportunity to submit an assignment as multimedia, I would take it, using it as an excuse to create machinimas and live-action shorts with Vegas and After Effects.

These early skills would soon come in handy for my first co-op/professional work during high school in graphic design and web development. This time, I was led from script-based automation to Visual Basic, and then to C/C++ proper, where most of my learning came from working on my first (text-based, multimedia) video game.

This brings back memories.

As it turns out, math is useful in life after all, especially in gamedev, shaders, and tech art.

Like pieces of a puzzle

This is when my mix of skills made sense. 

It was an unbelievable moment of clarity, as I entered the world of gamedev through what seemed like a series of unconnected events in my life. I might have pursued a career in criminology, forensics, or even psychology, but the moment I learned about "game development" as a viable professional option, it was game over.

This was something I wanted to do, even though I wasn't sure I'd be any good at it.

Little did I know that my disparate blend of skills would eventually synchronize in perfect harmony, unified by my work in visually engaging, interactive software development.

All those days/nights, in and out of school, tinkering with Photoshop and After Effects. This stuff was actually useful! And people were willing to pay me for my skills.

I learned to digitally vaporize the ocean with an electric particle comet in After Effects in mid-2012.

Wait, it’s all a movie set? Always has been…

One day, I saw someone demonstrate a glitch in Kingdom Hearts's last visit to Destiny Islands. Sora jumped up on a shack in a corner and ended up breaking out of the invisible game boundary. I realized I could do this in any game using memory editing.

I could break past the invisible barrier anytime, anywhere. It felt like a superpower: these 'living' worlds were free to explore. I could finally see all of Africa from Deep Jungle, and all of London from Neverland (both in Kingdom Hearts).

Breaking out of the Deep Jungle waterfall cave in Kingdom Hearts.

But time and time again, one broken barrier after another... I saw nothing. I did this with several games I had played to try and understand how they were made.

Outside the gates at night in Olympus Coliseum.

It was all a movie set. Every time.

Final Fantasy X’s opening level, the futuristic city of Zanarkand… is mostly just impostors in the distance.

You were only shown what you were supposed to see; there was barely anything else there.

"…well, duh? What did I think I’d find?"

But I had to see the movie set with my own eyes, so I’d know what it really looks like.

This didn't (entirely) 'ruin' games for me. I still found it amazing how developers constructed these ‘illusions’, and appreciated the technical-artistic details of their implementation.

Galaxy VFX breakdown, tutorial here.

Beauty in nature and the design of natural systems

If you can bear with the philosophical babble, it's interesting to think about, isn't it?

We enter this world not knowing anything at all, and we learn, and we're taught.

I remember being amazed at how our own bodies work: reading about plants, animals, organs, and cells. We are ourselves miraculous machines, more advanced than any human technology.

Stylized visualization of neurons in Unity, from a touch-based haptic VR experience for Multiple Sclerosis (MS).

More of the VFX, shader, and interactive XR/VR work I did on the MS project.

But you can’t really go around taking humans apart and putting them back together again, at least not alive. Even well-documented surgeries have risks. I’ve been with medical students at work, chipping away at cadavers in a hospital basement, practicing for the ‘real’ thing with the living.

I much prefer the smell of new electronics over corpse-preserving formaldehyde.

A beautiful organic fractal animation shader by zozuar, stylized and lit in Unity with HLSL.

Computer graphics are safer to work with and easier to explore: no humans have to be disassembled and put back together again.

It’s all code/instructions, and pixels, like this visceral, pulsating alien biology (scentless).

Even so, I’ve often found myself simply observing the natural world, or having to learn more about it through additional materials (books, articles, studies, and more).

Generating increasingly complex lightning fractals → animated into colourful electric arcs.

Procedural lightning on a spline.

The challenge is in translating real-world mechanics/systems into something functional, sufficiently performant, and convincing enough for a given audience for the purposes of the application.

An AAA-quality (it has to look ‘next-gen’) procedural water shader, running at 60fps+ on mobile (S22).

I know this feels like I've fallen into a tangent, but this is all part of what drives me.

By the time I was using Unity, I had already been deconstructing video game mechanics and features to better understand them. However, because I’d have to make most, if not all, of the underlying tools ‘from scratch’, progress was either slow or simplified down to primary mechanics.

Unity would change much of that.

A Virtuous Cycle

Even when it wasn't work-related, I was fortunate enough to still have the time and resources required to freely explore gamedev with Unity. This meant a lot of time deconstructing existing mechanics, effects, and features from games that had caught my attention.

The opportunity to implement custom lighting often catches my attention.

A straightforward cycle of "I want such-and-such feature. How can I do it in Unity?"

Research > implement > tweak, modify, learn, improve, advance > repeat.

I’ve worked on several water shaders over the years. Each time I pushed myself further and did something new.

Once you find sustainable motivation, you can simply keep on going. I suppose that applies to anything in life!

Education

While I eventually ended up with a Master of Science degree (Computer Science), I don't consider the courses I've taken to be particularly relevant. Being mostly interested in freely learning and exploring at my own pace, I’d sit in classes I wasn’t signed up for, and skip others.

For the curious, here are some I remember from my actual syllabus:

  • Several programming courses, all C/C++
  • Data Structures and Algorithms
  • Networking (as in, low-level computer and online technology)
  • Computer Architecture (hardware studies, assembly programming…)
  • Game Design, HCI
  • 3D Modelling (w/ Maya - which I haven’t touched in many years)
  • Multiple business courses (accounting, marketing, business for games)
  • Graphics Programming (shaders, rendering, and simulations)

The academic and competitive setting (with/against other students), along with some very helpful professors, was more important in getting me to where I am now.

Those same prerequisite skills from high school in basic scripting are applicable today.

Iteration

One of Unity's main strengths lies in how easy it is to iterate on ideas. A lot of the basics are handled for you, so you don't have to reinvent the wheel.

Parts of the API are exposed and set up specifically to let you manipulate the data in existing systems while they run in real time: particles, physics, animation, and more.

For example: you can add custom behaviours to particles for steering and per-particle tracking.

Transforming particles into violent, anti-cube missiles with custom steering behaviours and object tracking.
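A hedged sketch of what such a steering behaviour could look like (my own illustrative names and values, not the actual code behind the clip): each frame, every particle's velocity is blended toward a target Transform, turning the system into simple homing projectiles.

```csharp
using UnityEngine;

// Hypothetical sketch: custom steering for Unity's built-in particle system.
// Assumes the system simulates in world space; each particle's velocity is
// steered toward a target Transform every frame.
[RequireComponent(typeof(ParticleSystem))]
public class ParticleSeek : MonoBehaviour
{
    public Transform target;           // Object the particles home in on.
    public float steerStrength = 4.0f; // Higher = tighter turns.

    private ParticleSystem ps;
    private ParticleSystem.Particle[] particles;

    void Awake()
    {
        ps = GetComponent<ParticleSystem>();
        particles = new ParticleSystem.Particle[ps.main.maxParticles];
    }

    void LateUpdate()
    {
        if (target == null) return;

        int count = ps.GetParticles(particles);

        for (int i = 0; i < count; i++)
        {
            // Desired velocity: current speed, redirected at the target.
            Vector3 desired = (target.position - particles[i].position).normalized
                              * particles[i].velocity.magnitude;

            // Blend the current velocity toward the desired (homing) direction.
            particles[i].velocity = Vector3.Lerp(
                particles[i].velocity, desired, steerStrength * Time.deltaTime);
        }

        ps.SetParticles(particles, count);
    }
}
```

Swapping the seek vector for avoidance, wander, or flocking terms gives other classic steering behaviours from the same loop.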

With Unity, I can focus on the ‘fun’ mechanics. Like passive interactive particles that sting nearby players.

Another big plus with Unity is its "develop once, publish everywhere" strategy.

Realistically, you'll likely have to do (at least) some additional preparation for cross-platform development (performance, load times, shader features/levels, storage), but the most tedious, ‘non-creative’ things are done for you and out of the way.

Advice

This is the advice I’d give to myself as a VFX and tech artist.

  • I'd recommend simply being active:
    • Do as much as you can, within reason (don't burn yourself out!).
      • Build your portfolio by posting images and videos of your work.
      • Document your progress and explorations. It may help someone down the line and gives people a practical reason to visit and check out your content.
    • Put yourself out there, market your skills.
      • Show what you can do.
        • Art is a visual medium.
        • Present it in a way that’s easy to digest.
          • Short clips, tips, gifs, image/infographics.
      • Show how you do it.
        • Prove your process.
        • Share your knowledge.

Your journey starts now

If you’re just getting started with Unity, remember that it’s one of the most popular game engines and has been around for almost 20 years. There are many great resources scattered across the web for just about anything you’d want to do.

  • There are also some great community resources, via the r/Unity3D subreddit and Unity forums.
    • If you ask questions, very knowledgeable people may answer, including Unity employees on the official forums. The latter is especially important when it comes to figuring out the behaviour of certain systems within the editor.
  • While Google is a tried-and-true resource, I’m seeing a lot of potential in GPT ‘AI’ models.
  • I’ve configured one on ChatGPT specifically for Unity and technical art (shaders, VFX):
    • You can try it here.
      • It’s a public GPT, but using customized models is currently a feature of ChatGPT Plus.

Just be wary that these AI models will sometimes confidently deliver wrong answers.

Engaging with professionals, people with more experience than you have, is also a great way to learn.

Out in the wild

I’m most active on Twitter/X these days, where I post tricks, tips, and tutorials on Unity, game development, After Effects, shaders, tech art, and of course, analysis of game visuals + mechanics.

You can also find my longer video tutorials on YouTube.

I have some exciting projects lined up!

Bioluminescent water with fluid shader.

Stylized volumetric fog with indirect/baked lighting and procedural fluidic motion.

Post-processing animated sketch-outlines shader.

Multi-input GPU fluid simulator (with compute shaders).

Mirza Beig, Technical Artist

Interview conducted by Theodore McKenzie
