Creating Story in Cyberpunk Animation for Chasm's Call Challenge

ByeongJin Kim, who won 4th place in Pwnisher's Chasm's Call challenge, shares the creation process behind his futuristic Transfer scene and explains in detail how the VFX were made with Niagara.

Introduction 

Hello, my name is Kim ByeongJin, and I work as a VFX artist based in South Korea. I am also known as HoldimProvae in online communities. I didn’t originally aspire to become a game VFX artist. I majored in 3D game programming at university.

I first developed a serious interest in game art while working on my undergraduate team’s graduation project. As the only team member capable of creating 3D assets, I ended up building all the 3D game assets myself for the first time. That experience sparked my curiosity about game art and inspired me to explore it further. Later, I became deeply fascinated by VFX when I built my own particle system using DX12. Although it was a simple system that only allowed me to input lifetime, velocity, and transform structs to generate flipbook-based particles, I found the process incredibly enjoyable. That experience ignited my passion for in-game VFX.

I believe the most effective way to acquire VFX skills is by facing diverse challenges. Even as a junior real-time VFX artist, I was tasked with creating VFX that interacted with in-game mechanisms using Blueprint. It wasn’t just about displaying visual effects; it involved creating effects that directly influenced and responded to gameplay, which came with many technical challenges. However, thanks to this work environment, I had many opportunities to encounter different cases and conduct R&D within a short period of time.

Outside of work, participating in various challenges also played a significant role in my growth. I’ve taken part in not only the real-time VFX forum monthly sketches but also art challenges hosted by Epic Games, ArtStation, and Riot Games. These experiences provided valuable opportunities to face diverse conditions and requirements beyond company projects, and they were instrumental in helping me improve my VFX skills. If you're interested, I’ve been uploading my previous challenge submissions to my ArtStation.

What attracts me most to the VFX role is that it comes at the final stage of the game production pipeline. As VFX artists, we add the VFX layer on top of all the art team’s work, putting the finishing touch on the visuals of the project. It feels like the icing on the cake. I also love how VFX always brings interesting technical challenges that keep me engaged and motivated.

Currently, I am working at a small game studio in South Korea as the VFX artist for a zombie extraction shooter project called The Midnight Walkers.

Chasm's Call Challenge

I’ve been following Pwnisher’s challenges for a while now. I first discovered them during the Alternate Realities challenge, but I did not participate at the time because I was fully focused on real-time game VFX. I officially started participating with the Eternal Ascent challenge. Around that time, I was teaching myself the full process as a hobby, from building environment assets to animating characters, and I saw this massive challenge as a great opportunity to test my skills. Eternal Ascent, Kinetic Rush, and now Chasm’s Call make this my third time joining a Pwnisher challenge. These are the submissions I created for previous Pwnisher community challenges.

For Chasm’s Call, my reference search began with Rule 3 of the challenge guidelines, which stated: "You must create a sense of depth," and so I focused on the word "depth" first. One artist who immediately came to mind was Max Hay, a 3D environment artist renowned for his exceptional use of depth. While browsing his ArtStation, I came across a piece called "SENSORY SYSTEM: ONLINE."

As soon as I saw it, it reminded me of several games and characters I had played before. I started breaking down those elements and recombining them to build my overall concept. With only 5 seconds to tell a story, I thought it was important to leave a strong impression in that short time. I also felt that storytelling was the most effective way to make that happen.

Because the theme was Chasm’s Call, I wanted to go beyond a simple scene of a character standing on a shelf looking into a chasm. My goal was to make the visual flow between those two spaces form its own story structure. After thinking about how to express that idea, I developed the concept of The Transfer.

The story and environment ideas were inspired by Portal 2’s narrative and the key art of Lucy from Reverse: 1999. For the main character reference, I took inspiration from various anime-style game characters, including Key from Blue Archive, Sissela Kyle from Eternal Return, Cristallo from Reverse: 1999, and Angela from Lobotomy Corporation. I was influenced not only by their visual styles but also by elements of their stories. I used those ideas to list out key props and made a checklist that guided me throughout the production process.

The initial composition was based on the Blender template provided at the start of the challenge. Using Blender’s default geometry makes it easy to quickly block out forms and experiment with different layouts. Since the main theme of this challenge was "Chasm," I started by centering the layout around a massive cylinder structure. I wanted the facility to feel vast in scale, so I set the scene size quite large from the beginning.

There were two main things I focused on while building the composition. First, I wanted to maintain a circular silhouette that followed the curve of the template’s shelf. Second, I designed everything so that each element would naturally guide the viewer’s eye toward the final focal point of the story. In the early blockout phase, I built the level layout as a giant cylindrical structure to match the chasm concept and used torus geometry to help shape the overall composition so that the circular silhouette would be clearly visible from the camera angle.

I set the scale of the whole scene based on the size of the robot. I started by placing the robot within the frame and adjusting its position to fit the ideal screen space. After that, I fine-tuned both its position and scale. Once the Blender scene was ready, I imported everything into Unreal Engine and began the main actor placement process directly inside Unreal Engine.

Assets

I didn’t have enough time to create every asset from scratch, so I had to rely heavily on existing assets and make the most of them. KitBash3D is great for that. It offers a wide range of themed kits that let you visualize ideas quickly and efficiently. But because of that, it’s likely that other participants in the challenge were using the same kits too. If I just used the assets as they were, my work risked looking similar to everyone else’s. That’s why I needed to customize the assets to fit my concept.

KitBash3D’s Cargo plugin makes it easy to import kits into different programs. I started by browsing sci-fi kits through Cargo. For this challenge, I used assets from the Future Warfare, Sci-Fi District, Cyberpunk Interiors, Emergency Response, Secret Lab, SpaceShip, and Heavy Metal kits.

My main selection criteria were simple. I needed large steel structures that could convey a sense of scale, and I needed plenty of cables and supporting structures to fit the “Transfer” theme.

Once I wrapped up my concept on the first day, I imported the kits into Unreal Engine and started placing them in a blank level to check the geometry. Most of the structures were made up of multiple parts, so by breaking them down and reassembling the pieces, I could create entirely new forms like building with LEGO.

Since my scene was designed on a massive scale, each building could serve as a single part of the overall facility. That’s why, in the early stages, I spent most of my time searching for the right parts.

The trickiest asset to modify was the structure that supports the robot’s hand. I used the airship model from the Heavy Metal kit for that. I chose it because I needed something sturdy enough to hold up a giant robot, but I also wanted a curved silhouette to keep the visual style consistent. The top platform of the airship looked perfect for that.

However, the top part of the model was a single solid piece, so I had to make manual edits. I deleted all the lower faces, and since the underside might be visible depending on the camera angle, I modeled a new bottom section from scratch. To create the curved shape, I subdivided it along a specific axis, applied a mirror modifier for symmetry, and then combined a curve modifier with a simple deform to bend it into a circular form.

Most of the structural assets were customized in Blender using a combination of modifiers like this. I could have used Geometry Nodes for more advanced edits, but for this project, I didn’t need that level of complexity, so I stuck with Blender’s standard modifiers.
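For anyone who prefers scripting that kind of setup, here is a minimal bpy sketch of the mirror-plus-bend combination. The object name and parameter values are hypothetical; in practice I tuned everything interactively in the viewport.

```python
import math
import bpy

# Hypothetical mesh: the rebuilt bottom section of the airship platform
obj = bpy.data.objects["PlatformBottom"]

# Mirror across X so only half the section needs modeling
mirror = obj.modifiers.new(name="Mirror", type='MIRROR')
mirror.use_axis = (True, False, False)

# Bend the straight strip into a circular arc around Z;
# the mesh should be subdivided along its length first so it can deform
bend = obj.modifiers.new(name="CircularBend", type='SIMPLE_DEFORM')
bend.deform_method = 'BEND'
bend.deform_axis = 'Z'
bend.angle = math.radians(360.0)  # full circle; use less for a partial arc
```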

Character

Let me start by explaining the shared pipeline I used for character creation in this challenge. VRoid is great for quickly generating a base character model, but it does not support geometry modifications for clothing or custom details. To achieve the visual style I wanted, I could not import the VRoid model directly into Unreal Engine. Instead, I first brought it into Blender for further processing. Since VRoid uses the VRM format, I installed the VRM Addon For Blender to import the model into Blender.

After that, I modified the geometry and created keyframe animations in Blender, then exported everything as an FBX file. I imported this FBX into Marvelous Designer. I’d never used Marvelous Designer before. In fact, this challenge was my first time working with it, and I needed to learn the tool quickly within a limited schedule. Luckily, Marvelous Designer provides an official YouTube playlist for reference. I followed that playlist and focused on learning the functions I actually needed.

For this challenge, I created three different cloth simulations using Marvelous Designer: one for the Ghost standing on the right side of the shelf, one for the Skeleton sitting on the chair, and one for the Girl character. Since this was my first time using Marvelous Designer for clothing simulations, I started with the simplest setup first.

The first simulation I worked on was the Ghost’s outfit. I imagined him wearing something that resembled a facility manager’s uniform, so I decided to go with a lab coat. Instead of designing the garment from scratch, I wanted to test the workflow using a premade asset. I purchased a lab coat garment from CLO-SET Connect, fitted it to my custom character, and ran the simulation. Once completed, I exported the result as an Alembic (.abc) file, then imported both the character’s FBX and the simulated cloth into Unreal Engine and set everything up in the scene.

Once the Ghost was properly imported, I moved on to the Skeleton. I initially tried skinning the clothing directly to the skeletal mesh and placing it on the chair, but it looked unnatural, almost like it had skin underneath. To better match the bony silhouette, I ran a cloth simulation in Marvelous Designer. Since the Skeleton is a static character, I exported the result as an FBX instead of Alembic.

Finally, I worked on the Girl’s outfit, which turned out to be the most challenging. The biggest issue was persistent layer conflicts between the underwear and the outer garment. To solve this, I deleted the inner garment patterns that would be hidden underneath the outer clothing and not visible in the final render. I also used the Tack on Avatar feature to make sure the clothing stayed properly attached to the character during simulation, ensuring a clean result.

For the robot, I used a different approach. I started with a base model from VRoid as well, but since I wanted a mechanical structure for the body, I deleted everything except the head. However, I had limited time to work on it and needed a faster method. I realized that Unreal Engine’s mannequin already had a mechanical look, and the UE4 version in particular had separated joints, which made it easier to modify. I imported the UE4 mannequin into Blender, broke it down into individual parts, and reassembled it into a new design. The abdominal section was inspired by the work of Korean sci-fi character concept artist Park Jinkwang, and I exaggerated the mouth into a jagged, gear-like shape to make the expression clearly readable even from a distance. This stylistic choice was inspired by Liam Vickers' character designs.

I left the inner mechanical details exposed without covering them with a skin to emphasize the energy flow and highlight the mechanical feel. I also used Blender’s Curve tool extensively during this process to create the cables.

Unwrapping & UVs

When creating assets for a challenge, quality is important, but the most critical thing is finishing within the deadline. It is always helpful to have premade assets ready, but in most cases, we need to create new assets based on the ideas that come to mind after the challenge theme is announced. That is why having an efficient workflow is essential.

For example, when I created the robot character, I started by breaking down the Unreal Engine mannequin into separate parts and reworking them. Since the geometry changed, I had to re-unwrap the UVs from scratch. I also needed to clean up the topology, and for that, I used QuadRemesher by Exoside. This tool automatically retopologizes high-poly meshes into clean quad-based geometry, which saves a lot of time. Quad-based topology also makes it easier to mark UV seams quickly based on loops.

Substance 3D Painter also supports an Auto-Unwrap feature, so another method I used was importing the mesh directly into Substance 3D Painter, applying Auto-Unwrap there, and then exporting the mesh back into Blender to continue with further work. However, I mainly used Auto-Unwrap when I was pressed for time, such as during the final week of a challenge. If possible, manually controlling the seams gives much better results and makes texturing easier to manage. This is especially true for characters. Applying Auto-Unwrap across the whole model can easily cause problems in areas like the face, so it’s generally a good idea to unwrap those parts by hand.
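When the deadline is close, a scripted unwrap can also serve as a quick fallback, similar in spirit to Substance 3D Painter's Auto-Unwrap. A minimal Blender sketch under that assumption; the mesh name is invented and the angle limit is just a common starting value:

```python
import math
import bpy

obj = bpy.data.objects["RobotPart"]  # hypothetical mesh
bpy.context.view_layer.objects.active = obj

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')

# Angle-based automatic unwrap; acceptable for hard-surface parts,
# but faces and other organic areas are better unwrapped by hand
bpy.ops.uv.smart_project(angle_limit=math.radians(66.0), island_margin=0.02)

bpy.ops.object.mode_set(mode='OBJECT')
```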

Texturing

Let me start by talking about the environment assets. As I mentioned earlier, all of the background assets were created using kits from KitBash3D. These kits come with proper UV layouts and trim sheets, and when you use the KitBash3D Cargo plugin, they also include materials that are ready to use out of the box. But while I was modifying and rescaling the assets to fit the composition, I ran into some unexpected texture stretching in a few areas.

To address that, I tweaked the materials provided by the KitBash3D Cargo plugin so I could choose between using World Aligned sampling or the original texture coordinates, as shown in this screenshot.
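World Aligned sampling (often called triplanar mapping) hides stretching on rescaled meshes by projecting the texture along the world axes and blending by the surface normal. A rough Python sketch of the math behind Unreal's WorldAlignedTexture material function, assuming a generic sample(texture, u, v) lookup:

```python
def world_aligned_sample(texture, world_pos, world_normal, tile_size, sample):
    """Triplanar sample: project along X, Y, Z and blend by the normal."""
    px, py, pz = (c / tile_size for c in world_pos)

    # One planar projection per world axis
    from_x = sample(texture, py, pz)
    from_y = sample(texture, px, pz)
    from_z = sample(texture, px, py)

    # Blend weights from the absolute normal, normalized to sum to 1
    wx, wy, wz = (abs(n) for n in world_normal)
    total = wx + wy + wz
    wx, wy, wz = wx / total, wy / total, wz / total

    return from_x * wx + from_y * wy + from_z * wz
```

In the tweaked material this logic sits behind a switch parameter, so meshes that were not rescaled can keep using their original UVs.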

Next, the new 3D character assets I created for this challenge were all textured in Substance 3D Painter. The challenge schedule was tight, so it wasn’t realistic to create every material from scratch. Instead, I based the core PBR materials on assets from Adobe’s Substance 3D Assets platform and KitBash3D PBR material textures. After importing the meshes into Substance 3D Painter and baking the textures, I used the generated AO maps and curvature maps to drive the masking.

When I texture in Substance 3D Painter, I usually work with a fill layer workflow. I add a black mask to each layer and apply different generators to control the masking. For this project, since there were a lot of metal objects, I made heavy use of the metal edge wear generator to quickly create that worn metal look. I also used the light generator, which lets you create a mask that simulates how light reflects off a surface from a specific direction.

You can see this in the screenshot – it was used to create the shadow that falls across one side of the robot’s face, giving it a stylized, anime-inspired look. Since the robot was positioned very far from the camera, it was easier to paint the shadow directly into the texture rather than setting up custom shaders or lights in the scene. It was simply a much faster way to achieve the result I wanted.

Out of all the materials I created for this challenge, the most complex one was the screen material. Here’s a GIF of that screen in action. I’ll briefly explain some key points. For the TV noise effect, I downloaded a free TV glitch stock footage clip from the internet. One way to use media files like that in Unreal Engine is to convert the video into Bink format, import it into Unreal, and play it back from there. There’s a good official Unreal Engine guide on how to work with Bink files if you’re curious.

For the screen’s different phases, I created the materials in Substance 3D Designer. I packed all the necessary textures into the RGB channels to keep the sampler count as low as possible and make it easier to manage everything in one place. Each screen phase was blended using Lerp nodes. Since all the phases shared the same base textures, I decided it was more efficient to manage them within a single material rather than splitting them up into multiple materials. I controlled the phase transitions by adding keyframes to the material parameters in the sequencer.
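Channel packing like this can be done in Substance 3D Designer or with a few lines of Python. A hypothetical Pillow sketch, with file names invented for illustration:

```python
from PIL import Image

# Three grayscale masks, one per screen phase (hypothetical file names;
# all three must share the same resolution)
r = Image.open("phase_noise.png").convert("L")
g = Image.open("phase_ui.png").convert("L")
b = Image.open("phase_warning.png").convert("L")

# Pack them into a single RGB texture: one sampler instead of three
packed = Image.merge("RGB", (r, g, b))
packed.save("T_ScreenPhases_Packed.png")
```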

Finally, I applied a CRT effect to each screen. For that, I followed this tutorial I found on YouTube.

Animation

First, I want to point out something important when working with VRoid. If you use the default VRoid skeleton as-is, you will notice a problem where the wrist skin twists or collapses unnaturally when the hand rotates. This happens because there is no twist bone in the lower arm. To achieve smooth and natural deformation, especially in engines like Unreal, you need to add a twist bone. I subdivided the lower arm in the girl’s skeletal armature to spread out the vertex weights, and then I manually created the keyframe animation in Blender.
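A minimal sketch of that fix through Blender's Python API; the bone name follows VRoid's naming convention but should be verified against your own rig, and the new bone's vertex weights still need manual adjustment afterwards:

```python
import bpy

arm = bpy.data.objects["GirlArmature"]  # hypothetical armature name
bpy.context.view_layer.objects.active = arm
bpy.ops.object.mode_set(mode='EDIT')

# Select only the lower arm bone and split it in two;
# the new middle joint acts as a twist bone for the wrist
for bone in arm.data.edit_bones:
    bone.select = bone.name == "J_Bip_L_LowerArm"
bpy.ops.armature.subdivide(number_cuts=1)

bpy.ops.object.mode_set(mode='OBJECT')
```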

For the character animation in Blender, I referenced a lot of tutorials from the P2Design YouTube channel.

Since I am not a professional animator, creating a single animation took quite a bit of time. To minimize trial and error, I planned out in advance which bones would use FK and which would use IK before starting the animation work.

As a side note, before this challenge started, there was a weekly challenge in the Create with Clint Discord based on the Shock theme. I worked on an animation of a character getting electrocuted for that challenge. That experience helped me apply a similar approach this time and avoid mistakes along the way. Challenge periods are short, and there is often not much time to experiment. In situations like that, being able to draw from past experience really helps to work more efficiently. This is why I believe trying out different types of projects regularly is always valuable.

Animating the robot and the ghost was much easier. Their motions were far more static than the girl’s, and in the case of the robot, I did not need to smoothly blend vertex weights across joints. Instead, I could assign weight values of 0 or 1 to each bone, which made the skinning process much simpler. The only part I set up with Automatic Weights was the robot’s internal cables. Even then, because the joints did not move much, I was able to finish everything cleanly without needing precise adjustments.

For secondary motion on the hair of each character, I used the Kawaii Physics plugin directly inside Unreal Engine. Lastly, I used a combination of two methods to control the character's facial expressions. First, I adjusted eye blinks and facial expressions using morph targets driven by curves, and then controlled the eye direction through the Unreal Engine Control Rig setup.

I handled the cable animation in two different ways. First, the cable connected behind the robot, which is seen from a distance, was animated using Unreal’s Cable Component for real-time simulation. In contrast, the power cable connected to the shelf was animated without simulation. I used a vertex world offset technique based on a SinWave inside the material to create a sway effect. I controlled the strength of that sway with keyframes in the sequencer to make sure it moved at the right timing. This approach worked because the shelf cable only needed to sway back and forth, which made the setup much simpler.
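The material logic behind that non-simulated sway is simple enough to sketch in a few lines. Conceptually, per vertex (the names are illustrative, not the actual material parameters):

```python
import math

def cable_sway_offset(time, sway_mask, amplitude, frequency):
    """World position offset for one vertex of the cable.

    sway_mask: 0 at the anchored ends, up to 1 at the free middle
               (painted as vertex color or derived from UVs).
    amplitude: world-space sway distance; this is the value keyframed
               in the sequencer to time the motion.
    """
    return amplitude * sway_mask * math.sin(time * frequency * 2.0 * math.pi)
```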

VFX

All of the VFX for this challenge were created using Unreal Engine’s Niagara system. As a real-time game VFX artist, I approached this project the same way I would for real-time game VFX production.

Every smoke texture I used in this challenge was created with EmberGen. EmberGen offers a wide range of presets, and all the textures for this challenge were based on the game_explosion_1 preset. In the Kinetic Rush challenge, I exported the simulations as VDB files and rendered volumetric smoke using Heterogeneous Volumes. This time, I exported the simulations as flipbook sequence textures and spawned them as individual flipbooks. Since the smoke elements in this challenge were positioned far from the camera, spawning them as billboards maintained visual consistency without breaking immersion, while also achieving much more efficient rendering performance.
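EmberGen can export flipbook sheets directly, but assembling one from individual frames is also straightforward. A hypothetical Pillow sketch for an 8x8 sheet, with invented file names:

```python
from PIL import Image

COLS, ROWS = 8, 8  # 64 frames laid out left-to-right, top-to-bottom
frames = [Image.open(f"smoke_{i:04d}.png") for i in range(COLS * ROWS)]

w, h = frames[0].size
sheet = Image.new("RGBA", (w * COLS, h * ROWS))
for i, frame in enumerate(frames):
    sheet.paste(frame, ((i % COLS) * w, (i // COLS) * h))
sheet.save("T_Smoke_Flipbook_8x8.png")
```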

All of the textures, aside from the smoke, were created in Substance 3D Designer. Most textures used for VFX are mask-based, so they are typically grayscale. I followed the same approach, building most of my textures in grayscale. I used a procedural workflow, which made it easy to tweak and generate variations quickly. This method was especially useful for producing seamless textures in bulk, which is perfect for creating various noise textures specifically for VFX.

For the VFX materials in this challenge, I focused on keeping them as lightweight as possible, using a minimal number of materials and emitters. Since my submission had a sci-fi theme, the VFX leaned more toward physical effects like sparks and smoke rather than magical visuals. For this reason, it was much more important to create situational VFX that matched the environment rather than rendering flashy or overly complex effects. I built several small-scale Niagara systems and placed them strategically throughout the scene. Most of these systems, as shown in the screenshot, used only the bare minimum number of emitters.

The light shaft elements used for the impact effect when the energy reaches the robot were not rendered with actual volumetric shafts or god rays. Instead, as shown in the screenshot, they were created by strategically placing multiple billboards. Since the robot is positioned very far from the camera, it was possible to achieve a convincing visual even without physically accurate rendering. The Chasm wireframe FX element that appears at the robot’s core was created by modeling a black hole-shaped mesh and applying a grid texture with UV panning.

Each Niagara system was built with a combination of smoke and spark particles. For the lightning VFX used in the Ring Structure, I combined Niagara’s Ribbon Render and Beam Emitter modules. By using the Jitter Position and Curl Noise modules, I was able to create the lightning visuals with a simple setup.
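The idea behind Jitter Position is easy to prototype outside Niagara: offset evenly spaced points along the beam by random amounts while keeping the endpoints fixed. A rough sketch:

```python
import random

def jitter_beam(start, end, segments, jitter):
    """Points along a start-to-end beam with random lateral offsets."""
    points = []
    for i in range(segments + 1):
        t = i / segments
        point = [s + (e - s) * t for s, e in zip(start, end)]
        if 0 < i < segments:  # keep both endpoints anchored
            point = [c + random.uniform(-jitter, jitter) for c in point]
        points.append(point)
    return points

# e.g., a 10-segment bolt between two contact points on the ring
bolt = jitter_beam((0, 0, 0), (0, 0, 500), 10, jitter=25.0)
```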

The energy flow for the Ring Structure and Energy Tube was implemented using texture UV panning techniques inside the material. I created a Curve Atlas asset in Unreal Engine and used the Curve Atlas Row Parameter node in the material to control the masks and color maps easily.
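UV panning itself is just a time-based offset wrapped back into the 0-1 range, equivalent to Unreal's Panner node. A minimal sketch; in the actual material, the panned mask is then tinted by a gradient sampled from the Curve Atlas row:

```python
def panned_uv(u, v, time, speed_u, speed_v):
    """Scroll a (u, v) coordinate over time; the modulo keeps it tileable."""
    return (u + time * speed_u) % 1.0, (v + time * speed_v) % 1.0
```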

The energy pattern texture was inspired by the title of this challenge, Chasm’s Call. I designed a seamless texture based on the shape of the letter C. I thought this would be a good way to incorporate a geometric, sci-fi visual while also giving the texture a distinctive feel.

This energy pattern texture was used not only for the energy tube but also for the transfer energy VFX for the heroine character.

I used the Niagara Ribbon Renderer here. I wanted to control the trail shape of the Ribbon Renderer to match my intended design. To achieve this, I passed a spline through a User Parameter and used the Sample Spline Position by Unit Distance node inside a Scratch Pad Module to sample the spline’s position. The particles moving from the character toward the machine were spawned at the character’s position using the Skeletal Mesh Location module and were driven by a Point Attraction Force module in the Particle Update stage. Additionally, I added refraction to the smoke material to create a distortion effect around the character.
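A Point Attraction Force, in essence, accelerates each particle toward a target point every update. A simplified per-particle sketch of that behavior (the strength and drag values are illustrative, not the module's actual defaults):

```python
def point_attraction_step(position, velocity, target, strength, dt, drag=0.98):
    """One particle-update tick pulling a particle toward a target point."""
    to_target = [t - p for t, p in zip(target, position)]
    dist = max(sum(c * c for c in to_target) ** 0.5, 1e-4)

    # Acceleration toward the target, scaled by attraction strength
    accel = [c / dist * strength for c in to_target]

    velocity = [(v + a * dt) * drag for v, a in zip(velocity, accel)]
    position = [p + v * dt for p, v in zip(position, velocity)]
    return position, velocity
```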

Final Scene

I always work on a laptop, whether it’s for personal projects or challenges, so offline rendering was not an option. For this project as well, I did not use path tracing and instead rendered everything using the Movie Render Queue Deferred Rendering setup.

Thanks to Lumen, I was able to achieve a decent level of global illumination, but the most important thing was to set up the lighting exactly as I envisioned it. To enhance the scene’s GI, I placed multiple point lights and spotlights in strategic locations. Fortunately, with the new MegaLights feature introduced in Unreal Engine 5.5, I was able to use a large number of point lights while still keeping performance stable. However, since there’s a limit to how many important lights can affect a single pixel, optimization steps like narrowing the light attenuation range are still necessary.
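Tightening attenuation on dozens of lights by hand is tedious; Unreal's editor Python scripting can batch it. A hedged sketch under that assumption, with an arbitrary radius value; verify the property names against your engine version:

```python
import unreal

actor_subsystem = unreal.get_editor_subsystem(unreal.EditorActorSubsystem)

for actor in actor_subsystem.get_all_level_actors():
    if isinstance(actor, unreal.PointLight):
        light = actor.point_light_component
        # Narrow the attenuation range so fewer lights touch each pixel
        light.set_editor_property("attenuation_radius", 800.0)
```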

I wanted the lighting to transition naturally from cool to warm tones as the story progressed. To achieve that, I added keyframes to the color values of the light actors in the sequencer, as shown in the video. The early part of the scene was inspired by the cool, blue lighting of a hospital at night, while the later part took inspiration from the heat and glow of a steel mill.

For the lights placed with high intensity to highlight the characters, I wanted them to look like practical light sources that would actually exist in the facility, rather than just artificial stage lights. To achieve that, I used Unreal Engine’s Light Function Material to simulate light filtering through a ventilation fan.

Challenges

The project timeline was one month, which was the duration of the challenge. However, I was working full-time and also preparing for the public alpha test of our studio's game, so like most people, I could only fully focus on the challenge during weekends. My biggest challenge was managing my time and deciding where to focus my energy. I had to carefully consider what was worth investing time in, how to handle asset processing, and how to set clear priorities.

I had already experienced similar struggles during the previous Kinetic Rush challenge, so this time I was able to work more efficiently. Ideally, I would have liked to add detail to every part of the scene, but since time was limited, the most important factor in deciding priorities was how much each element contributed to the storytelling.

The part I enjoyed the most was adding VFX. This step always feels the most rewarding because it’s when the render finally starts to feel complete. Since VFX is my main focus as an artist, I was confident I could work quickly, and I dedicated the final weekend, both Saturday and Sunday, to completing all the VFX.

One of the biggest things I learned from this challenge was related to the theory behind environment placement. Level design is not something I get to do often outside of challenges, so I really enjoyed having the chance to test out ideas freely. Before the challenge, I studied a lot of environment design content from Max Hay’s YouTube channel. Since the main focus of this challenge was on expressing depth, I kept the lessons from his tutorials in mind while adjusting the composition and layout.

Conclusion

I was completely surprised to have my submission selected as one of the Top 5. The main reason I joined the Chasm’s Call challenge was to apply the lessons I learned from the previous challenge, Kinetic Rush. So I’m really happy that following through on those lessons turned out to be the right direction.

That said, I think luck also played a big part in this result. It made me wonder whether I could make it into the Top 100 again in the next challenge, and how much more I could improve from here. Of course, even if I don’t make it into the Top 100 next time, I think that’s totally fine. What really matters is not the outcome, but the process of enjoying, learning something new, and bringing my own story ideas to life.

Pwnisher’s 3D community challenge is held twice a year. If everything goes as expected, the next one will likely begin around August. Because the template is only revealed at the start of the challenge, it's difficult to prepare assets that perfectly match the theme in advance. So instead, I plan to focus on improving the technical areas where I felt I was lacking.

In this challenge, I spent a lot of time on animation, so for the next one, I am planning to study a more efficient animation pipeline. Right now, I’m very interested in keyframe animation tools like Cascadeur or AccuPOSE, and I am hoping to incorporate them into my workflow for the next challenge. Until then, I am planning to work on small personal sketches. I really hope to see everyone again in the next challenge.

You can find me on ArtStation, X/Twitter, Instagram, LinkedIn, and Bluesky.

ByeongJin Kim, VFX Artist

Interview conducted by Amber Rutherford
