Mongil: Star Dive, Netmarble's Upcoming Monster-Taming Action RPG
Ken Kim, CEO of Netmarble Monster, discussed Mongil: Star Dive, covering the revival of the Monster Taming IP, how the game balances storytelling, character-driven elements, and gameplay systems, and the team's approach to tailoring the experience across different platforms.
Mongil: Star Dive revives the world of Monster Taming for a global audience. What were the team's main goals when modernizing the franchise for a new generation of players?
Our main goal was to carry forward the original game's charm while reintroducing Monster Taming to a global audience in a way that feels current, approachable, and visually distinctive. First, we wanted the game to be easy to pick up and enjoy in shorter sessions, so players can jump into the adventure without feeling locked into heavy time commitments. At the same time, we strengthened story immersion through cinematic cutscenes and full voice acting, so the experience still feels rich and emotionally engaging.
We also reexamined core features from the original game and rebuilt them with a more modern design philosophy. A good example is monster fusion, which was one of the original title's signature systems. In Mongil: Star Dive, that legacy evolves into the "Monsterling Combine" system, keeping the excitement of progression and discovery while making the experience feel more refined and intuitive. We took a similar approach to combat by making direct character control feel faster, more dynamic, and more satisfying for today's players.
Visually, we aimed to create a style that feels both nostalgic and fresh. Rather than carrying over either the original game's simpler chibi-style presentation or the more realistic proportions seen in earlier iterations, we reimagined the cast with high-quality stylized rendering supported by PBR technology. That gave us a stronger sense of material detail while preserving clean silhouettes and expressive action. Characters like Mina and Verna still carry the identity longtime fans remember, but they have been fully reworked to deliver a more polished and contemporary look for players discovering the franchise for the first time.
The game blends character-driven storytelling with monster collection and fast-paced action combat. How did the team approach balancing narrative, character personality, and gameplay systems to create a cohesive experience?
Our approach was to make the story, character, and gameplay feel like they are all pulling in the same direction. Narrative immersion has always been a top priority for the team, so we designed each playable character to serve a clear role in the story and help drive each episode forward. We did not want characters to feel like they exist only as units for combat. Instead, as players progress through the story, they gradually learn each character’s personality, emotional arc, and backstory.
That progression helps build a stronger sense of attachment, so players want to keep going on the journey with them, both narratively and in gameplay. At the same time, we knew the main story alone would not be enough to fully explore every character. To add more depth, we built multiple layers of storytelling across the game, including side quests, NPC interactions, special events, and lore embedded in equipment.
These elements let us expand both the characters and the world in ways that feel organic to the overall experience. In the end, our goal was to create a cohesive structure where story gives context to character collection, character attachment strengthens player motivation, and gameplay becomes a natural extension of the narrative rather than something separate from it.
Mongil: Star Dive is built with Unreal Engine 5. What were the biggest advantages UE5 provided when building the game’s visual style and world?
We actually did not start the project in Unreal Engine 5. Development began in Unreal Engine 4, and we upgraded step by step over time. Because of that, our experience with UE5 was less about adopting its features from day one and more about evaluating how the engine had evolved to support technical challenges we had already been solving ourselves.
One of Unreal Engine's biggest advantages for us has been direct access to the engine source code. That flexibility was especially important on the visual side. We implemented a custom shading model so we could control the lighting pipeline at a deeper level and achieve the exact look we wanted. Our goal was not a fully flat cartoon style, but a stylized look with cel-shaded readability and more realistic material response.
The default shading models were not enough for that, so being able to customize rendering and lighting at the engine level was a major advantage. We also benefited a lot from tools like the Material Graph, Animation Blueprint, and the broader Blueprint framework. They gave programmers a clear way to organize complex logic visually, while also letting artists experiment more directly without relying on code for every iteration. That balance gave the team a lot of creative and technical flexibility.
The same was true for world-building. Since the project started before World Partition became part of our pipeline, we developed our own system for connecting landscapes built as separate level files. In addition to seamless streaming, our approach automatically blends terrain height and surface textures across adjacent environments with different themes. That helped us maintain natural transitions between regions, even when teams were working on individual levels.
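The height-blending idea described above can be sketched in a few lines. This is an illustrative, simplified model, not Netmarble's actual system, and all names are hypothetical: two separately authored terrain tiles meet at a seam, and heights within a blend margin are eased toward a shared edge profile so the border matches without a visible step.

```python
# Hypothetical sketch of seam blending between two separately authored
# terrain tiles: near the shared edge, each tile's heights are eased toward
# the average of the two edge profiles so the border lines up exactly.

def blend_seam(left_tile, right_tile, margin):
    """Blend two height grids (lists of columns) that meet at a vertical seam.

    left_tile[-1] and right_tile[0] are the columns that touch. Columns
    within `margin` of the seam are interpolated toward the shared target
    profile, removing any step between the tiles.
    """
    # Shared profile along the seam, computed once from the original edges.
    target = [(l + r) / 2.0 for l, r in zip(left_tile[-1], right_tile[0])]

    def ease(column, weight):
        # weight = 0 keeps the original height; weight = 1 snaps to the seam.
        return [h * (1 - weight) + t * weight for h, t in zip(column, target)]

    for i in range(margin):
        w = (margin - i) / margin  # 1.0 at the seam, falling off inward
        left_tile[-1 - i] = ease(left_tile[-1 - i], w)
        right_tile[i] = ease(right_tile[i], w)
    return left_tile, right_tile
```

A production version would also blend surface texture weights the same way, as the interview notes, but the falloff-toward-a-shared-edge structure is the core of the technique.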
Finally, Unreal Engine has been a strong foundation for multi-platform development. PC, console, and mobile all come with different rendering requirements and performance constraints, and Unreal Engine provides a mature toolset for optimizing across those platforms. Heavy customization does create extra work when upgrading engine versions or validating custom shaders on each platform, but even with that overhead, Unreal Engine has remained a very strong fit for the kind of game we are building.
The game features an anime-inspired art direction that combines cel-shaded character faces with more realistic materials and lighting. How did the team technically achieve this balance between stylized characters and high-quality rendering?
This was one of the areas we spent the most time refining, because cel shading and realistic rendering are fundamentally driven by very different visual goals. If you simply layer them together, the result often feels visually inconsistent, and neither side looks fully convincing. From a technical standpoint, we customized the lighting model to break the point-light response into more controlled stages, which helped us achieve the clearer tonal separation associated with cel shading.
For areas like the face and skin, where standard shading alone was not enough, we also used Signed Distance Field, or SDF, techniques to create more stable and art-directable results. We took a similar approach with rim lighting and specular highlights, customizing them instead of relying entirely on the default implementations so they would better support the style we wanted. At the same time, for materials like clothing, we wanted to preserve the strengths of physically based rendering because it added richness and material credibility to the overall image.
The challenge was making those PBR elements sit naturally alongside more stylized character features without feeling disconnected. Solving that balance required a lot of iteration, not just technically, but artistically as well. In the end, it was a close collaboration between the art and engineering teams that allowed us to land on a look that feels both stylized and high quality.
Combat revolves around three-character parties that players swap between in real time. From a systems design perspective, how did you build the combat framework to support fluid character switching while keeping the action readable and responsive?
Rather than building an entirely new combat framework from scratch, we focused on making the most effective use of the systems Unreal already provides. One of the key decisions was to structure Primary Assets in a very granular way based on each character’s combat role and state. Even for the same character, the required resources are different depending on whether that character is idle in the party, appearing as a helper, or being directly controlled by the player.
Instead of loading everything as a single package, we designed the system so that only the minimum asset bundles needed for each state are loaded at the right time. We took the same approach with resources such as animation montages, sound, and visual effects. Rather than loading them all up front, we rely on asynchronous loading so that each element is brought in precisely when it is needed.
This helps character switching feel immediate and responsive in combat, while also keeping memory usage under control by ensuring that only the necessary resources are active at any given moment. Combat readability itself is influenced heavily by art direction and visual effects, but the underlying system still plays a critical role.
If character swaps introduce even small loading delays, the timing and presentation of the action can start to feel off. So for us, responsiveness, readability, and visual polish were all closely connected, and the combat framework had to support all three at once.
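The state-scoped loading described above can be sketched as follows. This is a hedged illustration with hypothetical bundle names, not the actual asset manifest: each combat state maps to the minimum set of bundles it needs, and a state change loads only the difference rather than the character's full package.

```python
# Hypothetical sketch of state-scoped asset bundles: each combat state maps
# to the minimum bundle set it needs, and a state change computes only the
# delta to load and free, instead of loading the character as one package.

CHARACTER_BUNDLES = {
    "idle_in_party":     {"base_mesh", "portrait"},
    "helper":            {"base_mesh", "assist_anim", "assist_vfx"},
    "player_controlled": {"base_mesh", "full_anim_set",
                          "skill_vfx", "skill_audio"},
}

class BundleManager:
    def __init__(self):
        self.loaded = set()

    def set_state(self, state):
        """Return (bundles to load, bundles to free) for the new state."""
        wanted = CHARACTER_BUNDLES[state]
        to_load = wanted - self.loaded  # fetched asynchronously in practice
        to_free = self.loaded - wanted  # memory no longer needed
        self.loaded = set(wanted)
        return sorted(to_load), sorted(to_free)
```

In the real pipeline the "load" half would be an asynchronous request, as the interview describes, so the swap is never blocked on everything arriving at once.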
One of the game’s most interesting mechanics is the Monsterling system, where captured monsters become equipment that modifies abilities and combat effects. How did this system evolve during development, and what technical challenges came with implementing it?
The Monsterling system was built as a way to carry forward one of the original Monster Taming franchise's defining ideas while adapting it to the direction of Mongil: Star Dive. In the original game, players collected monsters, fused them into higher-tier forms, and brought them directly into battle. For Star Dive, however, we wanted the core gameplay and emotional focus to stay centered on the playable characters. That led us to reinterpret monsters as "Monsterlings," a distinct system that characters equip rather than a separate playable roster.
In the CBT, Monsterlings mainly functioned as gear that supports character growth and combat customization. At the same time, we have been continuing R&D on ways to make them feel more alive and more present in the overall experience. That includes ideas such as having them visually accompany the player or assist more directly in combat, so the bond between the character and the monster feels stronger and more dynamic over time.
From a systems perspective, the implementation itself was relatively smooth. Our equipment, ability, and skill systems were designed from early on with extensibility in mind, so adding a new item type like Monsterlings did not require us to rebuild the core structure. Because the underlying framework was already flexible, the team was able to spend more time refining the design side of the feature, especially questions around progression, fusion, and how much strategic depth Monsterling combinations should offer.
One of the more meaningful technical and visual challenges came from how Monsterlings are presented in the game. Since they are attached to characters in an equipment-like form, we wanted them to feel like more than static attachments. Earlier in development, we had been conservative about using Unreal Engine's physics systems because multi-platform support, especially performance on lower-end mobile devices, was a major priority, and the project originally began during the Unreal Engine 4 era, when there were more practical limitations in that area.
As the feature evolved, we saw an opportunity to use physics more actively so Monsterlings could have a stronger sense of motion and presence. With the transition to Chaos and broader mobile support, we were able to push that further, which helped the equipment and Monsterling visuals feel much more natural and responsive.
The team mentioned developing an in-house trigger tool to manage combat mechanics, AI behaviors, and event logic. Can you explain how this tool works and how it helped designers iterate more quickly on combat scenarios?
We know this approach is not necessarily the most trend-driven on paper. Structurally, it is a fairly classic trigger-based system, but it fits the way our team wanted to build content. In Unreal Engine, systems like monster AI, combat logic, level gimmicks, and puzzle interactions often end up being handled through different tools and workflows. We wanted to unify that into a single framework.
So we built an in-house trigger system based on modular Event, Condition, and Action units. Designers can combine those modules to create a wide range of gameplay scenarios without needing direct programmer support every time. That was the core value of the tool. Our original goal was to support both authoring and debugging in one fully customized editor experience.
In practice, due to production constraints, the debugging side did not evolve as far as we had hoped. Instead, we took a more pragmatic approach by customizing Unreal Engine's existing interfaces, such as the Outliner, Inspector, and Blueprint Details workflows, to reduce friction as much as possible. We also put a lot of emphasis on designer-friendly usability, so variables inside Event, Condition, and Action entries could be adjusted directly through a simple click-based workflow.
The system became more powerful as the library of modules grew. Once designers had enough building blocks, they could iterate on increasingly complex scenarios on their own, whether that meant making AI react to environmental changes, allowing AI units to affect each other, controlling montage behavior through variables, or linking level events, sequences, and combat actions into a more organic gameplay flow.
Designers could also tune custom variables in real time to control pacing, handle edge cases, or build safety checks without waiting for engineering support. Because new functionality could usually be added by introducing new Event, Condition, or Action modules, iteration speed improved over time instead of slowing down. Of course, the system also had limitations. The quality of the outcome still depended a lot on how familiar a designer was with the tool. Experienced designers could solve a wide range of problems by combining existing modules creatively, while less experienced users were more likely to request entirely new functionality.
And as scenarios became more complex, debugging specific variables and fine-tuning behaviors could still become difficult even when the overall trigger flow remained easy to understand. Even so, the tool has proven valuable because it gave designers much more ownership over combat and encounter logic, while also creating a shared workflow that engineering and design could keep refining together over time.
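The modular structure described in this answer can be sketched as a minimal trigger: when the named event fires, every condition is checked, and the actions run only if all of them pass. All names here are hypothetical; this is the shape of the idea, not the in-house tool itself.

```python
# Minimal sketch of a trigger built from modular Event, Condition, and
# Action units: fire() runs the actions only when the event matches and
# every condition passes. Names and wiring are illustrative only.

class Trigger:
    def __init__(self, event, conditions, actions):
        self.event = event            # event name this trigger listens for
        self.conditions = conditions  # callables: context -> bool
        self.actions = actions        # callables: context -> None

    def fire(self, event, context):
        if event != self.event:
            return False
        if not all(check(context) for check in self.conditions):
            return False
        for act in self.actions:
            act(context)
        return True

# Designer-style composition: open a gate once the boss drops to half HP.
gate_trigger = Trigger(
    event="boss_damaged",
    conditions=[lambda ctx: ctx["boss_hp"] <= ctx["boss_max_hp"] // 2],
    actions=[lambda ctx: ctx.update(gate_open=True)],
)
```

Because new capabilities arrive as new Condition or Action modules rather than new trigger plumbing, a library like this compounds in power over time, which matches how the interview describes iteration speed improving.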
Mongil: Star Dive is launching across PC, PlayStation 5, and mobile devices. What were the biggest technical hurdles when building a cross-platform experience at this scale?
Interestingly, the area that demanded the most time in our cross-platform development was UX rather than rendering or raw performance. Touch controls and keyboard-and-mouse input were relatively manageable because there were plenty of references available, and our team already had strong experience with them. The bigger challenge came when console support entered the picture.
One of the main reasons was that the project had been built on Unreal Engine 4 for a long time, and we had already developed our own input abstraction around the project's needs. While Unreal Engine 5 offers strong built-in solutions like Common UI and Enhanced Input, fully replacing a structure that had accumulated over years of development was simply not practical. At the same time, we needed to support controller-based navigation, focus flow, and platform-specific console requirements, which called for a great deal of coordination across game design, UI design, and engineering.
Starting console testing relatively early helped us identify and resolve many of those issues before they became more costly later in production. There were also some technical hurdles we had not fully anticipated. Texture memory behavior, including alignment and format constraints, can differ subtly from platform to platform, and those differences occasionally surfaced in areas where we had customized the engine. Our custom shading model and shader code also required some platform-specific adjustments.
These were not constant large-scale blockers, but each issue still took careful investigation and validation. Beyond those cases, Unreal Engine's cross-platform support was strong enough that many potential issues were more manageable than expected. Hardware memory limits were also less disruptive than they might have been, because we had treated mobile as a core target from early in development. Designing with those constraints in mind from the start ended up making the broader multi-platform process much smoother.
In fact, console development ended up being less demanding than mobile in some respects. Because we chose to maintain a unified asset pipeline rather than splitting art resources separately for PC and mobile, much of our optimization effort naturally concentrated on mobile performance. That work is still ongoing through launch. In a practical sense, the toughest cross-platform challenge for us was not console, but mobile.
Optimizing a visually rich Unreal Engine 5 game for mobile hardware can be particularly challenging. What strategies did the team use to maintain performance and memory efficiency across such different platforms?
As I mentioned earlier, the foundation of our optimization strategy was a commitment to a unified asset pipeline. Rather than creating separate asset sets for PC and mobile, we chose to support all platforms with a single resource base while minimizing any visible loss in quality. That decision helped us maintain visual consistency across platforms, but it also became one of our biggest technical challenges.
On the art side, a great deal of effort went into environmental assets in particular. The goal was to keep polygon counts as low as possible while still delivering visuals that would not feel compromised on PC. Reaching that balance required a lot of iteration from the art team. On the rendering side, we made a very demanding choice by adopting Vulkan-based deferred rendering on mobile. Running a real-time lighting environment with deferred rendering on mobile is extremely challenging from both a thermal and performance perspective, but we felt it was necessary to reduce the visual gap between mobile and the other platforms.
That decision meant we then had to invest heavily in mobile optimization and thermal management. We also took a very aggressive approach to GBuffer usage. Mobile has limited GBuffer space, so we worked hard to pack and reuse data as efficiently as possible, including reallocating unused bits depending on the shading model and, in some cases, using unconventional methods to carry information across rendering passes. Since our character rendering pipeline already required a Custom Depth and Stencil pass, we used that opportunity to store additional data and effectively treat it as an extra buffer stage without adding too much overhead.
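The kind of bit packing described above can be illustrated with a small sketch. The field widths and layout here are hypothetical, chosen only to show the technique: several small values share one 8-bit GBuffer channel, with unused bits reallocated to a quantized intensity.

```python
# Illustrative sketch (field widths are hypothetical) of packing a shading-
# model ID, a material mask, and a quantized intensity into one 8-bit
# GBuffer channel. Layout: [model:2][mask:2][intensity:4].

def pack_channel(shading_model, mask, intensity):
    """Pack 2 + 2 + 4 bits into a single byte.

    shading_model and mask are 0-3; intensity is a float in [0, 1]
    quantized to 4 bits (16 levels).
    """
    q = min(15, int(round(intensity * 15)))
    return (shading_model & 0x3) << 6 | (mask & 0x3) << 4 | q

def unpack_channel(byte):
    """Recover (shading_model, mask, intensity) from a packed byte."""
    return byte >> 6 & 0x3, byte >> 4 & 0x3, (byte & 0xF) / 15.0
```

The intensity survives only at 4-bit precision, which is exactly the kind of tradeoff such packing imposes: it works for data that tolerates quantization, while IDs and masks round-trip exactly.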
For scene depth, the PC version uses Mesh Distance Fields to give distant structures a stronger sense of volume. To approximate a similar look on mobile, we customized Mobile SSAO and pushed it more aggressively. That gave us satisfying results overall, but it also created side effects, especially on foliage, where depth could start to feel exaggerated. To address that, we separated foliage and non-foliage handling into different passes so we could tune them more precisely.
Even with all of these efforts, mobile still comes with unavoidable tradeoffs in areas like foliage density, prop density, shadow quality, and resolution. That is something we will continue improving after launch as well. In many ways, mobile optimization is not a one-time task, but an ongoing process throughout the life of the game.
Looking ahead, the team has mentioned plans for expanding the world with new regions and storylines. From a technical standpoint, how have you structured the game’s systems and pipeline to support ongoing content updates after launch?
It may sound ambitious to say the game was designed specifically for post-launch expansion, but our actual goal was fairly straightforward: build a structure that lets us keep adding content without requiring large amounts of new development every time. For in-game content, the trigger system we mentioned earlier is a key part of that strategy. New regions, gameplay mechanics, boss patterns, and scripted scenarios can often be built by combining modules that already exist within the system.
As that library grows, the cost of creating additional content becomes more manageable over time. We took a similar approach with systems like abilities and equipment, which were designed with extensibility in mind so that new Monsterlings, skill types, or gameplay variations can be introduced within the existing framework rather than requiring major rework. Out-of-game content follows a slightly different pattern. Because it mostly lives in the UI layer, it tends to require more frequent feature updates and adjustments.
To respond quickly with a relatively lean team, we are actively exploring ways to use AI to improve production efficiency. One area we are researching is a pipeline that can generate implementation code based on widget structures or design documentation. It is still an ongoing effort, but we believe it has real potential to accelerate future updates. More broadly, we see AI as something that can support many parts of the pipeline, not just UI-related work.
That could include areas like asset production support, QA workflows, and other repetitive tasks where efficiency gains can have a meaningful impact. Since our team is relatively lean, we are always looking for ways to reduce overhead and improve iteration speed, and AI is one of the areas we see as especially promising going forward.