Kasem Challiou told us about MoonlanderAI, discussed the company's proprietary Generative Reasoning System, and explained how it can create billions of different levels from a mere 600 assets.
Introduction
My name is Kasem Challiou, Founder and CEO of Moonlander. I am an electrical engineer by education, passionate about applied mathematical concepts, and a digital transformation specialist by profession. I worked in industrial robotics, vision AI, and digital twin technology at companies such as Siemens, Stork, and eVision (acquired by Wolters Kluwer).
I met our CTO and CPO a couple of years ago, and we shared the same vision and passion for spatial computing. David, our CTO, is a cross-tech back-end engineer with a strong background in media and arts and an expert in blending technologies. Ruben Vroman, our CPO, has an extensive background in front-end game development and a deep understanding of game aesthetics. The team loves to operate at the intersection of game development and procedural AI.
MoonlanderAI
While I worked at eVision, we led the company (now part of Wolters Kluwer) to become the global market leader in digital software for complex hazardous activities. These hazardous activities were visualized in 2D/3D maps or digital twins built on Unity or Unreal Engine. Surprisingly, these digital twins were not fun; they would have benefited from proper aesthetics, core game mechanics, and dynamics. As a passionate gamer, I concluded that professional game studios could create better digital twins. When I engaged with these studios, I was confronted with production costs in the millions of dollars to build these gamified digital twins. This experience partially inspired the idea for Moonlander.
Gaming is eating the world - that's what we say at Moonlander. The latest trends in spatial computing and AR/VR exemplify this statement. While the gaming industry is in a downturn, one cannot deny the fast-growing 600 million MAUs on the famous UGC platforms. However, these UGC platforms are walled gardens that only allow customization within stringent guidelines. This led to the creation of our "Creator Assistant", which helps creators and developers build virtual worlds like a pro with their own assets or assets acquired from the asset stores. Consequently, we have been quietly focusing on our framework, which enables asset tagging and positioning with natural language and machine learning.
Due to the lack of standardization in the game development process, we believe the game development industry is ready for disruption. That is why the Moonlander team is building technology based on best practices, with agnostic, cross-platform deployment and no need for a big budget. With our technology, we aim to enable UGC creation from any IP.
Research & Development
Moonlander's R&D team, comprising gaming veterans and product specialists who have worked at top gaming universities, on mobile games, and on AAA titles, strongly believes in the synergy of procedural technology and AI. This multidisciplinary team lets us combine cross-industry technology best practices, which has resulted in a horizontal framework operating on our Generative Reasoning System (GRS), a text-to-generator-setting technology powered by machine learning.
In the last year – with the release of Co-Pilot and our Alpha version 1 SDK – we have learned that there was a mismatch between our Level 1 text-prompt technology and the expectations of our early adopters. Our Alpha version 5 has significantly improved support for Level 2 and Level 3 text-prompting to better contextualize the question for asset positioning and generator settings, allowing creators to generate detailed settings for their generators for positioning and level creation.
It works as follows: you can text-prompt "Palm Beach Tropical Island"; the machine learning will then aggregate assets based on those words, and the Level 2 and Level 3 machine learning will adjust the height, scatter the objects coherently, and create the procedural settings that would normally have to be done manually in a graph- or node-based system. Once everything is aggregated from the asset library, we polish, tag, and position the assets with a high-fidelity approach to make coherent and beautiful worlds. Every decision can be overridden, giving the creator full control and flexibility, and this saves roughly an hour per asset. If the assets are not available, we are working on integrations to generate them via third-party AI-based asset-generation tools.
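To make the described flow concrete, here is a minimal sketch in Python. All names (`aggregate_assets`, `derive_settings`, the tag and layer fields) are hypothetical illustrations, not Moonlander's actual API; the sketch only assumes what the paragraph above describes: prompt words select tagged assets, and per-layer procedural settings are derived automatically instead of being authored in a node graph.

```python
# Hypothetical sketch of a text-to-generator-settings flow.
# Names and values are illustrative only, not Moonlander's real API.

def aggregate_assets(prompt, asset_library):
    """Level 1: pick assets whose tags overlap the prompt's words."""
    words = set(prompt.lower().split())
    return [a for a in asset_library if words & a["tags"]]

def derive_settings(assets):
    """Levels 2/3: derive per-layer procedural settings (height, scatter)
    that would otherwise be configured manually in a graph/node system."""
    settings = {}
    for asset in assets:
        layer = asset["layer"]
        settings.setdefault(layer, {"density": 0.5, "height_offset": 0.0})
        if layer == "terrain":
            settings[layer]["height_offset"] = 12.0  # e.g. raise beach dunes
        elif layer == "vegetation":
            settings[layer]["density"] = 0.8         # e.g. dense palm scatter
    return settings

library = [
    {"name": "palm_tree", "tags": {"palm", "tropical"}, "layer": "vegetation"},
    {"name": "sand_dune", "tags": {"beach", "island"},  "layer": "terrain"},
    {"name": "snow_rock", "tags": {"arctic"},           "layer": "terrain"},
]

chosen = aggregate_assets("Palm Beach Tropical Island", library)
settings = derive_settings(chosen)  # the creator can still override any value
```

Every derived value stays an ordinary dictionary entry, which is what makes the "every decision can be overridden" property cheap to provide in such a design.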
This text-to-generator technology can also be applied to game mechanics and dynamics with the same mathematical principles. We have tested this, and the first results are astonishing.
The System's Features
We offer a radically new way of game development and level design by combining our proprietary GRS with our integrated AI asset-tagging technology. With Moonlander, we use any IP or game-ready asset from a library, and our GRS places all these assets procedurally or generatively, coherently and beautifully. The user can override the decisions made by our GRS and tweak them to perfection, leading to a faster time to market. A virtual world consists of multiple layers, and our GRS generates the procedural or generative settings per layer and stacks them accordingly in a generator stack. Done manually, this can take days to months, depending on the fidelity of the virtual world.
This leads to faster production readiness, with a big part of the game already developed. Additionally, studios can configure and design “automated decisions” into the Moonlander framework and develop a whole ecosystem of games at once instead of a single game. This significantly reduces and automates repetitive and time-consuming tasks without losing control over the output. Our early adopters have used our technology over the entire lifecycle of game development, from ideation and near-real-time prototyping to production and support, and have seen time savings that can be applied across the gaming industry.
Our Creator Assistant ML is here to help game developers, not to eliminate them. We want to help everyone bring their ideas to life faster and publish them cross-platform. Additionally, we are validating our Creator Assistant with AAA studios, which have provided significant feedback that helps us improve and become a best practice for certain tasks.
Game Engine Compatibility
Currently, our tool is exclusively available on Unity, a strategic decision influenced by Unity's 1.5 million MAUs. While Unreal has developed PCG, we know it's tough to master: Unreal PCG is a geometry-script system that is challenging to set up but very versatile, while Unity procedural asset packs are more accessible but difficult to use with your own assets. That's why we created something as versatile as Unreal PCG and as accessible as a procedural asset pack, with a very user-friendly interface, asset management, asset tagging, and text-prompt technology. Our GRS digests text and manual edits to create generator settings for each layer and stacks them, fully automated, as a generator stack. While this currently applies only to Unity, we see the concept also being applied to Unreal PCG.
At this stage, we will condense all the features, techniques, and models into a stable, agnostic cloud platform that provides all the core functionality cross-platform, combined with light SDKs and integrations that take advantage of each game engine's workflows, rendering, and deployment. In the future, this will go beyond Unity and Unreal to engines such as Godot, Three.JS, and Babylon.JS.
We are also planning a cloud Render API strategy for the Unity and Unreal SDKs. This will allow our API to become part of any studio's game production pipeline. We have many Unreal opportunities in the pipeline, and if some of those materialize, we will prioritize the Unreal SDK, which is scheduled for Q2 2024.
Creating 3.5 Billion Different Levels With 600 Assets
To understand how the GRS accomplishes such a feat, we need a deeper dive into how Moonlander works. Three main components come together to generate a world:
- At the heart of our Generative Reasoning System (GRS) lies a procedural system powered by a 64-bit seed, capable of generating up to 18 quintillion variations. However, this system shares a common limitation with other procedural systems: the sheer volume of potential variations doesn't guarantee distinctiveness. Many of these variations might be too similar to each other or lack the coherence necessary to be considered unique and usable levels.
- Built upon the procedural system is a layered structure of generators, each designed for specific world-building tasks such as terrain generation, tree planting, rock placement, building creation, weather configuration, and defining rendering styles. All these generators function in tandem with the core procedural engine, contributing to a shared data pool that facilitates intercommunication. This setup enables varied combinations of generators to produce dramatically different outcomes, even when using the same seed. Consequently, this multi-generator approach exponentially expands the range of possibilities, pushing the potential variations towards a nearly infinite count.
- At the apex of our system is the machine learning orchestration layer. This advanced component harnesses all available generators and assets to formulate and fine-tune generator stacks in response to user inputs such as text prompts. It persistently strives to strike a balance between diversity and coherence, aiming to produce the greatest number of worlds that align with user requests. This approach introduces a dynamic method for arranging assets in unique configurations at various levels, thereby contributing to the generation of billions of distinct variants.
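The interplay of the three components above can be illustrated with a toy sketch, assuming only what the list describes (all function names here are hypothetical): a 64-bit seed drives a deterministic procedural core, layered generators read from and write to a shared data pool, and different generator stacks over the same seed produce different worlds.

```python
import random

# A 64-bit seed space: 2**64 = 18,446,744,073,709,551,616,
# the "18 quintillion variations" mentioned above.
SEED_SPACE = 2 ** 64

def terrain(rng, pool):
    """Base layer: write a tiny heightmap into the shared pool."""
    pool["heightmap"] = [rng.uniform(0, 100) for _ in range(4)]

def trees(rng, pool):
    """Higher layer: reads the shared pool; trees grow only on low ground."""
    pool["trees"] = [h < 50 for h in pool["heightmap"]]

def rocks(rng, pool):
    """Alternative layer: rocks appear only on high ground."""
    pool["rocks"] = [h >= 80 for h in pool["heightmap"]]

def generate(seed, stack):
    """Run a generator stack over one seed; generators communicate
    through the shared data pool, so the stack's composition matters."""
    rng = random.Random(seed)
    pool = {}
    for gen in stack:
        gen(rng, pool)
    return pool

world_a = generate(42, [terrain, trees])
world_b = generate(42, [terrain, rocks])  # same seed, different stack
world_c = generate(7,  [terrain, trees])  # different seed, same stack
```

Because generation is deterministic per seed, any world can be reproduced from its seed plus its stack, which is also why varying the stack multiplies the variation count beyond the raw seed space, as the second bullet explains. The third component, the ML orchestration, would then choose which stacks and settings to run, which this sketch does not model.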
So, the number of truly different levels the tool can generate depends on many things, and everything we generate is stored as digital DNA. The number presented is based on a simulation, tests of the tool, and our currently limited set of assets and generators. But the true potential of the tool is still to be discovered by our awesome early adopters.
The Roadmap
Presently, we are planning our closed Alpha 5 to be released on the 1st of February, with Alpha version 6 scheduled for release three weeks later. Our public Beta launch is targeted for early March. Following this, we'll shift our focus to developing the Cloud API and the Unreal SDK, enabling seamless integration of our API into any production pipeline, including those using Unreal. For those eager to get an early glimpse, we offer the option to sign up through the Unity Asset Store or our website, where early access can be granted.