Didimo's CTO João Orvalho spoke about the company's new AI-powered character generator and editor for game developers and character artists, shared the biggest challenges in generating AI results, and discussed the future growth potential he sees for the tool.
Introduction
I am João Orvalho, CTO at Didimo. I bring 20+ years of programming, engineering, and international team management experience delivering highly complex technical solutions, ranging from national election voting-system security to computer visualization. I earned a Master’s in Networks & Communication Systems and an Engineering degree in Electrical & Computing Engineering from Faculdade de Engenharia da Universidade do Porto.
Didimo is a leading creator of high-fidelity digital human avatars and AI-generated game characters. To date, we have earned 4 patents for our robust platform that can turn a 2D photo into a full 3D digital human twin or can leverage input prompts and editing tools to build humans or any creature imaginable for use in games, metaverses, and immersive experiences. We can generate hundreds of customized 3D characters in minutes, so we can help companies generate at speed and scale in real time.
Didimo's AI-Powered Character Generator and Editor
We are preparing to launch our latest product, an AI-powered character generator and editor for game developers and character artists, in April. This new tool will accelerate game production with the ability to rapidly create hundreds of unique game characters in a fraction of the usual time and cost. Work that normally takes painstaking weeks and great expense becomes the easy mass population of an infinite variety of diverse non-playable characters (NPCs), resulting in much richer, more captivating game experiences for players.
Our tool builds on Didimo’s robust avatar creation pipeline, making 3D game character creation exceptionally quick while still allowing character artists and game developers full creative control over design. This was critical to us… to empower artists and give them more possibilities to realize their vision. Soon artists will be able to create animatable, diverse 3D characters in seconds through simple text prompts.
From there, artists can fine-tune characters for the game's look and style, choosing from comprehensive character creator options or ingesting template characters from the game to direct style choices. Through an easy-to-use interface, our tool allows batch creation and editing, empowering artists to command hundreds of characters in groups to fill worlds, levels, or specific experiences, with the unusual ability to randomize traits across groups. The last critical component is a unique way characters can be served that protects game memory budgets while allowing dramatically greater quantities of characters to be used.
We see this as a game-changer. Now, not only can games truly have more diverse and performant characters but designers can leverage more characters for specific moments or experiences and can even have those characters be relevant to the situation of the actual player. For example, if it's raining where the game player is located, the game could call up the version of each character that includes rain jackets, umbrellas, or rain boots. This opens up new ways to think of game design and helps designers achieve more of their vision.
So where does AI fit in? Actually, across that entire chain. In original character generation, we leverage AI across our broad library of character morphology, built over years, to generate exactly the base character a designer wants, such as a middle-aged Chinese woman with long hair who comes from the countryside.
Next, we have several instances of AI in our editing tool that make manual tasks faster and allow artists to manage edits across groups, subsets, or individual characters all at once, including full randomization or directional design of characters.
For example, an artist can take a group of 100 characters and immediately alter them to make that group evenly split across male/female, old/adult/young, tall/medium/short, a diverse range of ethnicities, and so on. Or they can transform them into orcs, ogres, and other types of characters. The possibilities are endless but the processing of those changes is nearly instantaneous.
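The even-split batch edit described above can be sketched as a simple trait-assignment routine. This is an illustrative sketch only, assuming a plain list of trait axes; the axis names and the `assign_traits_evenly` function are invented for the example and are not Didimo's actual tooling or API.

```python
import itertools
import random
from collections import Counter

# Hypothetical trait axes; the names here are illustrative, not Didimo's API.
TRAIT_AXES = {
    "gender": ["male", "female"],
    "age": ["young", "adult", "old"],
    "height": ["short", "medium", "tall"],
}

def assign_traits_evenly(num_characters, trait_axes, seed=None):
    """Give each trait value a near-equal share of the group, then shuffle
    each axis independently so combinations are mixed across characters."""
    rng = random.Random(seed)
    assignments = [{} for _ in range(num_characters)]
    for axis, values in trait_axes.items():
        # Cycle the values to cover the whole group, then shuffle the column.
        column = list(itertools.islice(itertools.cycle(values), num_characters))
        rng.shuffle(column)
        for character, value in zip(assignments, column):
            character[axis] = value
    return assignments

group = assign_traits_evenly(100, TRAIT_AXES, seed=42)
print(Counter(c["gender"] for c in group))  # exactly 50 male / 50 female
```

Because each axis is shuffled independently, the split along every axis is as even as the group size allows, while individual characters still get varied combinations.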
Finally, in serving the character files, we have a patented approach for serving characters at the right moment and pulling forward only the data that differs for that instance, so that more characters can be planned within games without breaking memory constraints.
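The general idea behind this kind of delta-based serving can be sketched as a shared base asset plus small per-instance overrides, so N characters cost far less memory than N full copies. To be clear, Didimo's patented method is not public; the structures below are invented purely to illustrate the general technique.

```python
# Shared base asset: one copy in memory regardless of instance count.
# All names here are hypothetical placeholders, not real asset paths.
BASE_CHARACTER = {
    "mesh": "human_base.mesh",
    "texture": "skin_default.tex",
    "outfit": "casual.set",
}

def instantiate(overrides):
    """Serve a character as the shared base plus only its differing fields."""
    character = dict(BASE_CHARACTER)
    character.update(overrides)
    return character

# 100 NPCs share one base; a quarter of them override only their outfit.
crowd = [instantiate({"outfit": "raincoat.set"} if i % 4 == 0 else {})
         for i in range(100)]
print(sum(c["outfit"] == "raincoat.set" for c in crowd))  # 25
```

In a real engine the overrides would reference texture and mesh deltas rather than dictionary fields, but the memory argument is the same: per-instance cost scales with what differs, not with the full character.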
The best part of all of this is that our AI tools are directed and empowering. They deliver on the creative vision and needs of the artists using and directing the tool rather than forcing them to accept formatted characters that do not respect their game aesthetic or specifications.
Main Challenges in Generating AI Results
We are currently researching and developing around the following challenges:
- Human face modeling
The goal is to automatically generate unseen, yet plausible, human head shapes conditioned on both generic descriptors (gender, age, ethnicity) and specific shape characteristics of the eyes, nose, jawline, etc., as well as skin details such as skin color, pores, wrinkles, and freckles. For instance: create a 60-year-old East-Asian male with a thin face, square jawline, and pockmarked cheeks. To achieve this, we build statistical models of both head shape and head texture from a variety of real human 3D scans, using advanced machine learning and computer vision methods.
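The core of such a statistical shape model can be sketched in a few lines, in the spirit of classic PCA-based 3D morphable models. This is a minimal sketch under stated assumptions, not Didimo's method: the "scans" are synthetic random data standing in for real 3D head scans, and all sizes and names are invented for illustration.

```python
import numpy as np

# Synthetic stand-ins for registered 3D head scans: each row is one scan's
# flattened (x, y, z) vertex coordinates. Sizes are arbitrary for the demo.
rng = np.random.default_rng(0)
n_scans, n_vertices = 200, 500
scans = rng.normal(size=(n_scans, n_vertices * 3))

mean_shape = scans.mean(axis=0)
centered = scans - mean_shape
# SVD of the centered data yields the principal directions of shape variation.
_, singular_values, components = np.linalg.svd(centered, full_matrices=False)

def generate_head(coefficients, k=10):
    """Synthesize a new head as the mean plus weighted principal components.
    In a production system the coefficients would be predicted from
    descriptors (age, ethnicity, 'square jawline') rather than sampled."""
    stddev = singular_values[:k] / np.sqrt(n_scans - 1)
    return mean_shape + (coefficients * stddev) @ components[:k]

new_head = generate_head(rng.normal(size=10))
print(new_head.shape)  # flattened vertex positions of a novel head
```

The conditional part of the problem, mapping text-level descriptors to plausible coefficient values, is where the harder machine-learning work lives; the linear model above only defines the space of valid heads.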
- Animation retargeting
Any character that we create is fully animatable and we need to make sure that it animates realistically. This involves transferring animations created by artists on a template character to any new character. The challenge is to adjust the template rig for the shape differences between the template and the new character, a problem known as animation retargeting in computer graphics. Again, we use advanced AI methods to make this process fully automated, fast, and accurate.
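One small piece of the retargeting problem can be shown concretely: joint rotations usually transfer directly between skeletons with matching hierarchies, while translations (chiefly the root/hips) must be rescaled to the new character's proportions. The function and numbers below are a toy illustration invented for this example; real retargeting also corrects per-limb proportions, foot contacts, and self-intersections.

```python
import numpy as np

def retarget_root_translation(source_translation, source_leg_length, target_leg_length):
    """Rescale a root translation so the target character's stride matches
    its own leg length instead of the template's."""
    return source_translation * (target_leg_length / source_leg_length)

# A tall template (1.0 m legs) animating a shorter character (0.7 m legs):
step = np.array([0.0, 0.0, 0.6])  # template root moves 0.6 m forward per step
print(retarget_root_translation(step, 1.0, 0.7))  # stride shrinks to 0.42 m
```

Without this rescaling, the shorter character would slide across the floor, covering the tall template's stride with its own shorter legs.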
- Advanced stylization
Key to every new game is being able to define a specific style aligned with the game designer and art director’s vision. We are working on advanced stylization methods that allow us to generate variations of characters within a game style while still maintaining the uniqueness of each character.
Future Growth for the Tool
We have a robust roadmap to add features and to broaden how games can leverage this technology. From what I can share now: because we can ingest a game's specific aesthetic style so that base characters start in that style, the tool will grow in its ability to deliver an increasingly broad array of character types and nuances.
We will also have a player-facing module soon, meaning that game designers will be able to let their players tailor specific characters within the game, making the playing experience even more personalized.
We are also working hand-in-hand with game publishing partners to define new features they need for upcoming titles. So I’m sure this will unlock additional feature requests that different genres require.
What we are helping to do is unlock whole new ways games can be designed, so that gameplay becomes even richer and more engaging and creates delight every time a gamer plays. We believe in stories, and we believe games tell the best stories available. Now any story is possible, so we look forward to seeing how this technology can remove rote tasks and free designers to create types of gameplay and stories we have yet to see.