Enemies: The Working Process Behind Unity's New Tech Demo

Unity's Natalya Tatarchuk and Veselin Efremov have shared an exclusive behind-the-scenes look at Enemies, the latest project by Unity’s Demo Team that showcases the engine's capabilities for powering high-end visuals.

Unity's Mission

80.lv: With all the recent news and acquisitions, how would you describe the mission of Unity today? What drives the team?

Our goal is to meet artists where they are, building workflows that connect them whether they work in Maya, Houdini, or Blender, or in any pipeline (VFX or real-time-focused), letting artists keep the workflows they like and enabling them to be more successful with our suite of tools interoperating smoothly with their favorite tools and workflows. We are also focused on making sure that artists never have to start from a blank canvas. That is why we are creating a rich content library, combining these tools with a rich asset offering and a pipeline to deliver content to any destination.

Ultimately, what drives our team is helping make artists into superheroes by giving them the best tools for their tool belt, whether that is incredibly powerful tools for character creation and deformation like Ziva, tools for modeling vegetation and organics like SpeedTree, or the power of the whole pipeline with the tools from Wētā Digital, and making it easy for the content created in all these tools to flow smoothly into engines like Unity.

Using Unity in Filmmaking

80.lv: People mostly consider Unity a toolkit for game developers, but with all the recent advancements, I think the software is ready to step into other fields like filmmaking and metaverses. What is your take?

Although Unity has its roots in game development, the platform has been adopted by many industries creating real-time 3D applications. At Unity, we believe the world is a better place with more creators in it, and a creator should not be limited by the scope of what a tool is meant to do for them. From projects like Enemies and The Heretic, to virtual stage productions such as MPC's The Lion King done with Unity, to Netflix's The Adam Project and The Umbrella Academy, Apple's Ted Lasso, and award-winning productions like Sonder, to fully virtualized digital car factories from Volkswagen, to immersive projects like The Changing Same, which helps explore Black history and healing, to immersive concerts by Insomniac Events done with Unity, Unity has been right there with our users, no matter what their creative vision is.

And now, with best-in-class, fast character creation through Ziva Dynamics, amazing workflows for modeling vegetation and organics with SpeedTree, and the incredible tools from Wētā Digital, from their hair-grooming suite Barbershop and their environment-building workflows in CityBuilder and Scenic Designer to final-frame rendering through the advanced spectral renderer Manuka, we will give creators more power and more options to help make content creation accessible to all, no matter what their ambition or vision is.

Enemies Tech Demo

80.lv: How did you get started with the Enemies demo? What were your goals here?

After we made our previous demo, The Heretic, we released a tech package called the Digital Human Package. It contained all the technology we developed for The Heretic, as well as the character asset. It was obvious to us at that point that we had many more opportunities for improvement that would allow us to take the quality of our digital humans to the next level.

We had validated the 4D pipeline, but both acquisition and processing had improved in pace with our project development. Our intentions for the technology we had laid the groundwork for were also quite clear: we had many ideas about where it should go next, including the obvious need to develop a solution for hair. So in some ways the project suggested itself; our work was cut out for us, and we dove right in.

On the IP side, Enemies was written as a longer cinematic in which we intended to scope up the work quite ambitiously, with two characters on screen at the same time, talking, engaged in a conversation with each other. This setup opens up so many more challenges and new problems to solve that it was very tempting to do; breaking new ground is the main mission of our team at Unity. As we went through pre-production and planning, we realized that breaking the work down into smaller chunks would be the more responsible thing to do. So Veselin Efremov, the team's Creative Director, wrote the Enemies teaser as a smaller piece of this project, one which contains one of the characters and is indicative of the themes and mood of the piece, and this is what we are releasing today. With Unity's recent acquisition of Ziva Dynamics, we have high hopes that it will soon become possible to produce the larger project as originally intended, because Ziva's technology can enable us to produce a much larger volume of realistic facial performances.

The Production Process

80.lv: Could you tell us about the production? How much time did it take to finish the demo? What was the process? How did you combine different workflows?

Setting aside the pre-production for the full cinematic, we still spent over a year working on this teaser alone. A lot of that time went into preparing the technology and evolving it alongside the production. We also collaborated with Unity's engineering team to ensure that the hair system is consistent and the workflow is validated and smooth, from authoring through import and simulation in Unity to shading and rendering.

In parallel with the tech development, content production was also taking place, with all the usual steps: we cast our actress, rehearsed the part, and prototyped the direction by recording her on video first. Once we had validated what we intended to produce, we replicated the same scanning pipeline we used for The Heretic: 3D scanning at the 4DMax studio, 4D scanning at Infinite Realities, and data processing at R3DS. Additionally, for the body and hand motion, we used a studio in Paris called MocapLab that specializes in hand animation.

The hair was created entirely separately. We worked with an artist who specializes in hair grooms for the VFX industry. We had multiple grooms and variations created, which allowed us to experiment more widely during the development of the hair system; a solution for hair should be able to handle a wide variety of hair types and styles. We will continue to work on it.

A very large share of the work fell to our 3D artist Plamen Tamnev, who covered 90% of the art for this project. He worked on all of the character art and the hair setups, as well as on the dress, which he created in Marvelous Designer, and the whole environment, from design to execution.

Advancements in Digital Human Creation

80.lv: Could you talk about some of the most important technological advancements in digital human creation including moving the Skin Attachment system to the GPU?

First of all, the entire feature set of the High Definition Render Pipeline (HDRP) has evolved in the Unity 2022 release, adding significant quality improvements and performance optimizations to make features robust for real-world, scalable productions. A number of new systems have been added, including support for Shader Graph motion vectors, adaptive probe volumes for global illumination, major improvements to SSGI (screen-space global illumination) and screen-space reflections, and more. All are leveraged in the Enemies demo to raise its visual quality higher than ever. The demo makes use of real-time ray tracing and Unity's native support for NVIDIA DLSS (Deep Learning Super Sampling), which allows it to run at 4K with image quality comparable to native resolution. All of these solutions and technologies are already available to Unity developers.
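For context on what the DLSS integration looks like from a developer's side, here is a minimal sketch, assuming Unity 2021.2+/HDRP 12+ with DLSS enabled in the HDRP asset's dynamic-resolution settings and the NVIDIA module installed; the exact property names should be verified against your HDRP version:

```csharp
using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

// Minimal sketch: opt a single HDRP camera into NVIDIA DLSS at runtime.
// Assumes HDRP 12+ where these controls are exposed on HDAdditionalCameraData.
[RequireComponent(typeof(HDAdditionalCameraData))]
public class EnableDlssOnCamera : MonoBehaviour
{
    void Start()
    {
        var hdCamera = GetComponent<HDAdditionalCameraData>();

        // DLSS rides on HDRP's dynamic-resolution system.
        hdCamera.allowDynamicResolution = true;

        // Opt this camera into DLSS upscaling.
        hdCamera.allowDeepLearningSuperSampling = true;

        // Optionally override the project-wide DLSS quality for this camera
        // (DLSSQuality comes from the com.unity.modules.nvidia module).
        hdCamera.deepLearningSuperSamplingUseCustomQualitySettings = true;
        hdCamera.deepLearningSuperSamplingQuality =
            (uint)UnityEngine.NVIDIA.DLSSQuality.MaximumQuality;
    }
}
```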

In addition, we developed an all-new strand-based hair system for Unity. It allows creators to make their hair grooms in any authoring tool of their choice (we used Maya XGen), import them into Unity, and have them attached and simulated there in real-time. Then it is possible to apply any kind of shading, depending on the render pipeline used in the project. We will release this system to the community within the coming month or two. Specifically for the goal of realistic-looking hair, Unity now also has dedicated hair shading as part of HDRP, which is already available.
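Since the hair system was not yet public at the time of this interview, the following is not its API but a minimal, self-contained sketch of the core loop most strand-based simulators share: Verlet integration plus distance constraints along each strand, with the root pinned to the scalp. All names here are illustrative:

```csharp
using UnityEngine;

// Illustrative only: the basic loop behind most strand-based hair simulators
// (Verlet integration + per-segment distance constraints). This is NOT the
// API of Unity's hair package, which was unreleased at the time of writing.
public class StrandSimSketch : MonoBehaviour
{
    public int pointsPerStrand = 16;
    public float segmentLength = 0.01f;   // rest length between points, in meters
    public float damping = 0.98f;
    public int constraintIterations = 4;

    Vector3[] positions, previous;

    void Start()
    {
        positions = new Vector3[pointsPerStrand];
        previous = new Vector3[pointsPerStrand];
        for (int i = 0; i < pointsPerStrand; i++)
            positions[i] = previous[i] =
                transform.position + Vector3.down * (i * segmentLength);
    }

    void FixedUpdate()
    {
        float dt = Time.fixedDeltaTime;

        // 1. Verlet integration: infer velocity from the previous frame.
        for (int i = 1; i < pointsPerStrand; i++) // root (i = 0) stays pinned
        {
            Vector3 velocity = (positions[i] - previous[i]) * damping;
            previous[i] = positions[i];
            positions[i] += velocity + Physics.gravity * (dt * dt);
        }

        // 2. Constraint relaxation: keep neighbors at the segment rest length.
        for (int iter = 0; iter < constraintIterations; iter++)
        {
            positions[0] = transform.position; // re-pin the root to the scalp
            for (int i = 1; i < pointsPerStrand; i++)
            {
                Vector3 delta = positions[i] - positions[i - 1];
                float error = delta.magnitude - segmentLength;
                positions[i] -= delta.normalized * error; // move child only
            }
        }
    }
}
```

A production system runs this per strand on the GPU for tens of thousands of strands and adds collision and curl-preserving constraints, but the integrate-then-relax structure is the same.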

We are also preparing an upgraded version of the Digital Human Package I mentioned above, where we will include all the new developments we made specifically for Enemies: the addition of caustics for the eyes, the introduction of tension tech, which drives blood flow and wrinkle maps (thus eliminating the need for a facial rig), the skin shader, and the Skin Attachment system moved to the GPU, which enables us to have peach fuzz.
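The interview doesn't spell out how the tension tech is implemented, but a common approach is to compare each skin vertex's edge lengths against the rest pose: compression drives wrinkle-map blending, stretch drives blood-flow changes. Here is a rough CPU-side sketch of that measurement with illustrative names, not Unity's actual package code (in production this would run on the GPU and feed a shader):

```csharp
using UnityEngine;

// Rough sketch of the usual idea behind "tension tech": per-vertex edge
// compression/stretch relative to the rest pose, which a skin shader can
// use to blend wrinkle normal maps (compression) or shift blood-flow tint
// (stretch). Names and structure are illustrative.
public static class SkinTensionSketch
{
    // edges[v] lists the neighbor vertex indices of vertex v (precomputed once).
    public static float[] Compute(Vector3[] restPositions,
                                  Vector3[] currentPositions,
                                  int[][] edges)
    {
        var tension = new float[currentPositions.Length];
        for (int v = 0; v < currentPositions.Length; v++)
        {
            float restLen = 0f, curLen = 0f;
            foreach (int n in edges[v])
            {
                restLen += Vector3.Distance(restPositions[v], restPositions[n]);
                curLen  += Vector3.Distance(currentPositions[v], currentPositions[n]);
            }
            // < 0: compressed (wrinkles appear), > 0: stretched (blood drains).
            tension[v] = restLen > 0f ? (curLen - restLen) / restLen : 0f;
        }
        return tension;
    }
}
```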

Current Limitations

80.lv: It’s getting so much easier to generate realistic digital humans. What are the main limitations at this point for smaller-scale teams and individuals? How do you plan to deal with this line between big studios and individual creators?

With the latest developments in graphics technology and engine capabilities in Unity, many of the obstacles that previously existed for realistic digital human creation are falling away. However, even in the highest-quality productions that have recently shipped in the AAA space, believable, realistic characters are extremely complex to produce, require significant expertise from dedicated teams, and take a great deal of time to create. Furthermore, because of the complexity of many of the current pipeline stages (for example, rigging setup for character puppets), developers and creatives can't iterate on character design with a tight loop between the game or story design and the character setup. This limits what stories they are able to tell and how well players can interact with these characters. With the current complex, clunky, fragile, bespoke pipeline, with tens of custom steps and varied tools that all need to be hooked up together, a single high-quality character can cost upwards of 4 months for a team of 12 to create! Simply put, character creation is one of the costliest aspects of game creation, yet it's the one we need the most. And if this is a challenge for large studios, it's an incredible burden for small indie creators.

What we are focused on building at Unity to solve this set of issues is a major simplification of the character creation pipeline, with accessible tools like ZivaFaces, ZivaRTTrainer, and many others, that take that lengthy creation time of weeks if not months per character head down to a mere hour, with a button click on the cloud (for ZivaFaces, for example, to create a fully rigged face puppet). These novel character creation tools aim to remove the friction in the pipeline, and not only accelerate the creation process but also make it accessible to a wider range of creators.

Another aspect we are delivering is making it easy to bring believable character performances at the highest possible quality, in a scalable way, across the wide range of platforms and devices Unity can be experienced on. We are building an accessible pipeline for easy performance capture, whether from a high-end head-mounted camera or from a mobile phone, to drive a believable, high-quality character performance. And with Wētā Digital's tools, we are innovating on the entire streamlined character creation pipeline to make it easy for creators of all levels and skills to achieve high-quality results. Imagine game characters that will blow your players' minds, digital stand-ins on the latest virtual production set that hold up to close-up shots, virtual participants in immersive training scenarios that are more believable than ever before: the possibilities for creators are limitless! And that's just the beginning.

Stylized Art

80.lv: Fellow artists know that creating something realistic gets easier while stylized art is still a costly job when it comes to time. Did you consider finding ways to automate stylized art, too? What are your thoughts?

The hair tech we have developed was made to support any artistic style, whether realistic or stylized, in terms of groom, simulation/animation, and shading.

For Unity in general, our mission is to enable all kinds of creators to achieve any kind of real-time 3D project; this also means supporting any artistic vision and stylization.

Conclusion

Unity will be attending GDC (March 23-25), and we invite anyone visiting to check out Enemies running in real-time and have a look at it directly in the Unity Editor at the Unity booth.

We are also delivering several presentations that dive deeper into the various technologies used, including a globally streamed Twitch session.

Natalya Tatarchuk, Unity’s VP of Graphics, and Veselin Efremov, Principal Creative Director for Unity Technologies' Demo Team

Interview conducted by Arti Burton
