Pavel Šafář, Project Lead on the recently announced Enfusion Engine by Bohemia Interactive, the developer of Arma and DayZ, has told us about the development process behind the engine, talked about its features, and shared some exclusive screenshots, showing the engine's lighting capabilities.
Introduction
80.lv: Please introduce yourself and your team. How did you join the Bohemia team?
Pavel Šafář: My name is Pavel Šafář and I am a Project Lead on the Enfusion project, which is also the name of the proprietary game engine we’re developing at Bohemia Interactive. I joined this team in 2017, but my career at BI started in 2016 as a Game Programmer for Take On Mars.
When I joined Enfusion, the team consisted of about 5 people, though not all of them were able to work on it full time since we were also busy delivering certain aspects of Enfusion as part of DayZ. I didn’t start as a Project Lead back then but as a Programmer. That changed in 2018. Now, in 2022, there are around 30 people who work on or contribute to the Enfusion engine on a regular basis, so the team has grown significantly. Since we want Enfusion to power the next generation of our multiplayer and persistent games, we added backend developers to our team in 2020.
I wasn’t a professional Game Developer before joining BI; my previous field was business IT. Playing video games and programming are my lifelong hobbies, so I had plenty of reasons to get into the industry. It was quite a career twist, but I think it’s one of the best decisions I have ever made.
Enfusion Engine
80.lv: How did the story of Enfusion begin? What was the original plan? What goals did you have?
Pavel Šafář: The earliest mention of Enfusion dates back to 2014 when our CEO decided that Bohemia needed to develop a powerful and flexible game engine. But it took a few years before development actually started in earnest. I believe we split the engine source code from DayZ in 2018. Bohemia was starting to show significant growth and people began working on Enfusion full time.
We didn’t have any specific goals. Bohemia Interactive has over 20 years of experience making games and game engines, so our approach was iterative and based on discussions with experienced people throughout the company. We talked a lot with our Creative Director, Ivan Buchta, who told us which features he wanted to be implemented in our subsequent games and we developed the technology accordingly.
We didn’t start from scratch. We reused a lot from our Enforce and RV engines. Enfusion is kind of a mashup of the two, though many things have been rewritten or upgraded. We also deleted thousands of lines of code knowing that we’d need to write that code again, but better or differently. But you can’t develop an engine forever; you must have a game project in mind. Our first project was to “port” the beautiful island of Tanoa from Arma 3 into Enfusion and run it on PlayStation 4. Tanoa was chosen because it was big enough and included a lot of entities, so it was a good test for the engine. We did other small internal projects on top of Enfusion back then, including one prototype which was released on Steam with the code name Project Lucie. Fun fact – I was the one integrating the Oculus Rift headset and controllers with Enfusion.
The Engine's Architecture
80.lv: Could you tell us about the architecture? What is the core of the new engine? How is it organized?
Pavel Šafář: We want to develop multiple game titles on top of Enfusion, but we need to do it right, and for that, we need good engine architecture. It’s something we care about a lot, and it’s constantly looked after by our Lead Engine Programmer. We simply can’t allow the introduction of game-specific features into the core of the engine; we are keeping these two things separate. There is some “overhead”, but it’s needed for long-term sustainability.
The engine is mostly written in C/C++. We are quite conservative in terms of the latest language features and we also pay a great deal of attention to which libraries we’re using and introducing to Enfusion. Since the engine must be multiplatform, we always need to keep this in mind when choosing a library or framework to use. Currently, we support Windows on PC, Xbox, and PlayStation. We also support Linux, but only for our dedicated servers. We try to keep the amount of platform-specific code as low as possible. Anything that can run in parallel runs that way. Our games are big in terms of size, simulation, and data, so our engine needs to be heavily optimized.
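As a rough illustration of the “anything that can run in parallel runs that way” idea, here is a generic sketch with made-up types, not Enfusion code: an order-independent per-entity update dispatched across all available cores.

```cpp
#include <algorithm>
#include <execution>
#include <vector>

// Hypothetical per-entity state; Enfusion's real data layout is not public.
struct Entity {
    float position[3];
    float velocity[3];
};

// Order-independent work can be spread across all cores. C++17 parallel
// algorithms are just one way to express this; an engine would typically
// use its own job/task system instead.
void UpdateEntities(std::vector<Entity>& entities, float dt)
{
    std::for_each(std::execution::par_unseq, entities.begin(), entities.end(),
                  [dt](Entity& e) {
                      for (int i = 0; i < 3; ++i)
                          e.position[i] += e.velocity[i] * dt;
                  });
}
```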
One of the key features we really wanted to have and improve upon in Enfusion is modding, including support in our tools. This is an important aspect and one that influenced the architecture a lot. Another aspect we considered is that Enfusion should be able to drive big and dynamic multiplayer games, even on mid-tier hardware. We can’t bake many things into the data, but we can compute or refresh most of them during runtime.
We want Enfusion to be powerful. Not just for us C++ developers, but for technical designers/scripters, as well as all the content creators in our community. Therefore, our scripting language Enforce and our feature-rich script editor are important parts of the architecture. They’re fast and offer many possibilities, which is why some rather important parts of the game can be, and are, developed in Enforce script.
I already mentioned the editor. We call it the Workbench, and it can be seen as an IDE, where you find everything you need to develop a game or game content on top of Enfusion. Our motto is “No more text editors. Everything must be possible in the Workbench”.
80.lv: How difficult is it to master for new artists joining your team? How would you describe it if someone asked you to compare it with other available solutions?
Pavel Šafář: There are many kinds of artists on our game teams. Let’s take those people who make models, for example (meshes, textures, materials). Surprisingly, Enfusion doesn’t enforce any specific knowledge or workflow an artist wouldn’t already have. Our model import pipeline is comparable to other game engines, so even “non-Enfusion” models can be imported and you should be able to see the results. This applies to simple and usually static objects. For dynamic ones, you need to have a lot of additional information in the data. That’s why we have guidelines and documents which we will make available, so everybody knows how to make a properly working asset for our game. Also, our technical artists are working hard to make these assets compatible with Blender, so it’ll be possible to preview a model with textures and materials in Blender before importing it into Enfusion. This will speed up the modeling process significantly.
Then we have audio artists. Thanks to our powerful, data-driven audio editor, they can design how the sound should be “mixed” under many different circumstances. Again, no configs – all the power goes to the audio designer. A new audio designer shouldn’t find it difficult to get into sounds and audio effects for Enfusion. On the features level, I believe we have all the important audio tools available, so they won’t have to resort to using third-party middleware.
There are also Animators and/or VFX Artists to consider. Both will find powerful and easy-to-use editors available to make their desired effects. Bohemia Interactive has its own MoCap studio to author animations, so we use Motion Builder before importing them to Enfusion. It’s third-party software, and it’s not free, so we did the same thing we did with modeling, which is to say that Enfusion supports authoring animations in Blender, even when plugins like Blender Rigify are used. Basically, you do not need to use any paid software to prepare game data for Enfusion.
Generally speaking, I think people who have experience with other game engines will find our data pipelines to be quite standard.
The Engine's Possibilities
80.lv: You mentioned the engine allows creating believable environments. What are the elements of this believability? What new possibilities does the tech provide?
Pavel Šafář: Quite a few things, actually. The first is size. We want our world to be big and open, with no artificial corridors. The second is world simulation, mainly the day/night cycle and weather. For this, we need a very accurate atmosphere simulation, including clouds. Then there’s the number of objects. Our forests are very dense, with roughly a million tree entities per world. Another element is good-looking grass clutter, be it 2D or 3D. We also want our players to see far into the distance. This poses challenges, as shadows from objects are usually only rendered up to a few hundred meters away. We want wet materials when it’s raining and we want them to dry when the sun is out, not to mention reflections. When you set up a new world, you must give it its correct geographic position because we simulate the sun’s travel, as well as the stars. If you look at the sky at night, you see it as if you were physically there. We want you to be able to navigate the environment via the stars in our games, or at least have the moon serve as an indirect light source that can guide you at night. We want different lighting indoors and outdoors. We want to have lakes, rivers, big rocks, roads, dirt roads, powerlines, and many other audio/visual elements which will give you the feeling that you’re inside our worlds. Many nice things are already possible, but we still have a full backlog of features that we’d like to include.
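To illustrate why a world needs a real geographic position, here is a textbook solar-elevation approximation (declination plus hour angle). It is only a sketch of the general idea and says nothing about how Enfusion actually models the sun and stars.

```cpp
#include <cmath>

// Textbook solar-elevation approximation, shown only to illustrate why a
// world needs a real geographic position for sun simulation; this is not
// Enfusion's implementation.
double SolarElevationDegrees(double latitudeDeg, int dayOfYear, double solarHour)
{
    const double kPi = 3.14159265358979323846;
    const double deg2rad = kPi / 180.0;

    // Approximate solar declination for the given day of the year.
    double declinationDeg = 23.44 * std::sin(deg2rad * (360.0 / 365.0) * (dayOfYear - 81));
    // Hour angle: the sun moves 15 degrees per hour away from solar noon.
    double hourAngleDeg = 15.0 * (solarHour - 12.0);

    double sinElevation =
        std::sin(deg2rad * latitudeDeg) * std::sin(deg2rad * declinationDeg) +
        std::cos(deg2rad * latitudeDeg) * std::cos(deg2rad * declinationDeg) *
        std::cos(deg2rad * hourAngleDeg);
    return std::asin(sinElevation) / deg2rad;
}

// For example, at about 49°N (central Europe), solar noon in mid-June reaches
// an elevation of roughly 64°, while in mid-December it only reaches about
// 17°, a seasonal difference a dynamic sky has to reproduce.
```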
80.lv: How does the tech deal with a large-scale open world? How does it treat vegetation, complex geometry, and massive landscapes?
Pavel Šafář: We’re using dynamic LODs for our terrain, as well as for our ocean simulation. Level of Detail is a very common optimization. We do not animate many bones on models where you can’t see the result; we blend between an object’s LODs. Many models, as well as terrain features, have occluders that inform our renderer as to whether the geometry behind them needs to be rendered or discarded. We also stream stuff in and out in multiplayer in order to keep data traffic low and server performance at an acceptable level. Since you can’t fit all the world data into memory, we stream it (textures, audio, meshes).
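A minimal sketch of distance-based LOD selection, with hypothetical types and without the blending and occlusion feedback a real engine would add, might look like this:

```cpp
#include <vector>

// Hypothetical LOD description: each level becomes active beyond a distance.
struct LodLevel {
    float minDistance;  // camera distance at which this level starts to apply
    int   meshIndex;    // which simplified mesh to render
};

// Pick the coarsest LOD whose distance threshold the camera has passed.
// A real engine would also blend between levels and feed occlusion results
// into the decision; this only shows the core idea.
int SelectLod(const std::vector<LodLevel>& lods, float distanceToCamera)
{
    if (lods.empty())
        return -1;                           // nothing to render
    int selected = 0;
    for (int i = 0; i < static_cast<int>(lods.size()); ++i) {
        if (distanceToCamera >= lods[i].minDistance)
            selected = i;                    // levels assumed sorted by minDistance
    }
    return lods[selected].meshIndex;
}
```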
We use a similar approach to vegetation as we did in Arma 3, though we improved upon it a lot. Trees and bushes are regular objects, and the Level of Detail system applies to them. Grass clutter is rendered around the camera up to a certain configurable distance, and we made sure that it blends smoothly into the terrain.
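As a hedged example of how clutter can fade out at a configurable radius rather than popping (the parameter names here are invented, not Enfusion's), one could write:

```cpp
// Illustrative clutter density falloff: full density near the camera, fading
// to zero at a configurable radius so grass blends into the terrain texture
// instead of popping. Parameter names are made up for this example and
// assume 0 < fadeBand <= clutterRadius.
float ClutterDensity(float distanceToCamera, float clutterRadius, float fadeBand)
{
    if (distanceToCamera >= clutterRadius)
        return 0.0f;
    float fadeStart = clutterRadius - fadeBand;
    if (distanceToCamera <= fadeStart)
        return 1.0f;
    // Linear fade inside the band; a real implementation might use a smoother
    // curve and dither the result per clutter instance.
    return 1.0f - (distanceToCamera - fadeStart) / fadeBand;
}
```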
Anything that could be simulated purely on the GPU side was. Sadly, we usually need things to be computed on the CPU as well, since many graphical effects also need other properties like sound, affect physics, or need to be replicated in multiplayer. Doing technology for our games is not a simple endeavor.
Lighting
80.lv: Could you also discuss the engine’s tools for lighting, skyboxes, wind, and other weather effects? Does it support complex fog effects?
Pavel Šafář: A skybox might be nice in a static scene, but it can’t be used in our case, at least not for dynamic things in the sky. Our sky and atmosphere must be dynamic. We invested a lot of effort into physically simulating the sky as closely as possible. We removed the cloud rendering technology we used in DayZ and wrote our own. It took some time, but we’re satisfied with the results in terms of detail, render distance, shadows, etc. We completely rewrote the renderer; it is no longer DX11 on PC as it is in DayZ. Instead, the rendering backend is written in DX12, so we can heavily optimize. We also changed the lighting model to PBR. We still support old RV materials, but they do not look good in our new lighting model.
The renderer and shaders were greatly improved in recent years and we’re proud of the overall result, more so than any particular feature. We are now at the point where all the rendering features are starting to fit together and we can see some nice results in the screenshots. This fills us with joy.
The Multiplayer
80.lv: Please tell us about the multiplayer aspect. How was it designed for these particular needs?
Pavel Šafář: The state of the world must be synchronized or simulated deterministically (e.g. clouds). It would be a real disadvantage, for example, if player A should be less visible because he’s standing in a cloud shadow, but other players can see him being lit by the sun. On the other hand, these kinds of things are not the heaviest in terms of multiplayer synchronization, considering the hundreds/thousands of other replicated entities we usually have in our scenes.
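A toy example of the deterministic approach: if every machine shares the same seed and the server-driven game time, a value like cloud coverage can be evaluated locally and still agree everywhere. The hash-noise function below is purely illustrative, not Enfusion's cloud simulation.

```cpp
#include <cstdint>

// Toy example of seed-driven determinism: if every client knows the same seed
// and the same server-authoritative game time, they can all evaluate the same
// cloud coverage locally, with no per-frame network traffic.
float CloudCoverage(uint32_t worldSeed, double gameTimeSeconds)
{
    // Quantize time so tiny clock differences do not change the input.
    uint32_t step = static_cast<uint32_t>(gameTimeSeconds / 60.0);  // one value per minute
    uint32_t h = worldSeed ^ (step * 2654435761u);                  // Knuth-style hash
    h ^= h >> 13;
    h *= 0x5bd1e995u;
    h ^= h >> 15;
    return static_cast<float>(h & 0xFFFFu) / 65535.0f;              // 0..1 coverage
}
```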
We decided a long time ago that our multiplayer architecture would be an authoritative server. Everything related to the state of the game is simulated and verified by the server. Clients provide inputs, sync with the server, and interpolate the state as data arrives from the server. For example, the client can’t change the time of day, as it is dictated by the server. In some cases, we also force the view distance, so stronger hardware isn’t such a competitive advantage in multiplayer scenarios.
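A minimal sketch of this kind of client-side snapshot interpolation, with illustrative structures rather than Enfusion's actual networking types:

```cpp
#include <algorithm>

// Minimal snapshot interpolation sketch: the client renders slightly in the
// past and blends between the two most recent server states.
struct Snapshot {
    double time;         // server timestamp of this state
    float  position[3];  // replicated entity position
};

void InterpolatePosition(const Snapshot& older, const Snapshot& newer,
                         double renderTime, float out[3])
{
    double span = newer.time - older.time;
    double t = (span > 0.0) ? (renderTime - older.time) / span : 1.0;
    t = std::clamp(t, 0.0, 1.0);  // never extrapolate in this simple version
    for (int i = 0; i < 3; ++i)
        out[i] = static_cast<float>(older.position[i] +
                                    t * (newer.position[i] - older.position[i]));
}
```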
I think the most important aspect that will impact content creators is that we have implemented our replication layer, which doesn’t care about client and server but works with ownership instead. This way we, as well as modders, are able to write code and scripts that will work in multiplayer, as well as single-player scenarios. No more conditions about what should happen if you are a server or a client. This gives modders a lot of power because they will be able to write full multiplayer game systems. They will, of course, need knowledge specific to scripting in a multiplayer execution environment.
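To show what ownership-based code can look like in principle (all class and method names below are hypothetical, not Enfusion's API), the same write path works regardless of whether it runs on a server, a client, or in single-player:

```cpp
// Hypothetical ownership-based entity: game code asks "do I own this?"
// instead of branching on IsServer()/IsClient().
class ReplicatedEntity {
public:
    bool IsOwner() const { return m_isOwner; }

    // Only the owner mutates authoritative state; everyone else receives it
    // through replication. The same code path runs on a dedicated server,
    // a listen server, or in single-player.
    void SetHealth(float value)
    {
        if (!IsOwner())
            return;        // non-owners ignore the write
        m_health = value;
        MarkDirty();       // queue the change for replication to other peers
    }

    float GetHealth() const { return m_health; }

private:
    void MarkDirty() { m_dirty = true; }

    bool  m_isOwner = false;
    bool  m_dirty   = false;
    float m_health  = 100.0f;
};
```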
Modularity
80.lv: Could you discuss modularity? How can it be modified? Also, what tools did you develop to attract creators?
Pavel Šafář: There are several layers to modding in Enfusion. A few years ago, we implemented prefabs. They’re quite a generic concept nowadays, but our entities in prefabs can form complex structures, and prefabs themselves support hierarchies and inheritance on top. This gives you great power in how you organize your data. We even support prefabs of components and their inheritance, which is not that common in other game engines. And yes, even with prefabs, you can still decide that your particular instance will have some local changes.
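A toy model of prefab inheritance with local overrides, purely to illustrate the lookup order described above (Enfusion's real prefab format is richer, with typed components and nested hierarchies):

```cpp
#include <map>
#include <optional>
#include <string>

// Toy model of prefab inheritance: a prefab resolves a property by checking
// its own values first, then walking up its parent chain.
struct Prefab {
    const Prefab* parent = nullptr;
    std::map<std::string, std::string> properties;  // only locally overridden keys

    std::optional<std::string> Resolve(const std::string& key) const
    {
        auto it = properties.find(key);
        if (it != properties.end())
            return it->second;              // local override wins
        if (parent)
            return parent->Resolve(key);    // otherwise fall back to the parent
        return std::nullopt;
    }
};

// Usage idea: a "CarPolice" prefab inheriting from "Car" stores only the
// properties it changes, and an instance placed in the world can still
// override those again on top.
```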
Then we have configs, which are well known to the Arma community. Our configs offer the same capabilities as described above: you can inherit configs and you can have configs in configs. Data types are defined in script, so the engine can check if your config is valid. You also have a proper UI widget while editing the config in the editor.
Then there’s the modding capability itself. You can inherit a config or prefab in your mod and continue from there, or you can just override some part of it. We do not replace the whole prefab; we do merges. So you can have multiple mods changing a single config, for example.
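A sketch of the merge idea, using a flat key/value map as a stand-in for real, typed Enfusion configs:

```cpp
#include <map>
#include <string>
#include <vector>

// Merge-based modding sketch: each mod supplies only the keys it changes, and
// mods are applied in load order on top of the base config, so several mods
// can touch the same config without replacing it wholesale.
using Config = std::map<std::string, std::string>;

Config ApplyMods(Config base, const std::vector<Config>& modOverrides)
{
    for (const Config& mod : modOverrides)
        for (const auto& [key, value] : mod)
            base[key] = value;   // later mods win for the keys they override
    return base;
}
```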
All the above can be done in the editor, which is probably the biggest value add for content creators. They can easily mod a config. When they change a property, we display what’s been changed. Changes can be reverted (i.e. you can go back to the original file, etc.). Many features have been implemented directly into our editors to support this.
Additionally, the whole packing and publishing process is handled by the Workbench. You do not need to leave our editor to send your mod to a workshop where players can download it.
Future Plans
80.lv: What is your current roadmap? What new features would you like to add? Are there some bottlenecks you’re trying to deal with? What is your plan for the next year?
Pavel Šafář: Our current plan is to release a game demo that will demonstrate the engine’s capabilities. I also assume that we’ll be supporting it throughout the year. I don’t mean just fixing any issues people will find. We expect people will start playing with our engine and editor, which will surely result in many things we’ll need to deal with, including issues, questions, and new features to document.
Game engine development is a never-ending story and we already have many features being requested from our internal teams. We know about current bottlenecks that we’d like to address and we also have a few ideas on how to push Enfusion forward. But I don’t expect us to add something truly groundbreaking this year. If we do, we’ll certainly let you know.