This post is dedicated to brave developers and small independent teams creating custom technological solutions for game development.
Today we have a number of powerful engines available for free: Unreal Engine, Unity, Lumberyard, CryEngine, and others. These suites provide complete sets of tools that save developers the time they would otherwise spend building their own tooling from scratch. Let's look at Unity's numbers, for example. The team states that applications developed with Unity reach nearly 3 billion devices worldwide and were installed 24 billion times in the last 12 months. Developers worldwide use the engine in architecture, automotive, construction, engineering, film, games, and more.
As for Unreal Engine, it was used to create one of the world's most successful titles, Fortnite. In 2019, Fortnite grossed over $1.8 billion in sales (according to Engadget) which is more than any single-year sales total in videogame history. And that's just one of many examples.
A blast from the past: a quick two-minute video showing the possibilities of the original Unreal Engine (used for Unreal Tournament '99):
Yet some developers still find reasons to develop custom engines on their own, and we're not just talking about Guerrilla Games or Rockstar. Solo developers and indie teams often come up with solutions that work around the limitations of existing engines. Isn't the process too complicated for a small team? We've decided to contact a couple of teams developing their custom engines to learn about their reasons, challenges, time costs, and more.
Custom Engine from Jacopo Ortolani
We've recently published an interview with Jacopo Ortolani about his latest character. The most curious thing is that the presentation was set up with the help of a custom engine the artist has been developing along the way. The artist was kind enough to discuss developing his custom solution and explain his reasons.
Why did you decide to set up a custom engine for the presentation? Did you face some limitations when working with existing solutions?
The idea of writing a custom engine (it doesn't yet have a proper name, so I will just keep calling it "the engine") came to me while experimenting with Unreal Engine 4. That was back in 2015, and UE4 had some severe limitations related to the rendering of skin, eyes, and hair, preventing me from getting the results that I wanted. All of those things were fixed or greatly improved shortly after. However, by that time, I was already getting started with my quest to write my own game engine, and I figured it was worth pushing forward with it. I was having fun anyway, so there's my excuse.
One of the things that I couldn’t understand at the time was the fact that some of those problems with UE4 were not present in the previous iteration of the same engine. So, I decided to seek answers. That's how everything started.
While the idea of writing an engine was growing in my mind, I knew that 'game engine architecture' is an incredibly complex topic. But, when approaching something incredibly complex, one has to wonder “how complex can it be?” If people are doing this kind of stuff and I wasn’t the first one, as complex as it might be, it must be doable. So, that was my mindset while starting.
Could you talk about the core of your engine? How does it work?
When seeing the pretty pictures I could get out of my engine, one might be tempted to think that such an engine must be relatively advanced and, to some degree, might even be capable of competing with the industry heavyweights. The truth is a little bit less exciting than that, though. At this time, there's not much to talk about in terms of features: the poor thing doesn't even have a GUI yet. However, there's an argument to be made in terms of advantages over existing solutions: by writing a custom engine, one has full control over how everything works and looks. So, whenever I don't like the look of something, I can simply study a better solution for it. An example of this is the way shadows are rendered: when researching shadow rendering, the first and most naïve thing one is likely to come across is a technique called 'depth map shadows'. This technique is pretty simple to understand and implement, but it's hard to get nice-looking results out of it. So, it's only natural not to be content with it and to research fancier solutions.
The next technique that I was interested in is called 'exponential shadow maps'. This technique has a small performance cost and it allows for soft shadows. However, I soon realized that there was a catch in the shape of ugly shadow artifacts. There was no easy workaround for that, so I binned it in favor of another technique called 'moment shadow maps'. This is what I’m currently using and I’m satisfied with how it looks for the time being.
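To make the difference between these techniques concrete, here is a minimal sketch in plain Python (all function names are my own, hypothetical, and not from Jacopo's engine) of the classic hard depth-map test next to the exponential shadow map estimator, which replaces the binary comparison with a soft exponential falloff:

```python
import math

def basic_shadow(blocker_depth, receiver_depth, bias=0.005):
    """Classic depth-map shadows: a hard, binary in/out test."""
    return 0.0 if receiver_depth - bias > blocker_depth else 1.0

def esm_shadow(blocker_depth, receiver_depth, c=80.0):
    """Exponential shadow maps: exp(c * (blocker - receiver)) gives a
    soft falloff instead of a hard edge, clamped to [0, 1]."""
    return min(1.0, math.exp(c * (blocker_depth - receiver_depth)))

# A receiver slightly behind the blocker is fully dark with the
# binary test, but only partially darkened by ESM.
print(basic_shadow(0.50, 0.51))            # hard: 0.0
print(round(esm_shadow(0.50, 0.51), 3))    # soft: 0.449
```

The artifacts Jacopo mentions come from exactly this exponential approximation leaking light where blocker depths vary sharply, which is what moment shadow maps address with a more accurate depth distribution.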
One area in which my engine has an advantage over many existing solutions is what is called "order-independent transparency" (abbreviated 'OIT'), which is needed to render semitransparent surfaces and hair.
To that end, most modern engines use a very smart "cheat" that I would call 'dithered alpha cut + temporal anti-aliasing' (I don't really know if that's an official name for the technique, but it's pretty descriptive). The reason for doing that is that such a technique has a negligible impact on performance and works with the rest of the rendering pipeline by default. However, it has a tendency to look rough. That's the reason why it's pretty hard to get nice, soft-looking hair in most modern game engines.
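The "dithered alpha cut" idea can be sketched in a few lines. This is an illustrative Python mock-up (not any engine's actual shader code): each pixel compares its alpha against a screen-position threshold from a Bayer matrix, and temporal anti-aliasing then averages the resulting on/off pattern over frames:

```python
# 4x4 Bayer matrix; dividing by 16 gives thresholds in [0, 1).
BAYER_4X4 = [
    [0, 8, 2, 10],
    [12, 4, 14, 6],
    [3, 11, 1, 9],
    [15, 7, 13, 5],
]

def dithered_alpha_keep(x, y, alpha):
    """Keep the fragment if its alpha beats the screen-position
    threshold; TAA then blurs the on/off pattern over frames."""
    threshold = BAYER_4X4[y % 4][x % 4] / 16.0
    return alpha > threshold

# At alpha = 0.5, exactly half the pixels in a 4x4 tile survive, so
# the averaged result approximates 50% transparency.
kept = sum(dithered_alpha_keep(x, y, 0.5) for y in range(4) for x in range(4))
print(kept)  # 8 of 16
```

The "rough" look comes from the fact that, at any instant, each pixel is fully opaque or fully discarded; only the temporal average looks translucent.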
The technique I’m using, on the other hand, is called 'stochastic transparency', and it’s way better at blending semi-transparent surfaces with each other. The downside of it (and the downside of all proper OIT techniques) is that it requires a dedicated branch in the rendering pipeline and its performance cost can be felt pretty severely in some cases.
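Stochastic transparency swaps the fixed dither pattern for randomness: each sample survives with probability equal to its alpha, and averaging many samples converges to the correct coverage regardless of draw order. A toy Python sketch of that Monte Carlo idea (again hypothetical, not code from the engine):

```python
import random

def stochastic_coverage(alpha, samples, seed=0):
    """Each sample keeps the fragment with probability `alpha`.
    Averaging many samples converges to the true coverage, which is
    what lets semi-transparent layers blend in any order."""
    rng = random.Random(seed)  # seeded for reproducibility
    kept = sum(rng.random() < alpha for _ in range(samples))
    return kept / samples

print(round(stochastic_coverage(0.3, 100_000), 2))  # ≈ 0.3
```

The performance cost Jacopo mentions follows directly: many samples per pixel are needed before the noise of the random estimate averages out.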
The point is though, by creating a custom solution, one gets to make that choice which would otherwise be neglected.
Please tell us about your time costs. How long did it take you to figure out the code? What are your plans?
I started developing my engine in late 2015, and it took me three years to get it to where it is now. However, that was never a full-time job. I was always working on it in my spare time, alongside other things.
Currently, I'm not doing any programming or personal projects because, as a commuter working full time, I barely have any free time at all. It's very hard for me to tell if there is a future in which this engine will grow into something that can actually be used to make games: that depends on too many variables in my personal and professional life over which I don't really have much control. One can have many dreams at once, but there's only so much that can be turned into reality.
With that said, there’s still plenty of things that I would love to experiment with.
If I ever go back working on my engine, one thing I’d like to improve is the handling of PBR and, more specifically, the handling of metallic surfaces. Right now I’m not satisfied at all with how metals are rendered.
Then I would love to study a proper streaming system. Currently, each and every texture gets fully loaded prior to displaying anything on the screen. That’s fine for presentation purposes and small projects, but it would be pretty much impossible to make anything much bigger in scope. And then I would like to explore some fancy effects, such as depth of field, and some different OIT techniques.
Jacopo Ortolani, 3D Artist
Make sure to read the full breakdown here to learn more about Jacopo's approach to character art, modeling workflow, retopology, and more.
Developing daScript
We've also contacted Boris Batkin, the developer of daScript, a high-level, strongly and statically typed scripting language. The goal was to develop a fast solution that works as an embeddable 'scripting' language for performance-critical C++ applications like games. Let's learn more about daScript and how it can help improve performance.
Could you tell us about daScript? Why and when did you start working on it?
From very early on in my programming career, I wanted to make my own language. In fact, my first job in the videogame industry back in 2000 was to write a cross compiler from UnrealScript to C++. I've been thinking about one on and off ever since.
I have some early drafts and proofs of concept from as early as 2005. Back then, I worked at Naughty Dog, and they raved on and off about their custom Lisp-like language GOAL, which they had to abandon during the PS2-to-PS3 transition. There were a lot of productive discussions which gave me some ideas.
It was painfully obvious even then that it had to be an embedded language. Most game companies use a subset of C++ (a different one for each company). Whichever 'script' we adopt has to play nice with the existing codebase. If you think of C++ as a platform, a game-specific language has to be built on top of it.
Similar things have happened in other ecosystems. Java got Scala, and later Kotlin. The browser became a viable platform, with multiple languages sitting on top of JavaScript. .NET started with C#, but F# is very viable. daScript is on a similar trajectory with C++.
The game industry has needed a more practical solution for a while now. C++ is not an ideal programming language, to say the least. It's verbose. It has a complicated learning curve. Multiplatform C++ programming is reminiscent of traversing a minefield. Things tend to explode. However, it does allow for very robust code, and that's why we use it. Having a straightforward, fast-iterating solution is a very appealing proposition.
Witness the evolution of ID Tech Engine (1996-2018) with this video from GameForest:
Embedded languages typically used in the video game industry are dynamically typed with expensive interop: Lua (with LuaJIT), Squirrel, and Duktape/QuickJS (JavaScript), to name a few. In most embedded languages, better performance is usually achieved with a just-in-time (JIT) compiler, and JIT doesn't work at all on most closed platforms (such as consoles or iOS). Data marshaling becomes a performance problem very quickly, even with JIT enabled. Things are typically good enough for event-driven, message-based programming, and that's about it. It does not scale. As a result, the bulk of the heavy lifting is either done in C++ from the get-go or prototyped in the script and then rewritten.
There were several statically typed embedded solutions. UnrealScript was probably the most notable example, with a good history of success. However, those demonstrated scalability issues pretty quickly. There often comes a moment where "rewriting slow sections in C++" becomes the go-to solution for everything related to script performance. In the end, Unreal moved to Blueprints, which have their own set of tradeoffs.
C++-style OOP is a big hurdle on the road to good performance. Most languages are designed to support that or a similar programming model. However, a more data-oriented approach is necessary these days. A lot of studios are shifting to things like ECS frameworks, with a bunch of infrastructure to follow. Unity went with the Burst compiler on top of a subset of C# to address some of these problems. Jonathan Blow developed a whole new non-embedded language, Jai, to deal with the same set of issues. Languages designed to deal with this from the get-go should do well, and daScript aims to take things a step further in that direction.
When my friend Anton Yudintsev from Gaijin Entertainment approached me in 2018 about making a custom language for their ECS framework, I was sold. The request was specific enough to make it interesting; a general-purpose, big-idea language is more of an academic exercise: very hard to accomplish, and even harder to get adopted.
Unreal Engine's evolution might be even more impressive. Another great video from GameForest:
What are the advantages of using daScript? How can it be used when developing games? How does it affect performance?
daScript is fast. Its interop is dirt-cheap. It has a really fast interpreter. It runs at native C++ speed when compiled AOT (ahead-of-time). Basically you never have to rewrite anything from daScript in C++.
daScript is a safe, strongly and statically typed language with paranoid error checking. However, it has a very robust type-inference engine, so most of the time you don't have to decorate your code with type information. Safe code tends to look simple. Unsafe code needs to be explicitly marked and decorated.
daScript is an embedded language. It plays nice with your C++ codebase. It compiles with the same compiler on the same platforms. There is no issue of 'this platform does not support JIT' because it compiles to C++ ahead of time. It's heterogeneous when it comes to interpreted vs. precompiled: you can patch your precompiled-to-C++ scripts with interpreted code live. It provides very deep integration mechanisms that go way beyond simple binding, to make sure things like an ECS framework get all the additional information they need about how the data is actually used. We get to control how we interpret daScript, as well as what kind of C++ code we generate.
daScript is a language designed for productivity. It has a very fast compilation time. It supports hot reload. Typically, code changes are at the tip of your fingers; with the correct setup, you can start the application and develop entire new features without ever restarting it.
Developing a custom solution might seem too complicated for most developers, so what is it that makes one create something new and complex?
Some say necessity is the mother of invention. At some point, it becomes pretty obvious that no amount of evolutionary measures would allow making the next step in the quality of the development. Having the right tool for the job is a sign of good engineering.
In the case of daScript, it was pretty obvious that the information necessary for the ECS framework would be very hard to extract from a C++ program. The engineer would have to go through an error-prone process of decorating the code a certain way, and pay a maintenance cost for those decorations, to provide the framework with the information it needs to perform efficiently. Add the performance difference to the equation (the daScript prototype was 10 to 35 times faster than LuaJIT on the case study), and suddenly a new language seems like a worthy investment.
LuaJIT was a big inspiration. It's spectacular how much it could do for a language that was not designed with top-end performance in mind. Doing similar things with a language that can actually help, instead of getting in the way, is very rewarding. Beating the LuaJIT interpreter is no small challenge: it provides a good baseline benchmark, and no other interpreted language comes close.
When it comes to the language itself, ideas come from various places: Kotlin, Python, Ruby, Lisp, F#, HLSL (!!!) just to name a few. Having ‘something like that only crazy fast’ was one of the key ideas.
Typing less clutter was the other. Having simple but strong generics was a must from the get-go. Statically typed languages without generics and significant type inference are typically verbose and hard to write in. Declarative generic programming is extremely counterintuitive as well, so we went the other way, with compile-time conditions and explicit... well, explicit everything.
Focusing on functional programming and data design, as opposed to classic OOP, was another conscious decision. It has to be fast, and classic OOP is just not.
Unity Technologies shared the retrospective highlight reel below back in 2015 to show the progress they made in a decade:
Please tell us about your time costs. How long did it take you to set up the core? What are your plans?
It took about 3 months to get the first prototype up and running in the engine. It's been a tad over a year since, and the language has matured a lot. It now runs in production on both server and client, and the amount of code only grows.
Our current roadmap looks like this:
- Finish the language to spec. Currently, a few big things are missing:
  - variant types
  - strong pattern matching
  - native support for generators (yield)
  - native regex support
- Rewrite the daScript compiler and everything else (except the runtime) in daScript.
  - This should allow exposing the AST to daScript itself in read-write mode. Currently, type and function annotations and other complicated macros have to be written in C++.
  - It should also be a lot less verbose and way easier to maintain.
- GPU backend. Currently, AOT is limited to C++, but there is really no fundamental reason for that.
- Standard library and additional modules.
  - There are plenty of things missing in the standard library, especially in the functional part of the language. Once generators are in place, a lot of the missing higher-order functions should follow.
  - There are several de-facto standard libraries that could use daScript bindings out of the box: pugixml, RapidJSON, uriparser, etc.
- Optimizations. Lots and lots of optimizations.
  - daScript is already an optimizing compiler with significant capabilities, which will only grow.
  - daScript already has a blazing-fast interpreter. There is a good reason to make it even faster.
  - daScript outputs fairly decent human-readable C++ code for AOT. However, there are things we'd like to do to make the resulting C++ code more robust.
Even though it's too early for standalone daScript, it's already very obvious that an LLVM backend can produce significantly better code than the AOT backend. The daScript compiler knows things about user code (alignment, aliasing, dependencies, etc.) that are very hard to convey to a C++ compiler. So at some point, there will likely be an LLVM backend and the ability to use daScript for the full development cycle as a standalone language, but I don't see that happening very soon.
Boris Batkin, the developer of daScript
Make sure to read more about daScript here.
The Story of Voxel Farm
Voxel Farm is a procedural voxel engine that allows users to create massive worlds for games, movies, TV shows, and business applications, and lets players build and destroy at will in a true sandbox environment. The team's CEO and founder, Miguel Cepero, joined us to answer a couple of questions regarding their flexible platform.
What is the idea behind Voxel Farm?
Creating large 3D scenes is very difficult. I'm talking about scenes as large as entire cities, where you could potentially enter any building, go to any room, and find all sorts of objects in there, from a chair to a needle. Now imagine an entire planet full of such cities.
The main challenge is that this is an absurdly large amount of data. And 3D data is not only for rendering. You also want to understand the data, that is, how it has changed across time, which features are closer to others, etc. We created the Voxel Farm engine to solve this type of problem.
What's inside the core of your engine?
The Voxel Farm tech provides a unified way to deal with massive 3D datasets. Its main advantage is that it is capable of breaking down any type of task into tiny chunks that can all run in parallel. A single rendering or spatial query can be resolved in a small fraction of the time it would usually take.
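The chunking idea Miguel describes can be illustrated with a toy Python sketch. This is not Voxel Farm's actual API (all names here are hypothetical); it only shows how a query over a large volume becomes many small, independent per-chunk queries whose partial results are merged at the end:

```python
from concurrent.futures import ThreadPoolExecutor

def split_into_chunks(volume, chunk_size):
    """Break a flat voxel array into fixed-size chunks that can be
    processed independently."""
    return [volume[i:i + chunk_size]
            for i in range(0, len(volume), chunk_size)]

def count_solid(chunk):
    """A toy per-chunk spatial query: count non-empty voxels."""
    return sum(1 for v in chunk if v != 0)

# The whole-volume query runs as many parallel per-chunk queries;
# merging the partial sums gives the same answer as a serial pass.
volume = [i % 3 for i in range(10_000)]  # toy data: 0 means empty
chunks = split_into_chunks(volume, 1024)
with ThreadPoolExecutor() as pool:
    total = sum(pool.map(count_solid, chunks))
print(total)  # identical to the serial count
```

Because chunks share no state, the same decomposition works for rendering, edits, and the change-tracking queries described below.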
Our tech also integrates natively with rendering engines like Unity, UE4, UNIGINE, etc. This is a key feature: you do not want to re-invent the wheel when it comes to rendering, handling sound, mesh animation, VR/AR support, etc. With Voxel Farm, you get to leverage all the amazing things these mature products can already do.
Please tell us about some success stories behind your engine. What was it used for?
Our earliest success story is about user-generated content. The cost of creating large game worlds is very high. In traditional game worlds, everything a player experiences must be created by the operator of the game. The Voxel Farm tech, however, allows players to create content for other players, using simple yet powerful tools. This was the vision behind EverQuest Next and EverQuest Landmark, both products by Sony Online Entertainment, which adopted Voxel Farm around 2013. The player creations in Landmark surprised everyone in terms of their quality and sheer entertainment value.
We also had great success in the geospatial industry, which has very similar problems to the entertainment industry. It turns out if you are operating a mine, you want to track everything that is going on, and this is a vast amount of data as well. You want to ask questions like "this pile of dirt here, when and where did it come from?", "how many resources would we harvest if we dig here?" or "show me what has changed since last week". Since our tech is designed with this in mind, this has led to very successful applications in this industry. We are currently working with some of the largest mining companies in the world as they reshape how their operations run.
We are building a spatial platform using a platform-as-a-service model. Whether you are creating a game or dealing with a construction site, by using our platform, your teams can share and collaborate around all the 3D data involved in the project at once. This will be natively integrated into the game engines, and all the complexity of handling and rendering massive datasets will be virtually eliminated from your workflow.
Miguel Cepero, CEO/Founder at Voxel Farm
Learn more about Voxel Farm here.
The Story of DRAG
The fourth case worth mentioning is the story of DRAG. A couple of years ago, we published an interview (conducted by Kirill Tokarev) with brothers Thorsten and Christian Folkers, who are building a new racing game using their proprietary engine.
They started developing the engine in 2001 as a side project.
Thorsten Folkers: I would say the engine started to look promising around 2011, when so much effort had already gone into it that it would have been a shame not to use it. We then made plans to take it a lot further, but in order to build a usable engine, we also needed a use case.
The brothers managed to add flexible suspension animation, driving physics, wheel blur, local split-screen, an online mode, and more. The most impressive part, though, is probably the physics and the way their engine deals with vehicles. They built a proper four-wheel suspension physics model that simulates the car in a highly realistic way.
Thorsten Folkers: You can have understeer/oversteer like in a real rally car. The car is all-wheel-drive with a power split of 20-80 front to rear and has a 50/50 weight balance. Players experience weight transfer under braking or acceleration, which can cause lift-off oversteer. If you are going too fast and slam the brakes mid-corner, you will experience understeer, just like in a real car. As a racing driver, you can use all these effects to your advantage. We set up the car so that it has a perfect balance in terms of oversteer and understeer, but because it is a race car, you won't have electronic helpers like traction control or anti-slip regulation (ASR). Back in the golden era of Group B rallying, it was just about you and the machine.
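The weight transfer Thorsten describes follows the standard longitudinal load-transfer relation: the load shifted between the axles is m·a·h/L, where h is the center-of-gravity height and L the wheelbase. A small Python sketch with made-up car figures (these numbers are illustrative, not DRAG's actual parameters):

```python
def axle_loads(mass_kg, wheelbase_m, cg_height_m, accel_ms2, g=9.81):
    """Longitudinal weight transfer for a car with a 50/50 static
    balance: load shifts by m * a * h / L between the axles.
    Positive accel = accelerating (load moves rearward);
    negative accel = braking (load moves forward)."""
    static = mass_kg * g / 2.0               # per-axle load at rest (N)
    transfer = mass_kg * accel_ms2 * cg_height_m / wheelbase_m
    front = static - transfer
    rear = static + transfer
    return front, rear

# Hypothetical rally-car figures: 1200 kg, 2.5 m wheelbase, 0.45 m CG.
front, rear = axle_loads(1200, 2.5, 0.45, -8.0)  # braking at 8 m/s^2
print(round(front), round(rear))  # 7614 4158: front loaded, rear unloaded
```

The unloaded rear axle under braking is exactly what produces the lift-off oversteer mentioned in the quote: less vertical load on the rear tires means less available grip there.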
They continue developing their project and sharing progress on Polycount, so make sure to follow their thread, and don't forget to read our interview with Thorsten to learn more about DRAG.
The cases we discussed prove that developing a custom technological solution is not an impossible task. Yes, it's a complex task that requires a certain amount of dedication, but you can actually build something that will stand out and provide flexibility. You need to consider all the costs, of course, as figuring out the code always takes time.
Do you know any other curious cases we should feature? Please share them in the comments.
Interviewer and author: Arti Sergeev