VIZ Interactive's Jonathan Paquin Lafleur told us about The Depiction Engine and its capabilities, discussed the future of geospatial technology, and shared some use cases showing how it is being used nowadays.
The Depiction Engine
My name is Jonathan Paquin Lafleur. I am a Software Developer and the Founder of VIZ Interactive, located in Montreal, Canada. I have worked on 3D engines, CAD (Computer-Aided Design) software, and Augmented Reality experiences, to name a few.
VIZ is a 3D graphics engineering service provider; we accommodate a wide range of 3D software development needs, such as architecture, research, or prototyping, in outsourcing or co-development modes.
The Depiction Engine is a personal project I started in 2019. Having been exposed to a lot of geospatial projects, such as smart cities, where buildings and terrain data have to be rendered in real-time, I thought it would be interesting to bring such tech to more popular 3D engines like Unity and see what developers do with it.
I tried to abstract away the complexity of developing large worlds by extending Unity with a "64-bit Transform" implementation, allowing developers to create solar-system-sized scenes. Planets such as Earth can be generated in two clicks using data streaming mechanics and a curved terrain implementation that makes spherical planets possible.
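To illustrate the idea behind a "64-bit Transform", here is a minimal Python sketch (hypothetical names, not the engine's actual API). World positions are stored in double precision so coordinates stay accurate at solar-system scale, and objects are rebased relative to the camera before rendering, so the renderer only ever sees small offsets that survive a cast to 32-bit floats:

```python
from dataclasses import dataclass

@dataclass
class Transform64:
    # Double-precision world position (Python floats are 64-bit doubles).
    x: float
    y: float
    z: float

    def relative_to(self, origin: "Transform64") -> tuple:
        # Offset from the rendering origin; small enough to be handed
        # to a 32-bit renderer without visible precision jitter.
        return (self.x - origin.x, self.y - origin.y, self.z - origin.z)

# A building roughly one astronomical unit from the scene origin.
building = Transform64(149_597_870_700.0, 0.0, 25.5)
# The camera floats right next to it.
camera = Transform64(149_597_870_650.0, 10.0, 25.5)

# The renderer receives a tiny camera-relative offset instead of a huge
# absolute coordinate, avoiding 32-bit floating-point precision loss.
offset = building.relative_to(camera)  # → (50.0, -10.0, 0.0)
```

The key design point is that absolute coordinates never reach the GPU; only camera-relative offsets do.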
How Does It Work?
Like most real-time 3D rendering, it is a matter of outputting geometry density only where it can be seen. The environment streaming works like this:
- The planet's information, such as terrain and buildings, is partitioned into quadtree node structures, which are persisted in data sources, either locally on a drive or accessed remotely through a web service. The engine determines which nodes to load based on the position of the camera, making sure distant nodes are loaded at a lower LOD (Level Of Detail) than up-close ones to keep a relatively low memory footprint.
- Data sources can contain real-world information, such as elevation data describing terrain (Earth, the Moon, Mars, etc.) or OpenStreetMap vector data describing roads and buildings. Objects such as terrain tiles might require information from multiple data sources, such as elevation, color, or roughness textures, which will be combined into a single rendered object. Data sources can also be configured to serve procedural information on the fly, to create fully fictional landscapes or to supplement real-world geospatial environments where small details like rocks or grass are missing.
- Once all the information has been acquired, the data is parsed and processed asynchronously (if the build platform supports it) into mesh geometry, and the textures are passed to the shaders.
- When an object becomes invisible to the camera, it is disposed of. Disposed objects are relegated to an object pool, where they stay until they are recycled the next time a similar object has to be instantiated, and the cycle continues. Object pooling reduces memory allocations, which helps performance.
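The steps above can be sketched in a few lines of Python (hypothetical names and thresholds, not the engine's actual API): a quadtree is refined only near the camera, so nearby tiles get a higher LOD, and retired tiles pass through an object pool instead of being destroyed and reallocated:

```python
import math

MAX_LOD = 4

def select_nodes(cx, cy, x=0.0, y=0.0, size=1.0, lod=0):
    """Return visible (x, y, size, lod) tiles, subdividing near the camera."""
    # Distance from the camera (cx, cy) to the node's center.
    dist = math.hypot(cx - (x + size / 2), cy - (y + size / 2))
    # Refine when the camera is close relative to the node's size,
    # so distant regions stay at a coarse LOD.
    if lod < MAX_LOD and dist < size * 1.5:
        half = size / 2
        nodes = []
        for ox, oy in ((0, 0), (half, 0), (0, half), (half, half)):
            nodes += select_nodes(cx, cy, x + ox, y + oy, half, lod + 1)
        return nodes
    return [(x, y, size, lod)]

class TilePool:
    """Recycles disposed tiles so new ones skip reallocation."""
    def __init__(self):
        self._free = []

    def dispose(self, tile):
        self._free.append(tile)       # tile left the view: keep it for reuse

    def acquire(self, key):
        tile = self._free.pop() if self._free else {}
        tile.clear()
        tile["key"] = key             # same object, new tile identity
        return tile

# Camera near one corner of the world: tiles close to it subdivide down
# to MAX_LOD, while the opposite corner remains a single coarse tile.
tiles = select_nodes(cx=0.05, cy=0.05)
```

This mirrors the pipeline described above: node selection is purely a function of camera position, and the pool turns tile churn into object reuse rather than repeated allocation.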
The Future of the Technology
The data currently available is often expensive, of low quality, or incomplete. Formats like Google Photorealistic 3D Tiles will be the way to go for projects looking for high visual fidelity. Serving optimized 3D scans not only improves visual fidelity; in some cases it also improves performance by including all the necessary tile information in a single request.
The Depiction Engine is a tool I hope can help developers and enthusiasts quickly turn their digital twin/geospatial/real-world simulation ambitions into reality. I believe the ongoing shift towards digitalization is bound to create even more demand for real-world simulation software. Providing simple tools like the Depiction Engine is the first step towards a democratization of the field, and with more people involved, we can expect more tangible creative ideas and discussions. If you are interested in trying the engine, take a look at the project page.
I think incredibly immersive gaming experiences can be achieved when geospatial data is used to generate environments that are familiar to the user. Simulations such as Microsoft Flight Simulator or DCS(Digital Combat Simulator) are great examples of games that benefit from taking place in a real-world setting.
The concept of a "Metaverse" where users can explore and experience meaningful social interactions in a relatable and persistent digital representation of our world is another concept I think could be explored more easily leveraging technology like the Depiction Engine as a framework for large world generation.
Visually representing geographically distant information for logistics, manufacturing, architecture, physical security or smart city systems is also very promising.