Stargate Studios Malta's Francis Ghersci has shared a thorough breakdown of the Mosta Chapel project, explaining the capturing process, discussing the team's RealityCapture workflow, and showing how the scene was rendered with Unreal Engine 5.
Introduction
Hi everyone. It’s a pleasure and an honour to be here with you all. My name is Francis Ghersci, and I am the Head of the Creative and Content Department at Stargate Studios Malta. I have been with the studio for over a decade now, and during my time here I have focused primarily on writing and directing projects for a number of our local and international clients.
For most of my career, my skills have revolved primarily around storytelling, crafting audio-visual experiences, and guiding talented individuals towards a single creative vision: commercials, short films, music videos, corporate projects, and docudramas; whatever landed on my desk was a chance to explore new creative and technical territories. I also worked as a sound designer, producer and editor on a number of our projects, and always sought to complement my understanding of the arts with technical knowledge within our industry.
My journey into 3D art began in a directorial position. I was lucky enough to work alongside very talented CG artists on a number of my projects, constantly learning from them through the creative process. Yet, I always found myself spectating as the magic unfolded before me. That all changed around a year ago. Although I grew up with equal enchantment towards video games and cinema, I always found myself gravitating towards the former for leisure and the latter as a career path. Then, as game engine technology and real-time rendering progressed to a certain point, the disciplines inevitably overlapped to a degree that I could no longer ignore. I invested in my own high-powered workstation and dedicated my daily 10 PM to 1 AM (for a whole year) to learning Unreal Engine. It was tough, but I knew that as a filmmaker I too had to evolve with the times. I devoured online resources from very talented individuals like Unreal Sensei and William Faucher, Epic’s treasure trove of educational material, and more, and focused primarily on developing my knowledge of cinematography within a virtual environment.
My department is now focusing heavily on integrating game engine technology into the world of filmmaking, and we have since developed several Previs/Techvis sequences and digital environments for virtual recces and production design on some pretty large projects. I’d love to share them with you all here, but until the films are released to the general public, our work has to remain strictly confidential. We are also developing cinematic experiences like the Mosta Chapel video for cultural preservation in the digital realm, as well as incredibly large photorealistic digital environments for eventual integration in virtual production volumes (more on that later).
Stargate Studios Malta
Stargate Studios Malta joined the global Stargate network in 2012, and has since been offering high-end digital production and visual effects services to the international market. The incredible team has worked on a number of interesting productions from all over the world, many of which can be viewed in our latest VFX Reel below:
We’re now offering previs solutions, LiDAR and photogrammetry services, and even expanding our post-services wing to better support large-scale productions shooting on the island.
Examples of our other LiDAR and photogrammetry projects:
As for the project I am most proud of, I would like to share a short film that I directed on a very tight budget, and that was produced entirely in-house. It was created to showcase the talent of Architect and Designer Jonathan Mizzi and his team at Mizzi Studios, and to demonstrate how technological progress and cultural heritage are fully capable of co-existing. I hope that you will enjoy all of the disciplines of our studio combined into a single experience.
The Mosta Chapel Project
The Mosta Chapel project began as an R&D exercise for our growing digital-asset department. We wished to test our workflow, particularly when creating high-quality digital assets through LiDAR and photogrammetry on a tight deadline and without waiting around for “perfect” weather and lighting conditions. When offering such services to large-scale productions, it is often impossible to wait indefinitely for that overcast day (especially in sunny Malta), or hog film sets for two to three days to capture all the data. So we needed to be quick and needed to see when and where our efforts failed.
I was determined to capture a site that didn’t pose too many technical complications and that, if successful, would allow us to create an entertaining experience for viewers and potential clients. To do this, I entrusted the task to Andre Grima, an incredibly talented CG generalist and our studio’s former CG supervisor, who is now spearheading the development and application of this technology. He selected the Chapel of St. Paul The Hermit, a remote location etched into a cliff that can only be accessed after venturing through a long and winding gorge.
It was a pain in the neck to reach. However, if successful, it would reward us with an awesome digital asset. In addition to this, the cave-like nature of the space would allow us to build a fully digital environment with just our scanned data (and perhaps a little help from Quixel’s foliage Megascans assets).
The Capturing Process
The process began with a site visit to survey the area – trek one. Once there, the team calculated the minimum number of LiDAR scans that would be required, observed the orientation and path of the sun, and took note of any potential challenges. As for the unforeseen challenge? The area was surprisingly home to large flocks of wild pigeons; restless residents that would have definitely interfered with the capture process. After returning to the studio – trek two – the team reviewed the data and prepared their plan of action.
They awoke the following day hoping for the coveted cloudy skies that were forecast. Unfortunately, Murphy’s Law made its usual appearance, and we were greeted with the dreaded golden sunshine instead. Why is this a problem? When acquiring texture data through photogrammetry, one must really aim to shoot on overcast days and in soft lighting conditions. Direct sunlight creates harsh shadows that shift over time due to the lengthy scan process. This not only confuses the software and hinders point-cloud generation at later stages, but also results in shadows baked into the textures of the asset itself. For VFX matchmove assets and site survey models, perfect texture data might not be crucial; however, for high-quality digital assets destined for video games and film, this would definitely not be an acceptable outcome.
Luckily, the cave-like nature of the site provided a natural canopy, shielding the area from the harsh Maltese sunlight. In addition to this, the team set out early – trek three – to be on site before sunrise. Dawn and dusk, although not ideal due to the constant shift in light intensity, allow for soft global illumination that minimizes harsh shadows. As for the Pigeon Challenge, a loudspeaker was used to create momentary blasts of sound that would keep the prying birds at bay. When flapping wings had all but faded, photogrammetry began.
Peter Pullicino, a multi-talented junior member of our team and an already experienced photographer, cinematographer, and drone operator, was responsible for this process. He began “tiling” the environment in layers, starting wide with significant overlap between every image, and then repeating the process again and again, moving closer and closer. Be warned, this is a long, tedious and repetitive process that is crucial for the final result. It is also important to use a wider lens like a 24mm (on a full-frame DSLR), with an aperture no wider than f/11. A shallow depth of field should be avoided at all costs, and the shutter speed should be set to a value that prevents blur from camera shake. You could use a tripod to solve this, but you’d never finish the job. The low light of morning made matters worse. The ISO had to be lifted to properly expose the shots, but more ISO results in more noise, and noise is also something that should be avoided. Luckily, this can be reduced in post with software like Neat Image, and we also used a Canon R5 for this, a camera that handles noise pretty well, even at higher ISO values. Mastering this process involved constantly rebalancing the camera’s settings as the morning light increased, with Peter doing his utmost to retain a constant overall exposure throughout. When the sun had risen too high and the harsh sunlight had become an issue for photography, LiDAR stepped in.
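As an aside for anyone building a similar checklist: because all of these settings live in each file’s EXIF data, a capture set can be sanity-checked before leaving the site. The sketch below is purely illustrative and not part of our pipeline; it assumes the exifread Python package, and the folder name and thresholds are placeholders to be tuned per shoot.

```python
# Illustrative helper: flag captures whose EXIF settings fall outside the
# guidelines above (aperture wider than f/11, ISO too high, shutter too slow).
# Assumes `pip install exifread`; folder path and thresholds are placeholders.
from fractions import Fraction
from pathlib import Path

import exifread

MIN_F_NUMBER = 11.0         # aperture should be f/11 or narrower
MAX_ISO = 3200              # beyond this, noise gets hard to clean up
MAX_EXPOSURE_TIME = 1 / 60  # anything slower risks motion blur hand-held

for path in sorted(Path("captures").glob("*.jpg")):
    with open(path, "rb") as f:
        tags = exifread.process_file(f, details=False)
    try:
        f_number = float(Fraction(str(tags["EXIF FNumber"])))
        iso = int(str(tags["EXIF ISOSpeedRatings"]))
        exposure = float(Fraction(str(tags["EXIF ExposureTime"])))
    except KeyError:
        print(f"{path.name}: missing EXIF tags, check manually")
        continue
    if f_number < MIN_F_NUMBER or iso > MAX_ISO or exposure > MAX_EXPOSURE_TIME:
        print(f"{path.name}: f/{f_number:g}, ISO {iso}, {exposure:.4f}s needs review")
```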
Andre was responsible for this part and made use of our Faro M70 scanner to acquire 8 LiDAR scans of approximately 27 minutes each at different areas of the site. A good way to think of the technology is setting off a flare in the dark: point-cloud data is generated wherever the "light" hits, and the aim of the LiDAR operator is to finish the day with as few shadowy areas as possible. Significant scan overlap is another important factor, but given the 360° nature of the acquired data, this is generally achieved in most instances.
With LiDAR scanning complete, the team enjoyed a well-earned lunch break until the sun shifted behind the cliffs and the site was once again in shade. Then, after a few more blasts from the speaker to clear the gathering avian crowd, it was our turn to take flight – aerial photogrammetry. Peter made use of our DJI Mavic 3 Cine for this, a solid piece of equipment that boasts 20MP stills, a 4/3 CMOS Hasselblad camera, a 1/2-inch CMOS tele camera (which we didn’t use), and the option to shoot at f/11, ideal for the deep depth of field we so deeply covet. The same tiling technique was used as with the DSLR, but this time, the priority was to acquire the texture and structure data that neither the LiDAR nor the DSLR could reach. An example of this is the roof of the chapel, which we were unable (and, given our avian guests, rather unwilling) to reach.
8 LiDAR scans and 3,334 photos later, the satisfied team packed up their gear and made their way back to the studio – trek four – to safely store the precious data on our studio’s servers. I would also like to take the opportunity to thank Luke Zammit, David Farrugia, David Dimech and Miles Busuttil from the content department for assisting with carrying the gear and refreshments all the way to the chapel and back. Additional equipment was used in the form of tracking charts and balls, colour charts, etc., but getting into all that would be a little beyond the scope of this already detailed explanation.
Processing the Data and Preparing the Meshes
Processing and refining the vast amount of data is definitely the more laborious and time-consuming aspect of such an endeavour, and it is a process that really has to be explained to clients when last-minute requests with tight turnarounds are expected.
The sequence of events begins with bringing the LiDAR scans into software that registers and aligns the point-cloud data. We make use of Faro SCENE for this, the scanner’s proprietary software. And by we, I mean Andre, since he once again took care of this part for the Mosta Chapel project. The software uses the scanner’s generated metadata from its internal compass, GPS/GLONASS, and electronic barometer for sensing height, together with the point clouds themselves, to figure out where the scans should be projected and registered (aligned) in 3D space. It is mostly automated, and because of this, significant care is taken by the operator to ensure that this information is accurately captured on site. If, for some reason, the scans do not align automatically, there is a brief moment of anguish, after which manual registration has to occur. Believe me, it is not a fun process. The software then allows the user to view accuracy reports and also bring the several billions of points down to mere… billions. Clipping regions are applied at this stage to discard unnecessary points, and unwanted points caused by reflections from glass and other shiny surfaces are removed as well. We often also have to painstakingly remove slivers of human invaders generated by the odd passerby who wanders too close to the somewhat “strange” and seemingly unattended scanner… mid-scan.
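Faro SCENE handles all of this registration for us, but for readers curious about what “registering” two scans actually means, the open-source Open3D library is a handy sandbox. The snippet below is purely illustrative and not our production workflow: file names, voxel size and distance threshold are placeholder values, and ICP only refines an alignment that is already roughly correct, which is exactly why the scanner’s compass, GPS and barometer metadata matter so much.

```python
# Illustrative only: refine the alignment of one scan onto another with a
# standard point-to-point ICP pass using the open-source Open3D library.
# File names, voxel size, and distance threshold are placeholder values.
import open3d as o3d

source = o3d.io.read_point_cloud("scan_02.ply")   # scan to be moved
target = o3d.io.read_point_cloud("scan_01.ply")   # reference scan

# Downsample so the solver works on millions rather than billions of points
source_down = source.voxel_down_sample(voxel_size=0.02)
target_down = target.voxel_down_sample(voxel_size=0.02)

result = o3d.pipelines.registration.registration_icp(
    source_down, target_down,
    max_correspondence_distance=0.05,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
print("Fitness:", result.fitness)  # fraction of points that found a match
print(result.transformation)       # 4x4 matrix that moves scan_02 onto scan_01
```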
Registered Point Cloud in Faro:
Whilst this process is taking place, the images captured with the DSLR and the drone must also be processed and sorted. We first remove unwanted noise from the images using Neat Image and then import the material into Adobe Lightroom. Lightroom is a great tool for batch processing and is used in this workflow to remove any blurry or soft images from the project, balance exposures and colour temperature, and slightly reduce contrast for better-looking textures. Then, once both the LiDAR point clouds and the photogrammetry elements are good to go, we insert them into the software that generates the mesh.
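Culling soft frames can also be partially automated before the Lightroom pass. The sketch below is just an illustration of one common approach, using the variance of a Laplacian filter as a sharpness proxy; the paths and threshold are placeholders, and any flagged frames should still be reviewed by eye.

```python
# Rough sketch: flag likely-blurry captures by measuring the variance of the
# Laplacian (a common sharpness proxy). Paths and threshold are placeholders;
# soft frames are moved aside for a human to double-check, not deleted.
import glob
import os
import shutil

import cv2

BLUR_THRESHOLD = 100.0  # tune per shoot: lower = more tolerant
os.makedirs("review_blurry", exist_ok=True)

for path in sorted(glob.glob("captures/*.jpg")):
    image = cv2.imread(path)
    if image is None:
        continue
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    if sharpness < BLUR_THRESHOLD:
        print(f"{os.path.basename(path)}: sharpness {sharpness:.1f}, moving to review")
        shutil.move(path, "review_blurry/")
```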
To generate that mesh, we make use of RealityCapture, an incredibly powerful piece of software that was recently acquired by Epic. The steps are as follows. One first imports the registered point-cloud data, and because the Faro M70 captures its own images whilst scanning, the points are also coloured. Then, once the thousands of photos are imported into the project, RealityCapture spends a few hours sifting through all the material, aligning the images in 3D space to generate an updated and even more detailed point cloud. It is at this stage that one truly sees the benefit of employing both LiDAR and photogrammetry together: LiDAR is perfect for millimetric structural fidelity, whilst photogrammetry is ideal for filling in any gaps and allowing a more complete and truly wonderful visualisation of the site or object.
Point Cloud in RealityCapture with added photogrammetry elements:
Then, once more clipping regions have been applied, more unwanted points removed, any rogue point clouds assimilated into the main cloud through what are known as Control Points, and RealityCapture’s multitude of settings fine-tuned, the mesh is generated. This does take significant time and is highly dependent on the number of points and the level of detail required in the mesh. Once complete, and after the user has spent a few moments staring in awe at what they’ve created, one can also choose to bring down the number of triangles from, let’s say, 800 million to 300 million, to 20 million, and so on. This decision will mostly hinge on the intended application and use of the model itself.
Generated mesh in RealityCapture:
Then, for the final step, the photos that were used to further enhance the point cloud are projected onto the mesh to generate the textures: the diffuse and normal maps. In the case of the Mosta Chapel, we chose to settle on 50 million triangles and a total of eight separate 16K texture sections holding normal and diffuse data.
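For studios that need to repeat this pipeline regularly, it is worth knowing that RealityCapture also exposes a command-line interface, so the align, mesh, texture and export steps can be batched. The outline below is only a rough sketch: the flag names should be verified against the current CLI documentation, all paths are placeholders, and the LiDAR import step is omitted.

```python
# Rough outline only: batching the align -> mesh -> texture -> export steps
# through RealityCapture's command-line interface via Python. Flag names
# should be verified against the current CLI documentation; all paths are
# placeholders and the LiDAR import step is omitted for brevity.
import subprocess

RC_EXE = r"C:\Program Files\Capturing Reality\RealityCapture\RealityCapture.exe"

subprocess.run([
    RC_EXE,
    "-addFolder", r"D:\MostaChapel\photos",   # import the treated photos
    "-align",                                 # align images / build the point cloud
    "-setReconstructionRegionAuto",           # bound the area to be meshed
    "-calculateNormalModel",                  # generate the mesh
    "-calculateTexture",                      # project the photos as textures
    "-exportModel", "Model 1", r"D:\MostaChapel\export\chapel.obj",
    "-save", r"D:\MostaChapel\chapel.rcproj",
    "-quit",
], check=True)
```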
At this point, a 3D artist can also choose to import the model into software like ZBrush, Maya, or Blender to further adjust or fine-tune any undesired sections. However, do beware: 50 million triangles is quite a big ask for most software and consumer-level workstations, so be ready to further reduce the complexity of the mesh or split it into several parts if you wish to keep working on it.
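One practical option when a DCC refuses to even open the file is to decimate a copy of the mesh first and use that lighter version for blocking and checks (RealityCapture can of course also export lower-density versions directly). A small sketch of the idea using the open-source Open3D library, with placeholder file names and triangle counts:

```python
# Sketch: decimate a copy of the heavy scan mesh so it can be opened and
# inspected in a DCC. File names and the target triangle count are placeholders.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("chapel_50M.obj")
print("Input triangles:", len(mesh.triangles))

# Quadric decimation keeps the overall shape while collapsing small triangles
light = mesh.simplify_quadric_decimation(target_number_of_triangles=5_000_000)
light.compute_vertex_normals()

o3d.io.write_triangle_mesh("chapel_5M.obj", light)
print("Output triangles:", len(light.triangles))
```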
Rendering, Lighting & Presentation
I chose to present the Mosta Chapel in Unreal Engine 5 for a few main reasons. First and foremost, it’s our department’s current go-to game engine solution and what I chose to learn this past year. Beyond that, Unreal’s ability to handle photorealistic environments with immense ease, the free or very affordable community-generated tools available on the marketplace, and the wonderful drag-and-drop Megascans asset library are just amazing.
And speaking of amazing, Nanite is a revolutionary technology for handling such complex assets without needing to painstakingly optimise meshes and generate LODs. How can it be ignored? We literally just imported the mesh into the level, set up the virtual textures, and were good to go. I could freely navigate the world and enjoy our asset from afar, as well as every detail up close, with Unreal Engine seamlessly doing all the work behind the scenes. And we filmmakers do love our behind-the-scenes. As an aside, I did have to add some Megascans foliage assets to cover up the absolutely disastrous results of what LiDAR and photogrammetry generated from the Chapel’s actual foliage. Leaves swaying in the breeze should definitely be avoided at all costs when scanning. The railings, too, had to be re-modelled and added separately, since we couldn’t spend the time required on-site to capture such fine details. It was easier to just model a quick asset and drag it into the scene.
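For batches of scanned assets, even these import-and-enable steps can be scripted from the editor’s Python console. The snippet below is a hedged sketch rather than what we actually ran: the asset path is hypothetical, and the property names should be checked against your engine version’s Python API.

```python
# Hedged sketch (run in the UE5 editor's Python console): enable Nanite on an
# already-imported scan mesh. The asset path is hypothetical, and property
# names should be verified against your engine version's Python API.
import unreal

mesh = unreal.EditorAssetLibrary.load_asset("/Game/MostaChapel/SM_Chapel")

nanite = mesh.get_editor_property("nanite_settings")
nanite.enabled = True
mesh.set_editor_property("nanite_settings", nanite)

unreal.EditorAssetLibrary.save_loaded_asset(mesh)
```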
When it came to setting up the lighting and the environment itself, I cannot but mention our incredibly talented Mohamed Shawaf here. He recently joined our team as an Unreal Engine specialist, but we soon discovered his wealth of VFX knowledge and ability to just make stuff work and look fantastic. He has been instrumental in developing and customising tools for us to use within our environments and often solves so many of our technical issues. And always with a smile; such an overlooked quality in our industry.
I lit the scene using Mohamed’s custom version of Ultra Dynamic Sky and employed Lumen, UE5’s default global illumination and reflections system, to avoid the hassle of baking lights, placing reflection captures, setting up lightmass importance volumes, and so on. I could literally focus my energy on the joy of freely building a cinematic experience. And that is just what I did next. I chose a stock music track that set the mood for the experience I wished to create, set up my virtual cameras, tweaked my post-process volume, and started “shooting”.
To do this, I created a master sequence that would host the music track as well as control the major parameters of certain assets and systems. In this case, I wished to animate the time of day in a sort of time-lapse fashion. This would help me accentuate the beautiful lighting capabilities of Unreal Engine, as well as sculpt the various sections of the Mosta Chapel with different lighting conditions, showcasing in mere moments what is possible with such technology. I added the sky system’s time-of-day parameter to the sequence, set appropriate start and end values, hit space, and simply enjoyed the show. I then created sub-sequences for each shot and integrated them into the master sequence.
Master Level Sequence in Unreal Engine 5.1:
Shot Level Sequences:
This approach let me fine-tune moments of the edit without needing to move dozens of keyframes every time I felt a cut was too early or didn’t feature the right moment. It is simply wonderful to shoot and edit at the same time, with such a degree of control, and in real time. Once the image sequences were rendered out from Unreal Engine, I simply had to import them into a video editor like Adobe Premiere, and most of the work was done. I’d just add some titles and a few fades, and the video would be ready for upload. Or so I thought.
When it came to rendering within Unreal, this process was a little more problematic than anticipated. We had previously rendered out stunning PNG sequences and never really encountered any issues. Everything was set up in the Movie Render Queue as previously tested: the right resolution, adequate values for temporal and spatial anti-aliasing, console variables to push the quality in certain areas, and so on. One can find amazing tutorials online that really walk you through setting all this up. However, upon closer inspection, I noticed that something weird was happening to the textures in some rendered shots. The textures themselves seemed to be degrading with every frame, moving from rich and detailed to soft and smudgy. We thought it might be a texture-culling setting; however, this had also been disabled. Mohamed spent some time trying all sorts of things and then found the solution to our problem.
Texture Degradation:
High-Resolution Render Technique:
To fix this, we re-rendered the PNG sequences using the high-resolution setting. This process essentially splits every frame into a series of tiled sections, which also removes texture size constraints and memory limits on the GPU. The screen resolution was also set to 200%, allowing a maximum resolution of 4K per tile, which really boosted the final outcome. The slight drawback? This technique did not allow post-process volume effects such as lens flares and god rays. Naturally, I couldn’t let go of those cinematic traits, so to fix this, Mohamed simply composited the high-resolution PNGs with the standard ones to create a powerful blend of the two.
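For anyone who wants to try the same approach, the tiled high-resolution pass can also be set up through the Movie Render Queue’s Python bindings rather than the UI. The following is a hedged sketch rather than our exact configuration: the sequence, map and output paths are placeholders, and the class and property names should be confirmed against your engine version’s MRQ documentation.

```python
# Hedged sketch (UE5 editor Python): queue a Movie Render Queue job that uses
# the High Resolution tiling setting. Sequence/map/output paths are placeholders
# and property names should be checked against your engine version.
import unreal

subsystem = unreal.get_editor_subsystem(unreal.MoviePipelineQueueSubsystem)
queue = subsystem.get_queue()

job = queue.allocate_new_job(unreal.MoviePipelineExecutorJob)
job.sequence = unreal.SoftObjectPath("/Game/Cinematics/Chapel_Master")  # placeholder
job.map = unreal.SoftObjectPath("/Game/Maps/MostaChapel")               # placeholder

config = job.get_configuration()
config.find_or_add_setting_by_class(unreal.MoviePipelineDeferredPassBase)
config.find_or_add_setting_by_class(unreal.MoviePipelineImageSequenceOutput_PNG)

high_res = config.find_or_add_setting_by_class(unreal.MoviePipelineHighResSetting)
high_res.tile_count = 2       # each frame rendered as a 2x2 grid of tiles
high_res.overlap_ratio = 0.1  # overlap between tiles to hide seams

output = config.find_or_add_setting_by_class(unreal.MoviePipelineOutputSetting)
output.output_resolution = unreal.IntPoint(3840, 2160)
output.output_directory = unreal.DirectoryPath("D:/Renders/MostaChapel")  # placeholder

# Kick off the render in-editor
subsystem.render_queue_with_executor(unreal.MoviePipelinePIEExecutor)
```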
Compositing in Nuke:
Conclusion
Photogrammetry is primarily a highly technical process, so before actually lifting a camera (or buying one), it would really make sense to absorb as much knowledge as possible on the subject. Once again, one can find hundreds of valuable videos online that explain the steps in all of their complexity. That being said, do try to watch recent videos, since new software or updates might be available that drastically improve the process or quality.
After digesting it all, those still determined to dive into this exciting and time-consuming process should pick an “ideal” object that fascinates them. You are going to spend hours manipulating the data, so the subject matter should really be one that can sustain your attention and ward off the frustration that might build along the way. As for the subject’s qualities, the surface should definitely be matte, free from shine or reflections, and with enough texture complexity and variation to not confuse the software. Large blank areas should also be avoided when starting out. Then, if the budget to set up adequate soft lighting is tight, simply wait around for an overcast day. Capture your photos by rotating around the object with significant overlap, starting wide and moving closer with every rotation. Then, once you’ve captured, sorted and treated all of the images, you should probably download software like RealityCapture. Although an unlimited license for this software can be purchased, the price tag might feel somewhat hefty for beginners. Luckily, users can also download it for free and purchase credit that is only used when exporting the generated mesh. So all the time spent learning how to import the photos, generate the point clouds and meshes, and refine the overall process would only cost you time (and perhaps a slight increase to your electricity bill after all those hours of rendering). Then, when the mesh is at a stage where you’re willing to pay for it, that is when you reach into your pocket.
Quick renders from one of our large-scale virtual environments, still in very early development:
The team at Stargate Studios Malta is currently working on a number of exciting projects, from VFX shots for world-famous shows, to previs, to LiDAR and photogrammetry on some really BIG projects. However, as explained earlier, we are unable to divulge the titles at this stage. What I can share are some VERY early screenshots from our exciting in-house R&D projects. Our aim is to develop our workflow and technology for environment design using a game engine like Unreal. These gargantuan scenes would need to function as photorealistic assets capable of playing in real time, for both a traditional VFX pipeline and virtual production. We are also designing tools within the space itself to give filmmakers and art directors total control over colour palettes, lighting and weather conditions with a few simple clicks. At the same time, once this scene is complete, it is our intention to generate some IP of our own and create some awesome animated content to share with you all.
These are truly exciting times for our studio.