80 Level Research never tires of exploring futuristic topics and is happy to reveal another exciting study: the use of AR doesn't stop at games and entertainment (like Pokémon GO) or filters and masks on Snapchat and Instagram. Brands use AR for marketing and retail, changing the customer experience by adding another layer of interaction between a brand, its products, and the end consumer.
The rapid growth and development of the Augmented Reality market is undeniable.
Augmented Reality, where virtual objects overlay real-world scenes, falls under the umbrella term Extended Reality, which covers all reality-altering technologies: AR, VR, and MR (see the schema below for definitions of each term).
How does it work?
The technology behind AR is pretty transparent, and it's not hard to develop something in-house. For example, in fashion AR the real-time workflow is as follows (a code sketch follows the list):
- The camera captures an image of a person.
- An AI model processes it and builds a "skeleton" of body landmarks.
- A pre-made 3D file (e.g. a clothing item) is layered on top to fit that skeleton.
- Everything is rendered (usually on top of WebGL), with the uncovered parts of the skeleton used as a mask so the clothes look like the person is actually wearing them.
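To make this concrete, here's a minimal browser sketch. It assumes MediaPipe's JavaScript pose landmarker (the same BlazePose family mentioned below) and reduces steps 3-4 to a placeholder that just plots the detected skeleton; a production pipeline would deform and composite a 3D garment in WebGL instead.

```ts
import { FilesetResolver, PoseLandmarker } from "@mediapipe/tasks-vision";

// Load the pose model (the model URL is an assumption; any MediaPipe
// pose_landmarker .task file can be substituted).
async function initPose(): Promise<PoseLandmarker> {
  const vision = await FilesetResolver.forVisionTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@latest/wasm"
  );
  return PoseLandmarker.createFromOptions(vision, {
    baseOptions: {
      modelAssetPath:
        "https://storage.googleapis.com/mediapipe-models/pose_landmarker/pose_landmarker_full/float16/1/pose_landmarker_full.task",
    },
    runningMode: "VIDEO",
    numPoses: 1,
  });
}

export async function run(video: HTMLVideoElement, canvas: HTMLCanvasElement) {
  const landmarker = await initPose();
  const ctx = canvas.getContext("2d")!;

  const onFrame = () => {
    // Step 1: the camera image arrives through the <video> element.
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    // Step 2: the AI extracts the "skeleton" (33 normalized landmarks).
    const result = landmarker.detectForVideo(video, performance.now());
    // Steps 3-4 (placeholder): plot the skeleton where the 3D garment
    // would be fitted, deformed, and masked over the person.
    for (const lm of result.landmarks[0] ?? []) {
      ctx.beginPath();
      ctx.arc(lm.x * canvas.width, lm.y * canvas.height, 4, 0, 2 * Math.PI);
      ctx.fill();
    }
    requestAnimationFrame(onFrame);
  };
  requestAnimationFrame(onFrame);
}
```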
As an example, here's the exhaustive list of technologies and tools MOJOMOTO uses in AR development:
- 3D pose detection using the BlazePose GHUM Full AI model with MediaPipe.
- A mathematical model for landmark registration to the camera view (sketched in code after this list).
- The Filament graphics engine for 3D AR rendering, .glb geometry import, and deformation.
- The Filament scene graph for collision, animation, and cloth-simulation management.
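The landmark-registration item is worth a quick illustration. The sketch below is a simplified stand-in, not MOJOMOTO's actual model: a plain 2D similarity transform that aligns a garment's reference anchor points to the detected shoulder landmarks.

```ts
interface Pt { x: number; y: number; }

// Estimate scale, rotation, and translation that map the garment's anchor
// segment (refL -> refR, e.g. its shoulder seams) onto the detected
// landmark segment (detL -> detR, e.g. MediaPipe pose landmarks 11 and 12).
function registerToLandmarks(refL: Pt, refR: Pt, detL: Pt, detR: Pt) {
  const refDx = refR.x - refL.x, refDy = refR.y - refL.y;
  const detDx = detR.x - detL.x, detDy = detR.y - detL.y;
  const scale = Math.hypot(detDx, detDy) / Math.hypot(refDx, refDy);
  const angle = Math.atan2(detDy, detDx) - Math.atan2(refDy, refDx);
  // Anchor the garment's left reference point on the left detected landmark.
  return { scale, angle, origin: refL, target: detL };
}

// Apply the transform to any garment-space point.
function applyTransform(t: ReturnType<typeof registerToLandmarks>, p: Pt): Pt {
  const x = (p.x - t.origin.x) * t.scale;
  const y = (p.y - t.origin.y) * t.scale;
  const c = Math.cos(t.angle), s = Math.sin(t.angle);
  return { x: t.target.x + x * c - y * s, y: t.target.y + x * s + y * c };
}
```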
All SDKs and AI models are compiled and integrated with the PC, Android, and iOS development environments:
- PC development environment (MS Visual Studio, C++17, OpenGL, FBX SDK, and math libraries).
- Mac/iOS development environment (Xcode with Swift and Objective-C, native C++17 frameworks, and Metal).
- Android development environment (Android Studio with Kotlin, native C++ frameworks, and OpenGL).
AR has become very accessible to users thanks to AI. For Augmented Reality to work, your phone has to be able to measure the depth of the picture. Most modern smartphones don't have an actual depth sensor; they estimate depth with their cameras and neural-network-based software. So, with the right software, even single-camera smartphones can "do the thing" and be compatible with AR apps.
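In the browser, this software-estimated depth is exposed through the WebXR Depth Sensing API. Here's a minimal sketch (feature availability varies by device and browser, so treat this as an illustration rather than production code):

```ts
// Request an AR session that provides per-frame depth maps. On most phones
// these values come from cameras plus neural networks, not a depth sensor.
async function startDepthSession() {
  const xr = (navigator as any).xr;
  const session = await xr.requestSession("immersive-ar", {
    requiredFeatures: ["depth-sensing"],
    depthSensing: {
      usagePreference: ["cpu-optimized"],
      dataFormatPreference: ["luminance-alpha"],
    },
  });
  const refSpace = await session.requestReferenceSpace("local");

  session.requestAnimationFrame(function onFrame(_time: number, frame: any) {
    const pose = frame.getViewerPose(refSpace);
    if (pose) {
      const depth = frame.getDepthInformation(pose.views[0]);
      if (depth) {
        // Estimated distance, in meters, at the center of the view.
        console.log(depth.getDepthInMeters(0.5, 0.5));
      }
    }
    session.requestAnimationFrame(onFrame);
  });
}
```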
The AR content can also be triggered by something. For example, Niantic's Lightship VPS for Web enables location-based AR experiences in which virtual content is anchored to real-world locations. When users go to that location, they can access specific AR content right from the browser: once the camera is open, computer vision identifies the marker, triggers the experience, and places the augmented reality content on top of that marker.
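Schematically, the trigger loop looks like this; every helper here is a hypothetical placeholder for whatever computer-vision and rendering SDK is in use (Lightship VPS for Web, in Niantic's case):

```ts
// Hypothetical helpers standing in for a real AR SDK.
declare function grabCameraFrame(): ImageData;
declare function detectMarker(frame: ImageData): { pose: Float32Array } | null;
declare function renderContentAt(pose: Float32Array): void;

function triggerLoop(): void {
  const frame = grabCameraFrame();
  // Computer vision scans each frame for the marker (or a VPS localization)...
  const marker = detectMarker(frame);
  // ...and, once found, the AR content is anchored on top of it.
  if (marker) renderContentAt(marker.pose);
  requestAnimationFrame(triggerLoop);
}
```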
Co-Founder at Bods Inc, Oleksandr Shatalov
Today, there are many decent motion-capture programs that can track body motion and build a skeleton on the screen from regular video, without any sensors. It's done with the help of trained neural networks. You can then use this skeleton to add anything from face masks to clothes in AR. The technology is relatively easy and transparent.
Sneak peek at existing solutions
Given that almost every smartphone meets the AR requirements, and that investment in AR development is growing fast, the AR industry is becoming more and more competitive. This brings new solutions to the market, with some major companies investing in their own SDKs (software development kits).
It's important to mention that all these AR solutions integrate with the Unity game engine. When it comes to developing your own app or a specific solution for a brand or company, there is no need to build body-capture and rendering technology from scratch: the existing technologies offer frameworks that can be implemented in various projects.
Director of Product Management at Niantic, Tom Emrich
The devices (smartphones) that people have been carrying around in their pocket for the past decade continue to become more and more powerful AR machines. Smartphones continue to see improvements in cameras, chips, and displays. And of course, tools and platforms are being improved, too. The sweet spot in AR development has been reached, and it opens up more meaningful possibilities for brands, developers and end users alike.
Fly in the ointment
Current AR technologies and apps face serious hardware-related constraints. Even though modern smartphones have capable processors that improve every year, they aren't supercomputers: their technical capacity cannot provide stable, high-quality AR visualization with real-time rendering.
This also prevents artists from creating augmented virtual objects that actually look realistic. To achieve realism, a design has to be detailed and heavily textured, which requires a lot of polygons, which in turn increases file size. None of that goes well with AR: there are limits on both polygon counts and file sizes.
Creative Director at MOJOMOTO/fasHub Inc., Lars Rahbæk
There are definitely technical limitations in creating clothes. For example, when MOJOMOTO used Clo3D to make things look very realistic, they worked with 50,000 polygons. When they create projects for AR, however, a much lower polygon count is allowed. Moreover, on Snapchat, for example, there is a maximum of 8 MB for any uploaded filter.
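A back-of-envelope calculation (the vertex layout here is a rough assumption, not Snapchat's actual encoding) shows why those numbers collide:

```ts
// Rough size estimate for a 50,000-triangle garment mesh.
const triangles = 50_000;
const vertices = triangles / 2;        // ~2 triangles per vertex in a typical closed mesh
const bytesPerVertex = 12 + 12 + 8;    // float32 position + normal + UV
const indexBytes = triangles * 3 * 2;  // 16-bit index buffer

const geometryMB = (vertices * bytesPerVertex + indexBytes) / 1024 / 1024;
console.log(geometryMB.toFixed(2));    // ~1.05 MB of raw geometry alone
// A couple of 2048x2048 textures on top of that can blow past the 8 MB cap,
// which is why AR assets are decimated to a fraction of the source polycount.
```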
What does the future hold?
Considering where the future of AR lies, our interviewees claim that cloud rendering with edge computing, 5G, and AI will be the answer to hardware capacity limitations, because even with the rapid development of smartphones, it would still be difficult to fit a serious computing machine in your pocket. In the future, the problems of display and visual fidelity should be addressed as well, with devices optimized for high-resolution displays that approach human-eye resolution.
Director of Product Management at Niantic, Tom Emrich
AR experiences will continue to look and feel more real as a series of emerging technologies from various players come together to form the metaverse stack and usher in the next wave of computing. Through the combination of edge computing, 5G, and AI with advanced wearable devices, we’ll see continued blurring of the digital and physical worlds. More immersive and magical AR experiences will involve not only visuals, but also spatial audio and haptics and bring new meaning to specific locations in the physical world.