Santiago Montesdeoca, CEO of Artineering, talked about the new node-based real-time engine for stylized CG, discussed its core features and advantages for 3D artists, shared resources on how to get started with it, and mentioned key current integrations and future plans.
We are Artineering, a small startup developing software to create stylized and non-photorealistic 3D imagery and animations in real time. Building upon years of research in the field, we are passionate developers and artists creating the tools to produce any visual style imaginable within 3D applications.
I am Santiago Montesdeoca, Ph.D., the founder and CEO of Artineering. I did my bachelor’s degree in Audiovisual Media at the Stuttgart Media University (HdM) in Germany and my Ph.D. in Computer Graphics at the Nanyang Technological University (NTU) in Singapore. My background includes working at Lucasfilm Animation Singapore, Inria Grenoble and Entrepreneur First. I was originally a 3D artist, but I got into software development as I felt creatively constrained using 3D applications when I tried achieving different looks. I see limitless opportunities outside of photorealism, so I started Artineering in 2019.
Alexandre Bléron, Ph.D., is the lead developer at Artineering. He did his master’s degree at Grenoble INP Ensimag and his Ph.D. in Computer Graphics at the Université Grenoble Alpes, France. Alexandre's background includes working at CGG, Inria Grenoble, and NTU Singapore. He has always been fascinated by digital art and deconstructing it using computer graphics.
Adèle Saint-Denis is a developer and our latest addition to Artineering. She did her bachelor’s and master's degree at the Université Paul Sabatier in Toulouse specializing in Computer Graphics and Image Analysis. Adèle's background includes working at Unity, Inria Sophia Antipolis, and IRIT. She is passionate about creating beautiful pictures with code.
As an agency, we have contributed to developing the technology behind the look of a few projects, such as Fú by Taiko Studios and the Covid-related medical animations by AXS Studio. We are currently working with Nuctopus Studio, developing the tools to achieve the look for their next feature film and with Shad Bradbury on his passion short film Run Totti Run.
In animation and VFX, the image-processing to create a specific visual style happens mainly at the compositing stage, once the object-space (3D) information has already been rendered. If something in object-space needs to change to modify the style in comp, things have to be modified in the 3D application, re-rendered, and re-imported into the compositing application to visualize the style changes. This process is repetitive, time-consuming, and a hindrance to creative and artistic exploration.
Game engines speed up this process considerably, as they also perform image-processing within the 3D application itself, so any stylistic change is visible right away. However, they require everything to be imported into the engine, and the image-processing operations have to be scripted and are often hardcoded. This makes them difficult for non-technical artists to modify, or even create in the first place. Setting up post-processing pipelines becomes exponentially harder with stylized/non-photorealistic graphics, which require complex operations that depend on each other.
Flair was conceived to solve these issues by bringing to any 3D artist the ability to completely modify the way their renders look in real-time within multiple 3D/2D applications — with an accompanying artist-friendly toolset.
Image-processing pipelines that define a style within Flair run in real-time and are not hardcoded but instead defined through a node graph. Based on which controls are defined, the graph can also be augmented with a 3D toolset to interactively modify and art-direct arbitrary output variable (AOV) images, also known as gBuffers. This way, the style is controlled not only through global sliders in image-space, but also with procedural noises on materials, painting on objects, and custom volumes in object-space. This additional data is rendered onto custom AOVs/gBuffers in real-time and can be used extensively within the style to augment or change the rendered look.
In a way, Flair is meant to become the natural evolution of our Autodesk Maya plugin MNPRX. But instead of only providing hardcoded 3D styles, Flair is fully customizable and has a better toolset that works across applications. It is a graphics engine that can be plugged into 3D applications, game engines, and compositing applications, offering the advantages of node-based real-time image-processing on the GPU to otherwise offline or hardcoded workflows.
Node-Based Workflow in Flair
There are two ways of getting images into Flair. Read nodes use images stored on a drive, while Import nodes use images stored in GPU shared memory from other applications (e.g., Maya, or any future application that our plugin supports). These nodes are connected to different image-processing operations defined in Shader nodes, using the widely adopted and cross-platform GLSL shading language. Once the images have been modified and stylized, they are either saved to the drive or sent back to another application via shared memory on the GPU.
When we talk about image-processing in Flair, we often emphasize that it works 'across' applications. This is because artists can connect applications using Flair. For example, you can have a scene running in Maya and a Flair node in Nuke that imports the AOVs directly from Maya, processes them on the GPU, and outputs them into the Nuke composition. This all happens in real-time, without saving a single image to the drive.
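To illustrate what an image-processing operation inside a Shader node might look like, here is a minimal GLSL sketch of a posterize pass. The texture and uniform names are hypothetical and purely illustrative, not Flair's actual bindings:

```glsl
#version 330 core

// Hypothetical inputs for a Flair Shader node (names are illustrative):
uniform sampler2D gColorTex;  // image coming from an upstream Read/Import node
uniform int gLevels;          // number of tonal levels to quantize into

in vec2 uv;                   // screen-space texture coordinates
out vec4 fragColor;

void main() {
    vec3 color = texture(gColorTex, uv).rgb;
    // Quantize each channel into gLevels discrete steps for a posterized look
    color = floor(color * float(gLevels)) / float(gLevels);
    fragColor = vec4(color, 1.0);
}
```

In a node graph, a pass like this would sit between an Import node feeding `gColorTex` and whichever node consumes the stylized result downstream.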
There are many ways to control Flair.
The technically inclined can write any kind of GLSL shader (and, in the future, compute shaders) to do any sort of post-processing and compute work.
Shaders in Flair expose parameters that allow non-technical artists to connect nodes and modify the underlying algorithms without having to touch code.
Control definitions inside of Flair allow artists to request tools from a 3D application on-demand to modify object-space (3D) information and to create AOVs with the required data.
Creating Art Direction Tools with Flair
Technically inclined artists can write or modify the GLSL shader code within each node. Global parameters are also defined within the shader code and allow any artist to control the effects and underlying algorithms without touching the code, for example by increasing the intensity of the edges or reducing the amount of color bleeding.
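As a sketch of how such global parameters could be exposed, the GLSL below declares uniforms for the two effects mentioned above. All names (`gEdgeTex`, `gEdgeIntensity`, `gColorBleed`, etc.) are hypothetical assumptions, not Flair's actual shader interface:

```glsl
#version 330 core

uniform sampler2D gColorTex;   // rendered color image
uniform sampler2D gEdgeTex;    // hypothetical pre-computed edge AOV
uniform float gEdgeIntensity;  // artist-facing slider, e.g. 0.0 to 2.0
uniform float gColorBleed;     // amount of neighboring-color bleed, 0.0 to 1.0

in vec2 uv;
out vec4 fragColor;

void main() {
    vec3 color = texture(gColorTex, uv).rgb;

    // Color bleeding: blend with a slightly offset sample of the image
    vec3 bled = texture(gColorTex, uv + vec2(0.002)).rgb;
    color = mix(color, bled, gColorBleed);

    // Edge darkening, scaled by the artist-controlled intensity
    float edge = texture(gEdgeTex, uv).r;
    color *= 1.0 - clamp(edge * gEdgeIntensity, 0.0, 1.0);

    fragColor = vec4(color, 1.0);
}
```

Because the uniforms are declared in the shader code itself, a node UI can surface them as sliders so artists tune the effect without ever opening the shader.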
The style can also be controlled in object-space by painting parameters directly onto objects, using procedural 3D noises or using control volumes. This data is rendered on-demand and can be used within the style to modify effects locally. The object-space tools within the 3D application are automatically created from the control definitions within Flair. Therefore, adding a new control is a matter of naming it and specifying which image/channel the object-space parameters should be rendered onto. The creation of the tools is then handled by the Flair plugin within an application.
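A minimal sketch of how a control AOV could modulate an effect locally is shown below. The control texture, its channel layout, and the uniform names are assumptions for illustration, not Flair's actual control-definition format:

```glsl
#version 330 core

uniform sampler2D gColorTex;    // rendered color image
uniform sampler2D gControlTex;  // hypothetical custom AOV painted/rendered by the 3D plugin
uniform float gGlobalStrength;  // global slider for the effect

in vec2 uv;
out vec4 fragColor;

void main() {
    vec3 color = texture(gColorTex, uv).rgb;

    // The red channel of the control AOV drives the effect per pixel,
    // so painted objects or procedural noise can vary it locally
    float local = texture(gControlTex, uv).r;
    float strength = clamp(gGlobalStrength * local, 0.0, 1.0);

    // Example effect: desaturate where the combined strength is high
    float gray = dot(color, vec3(0.299, 0.587, 0.114));
    color = mix(color, vec3(gray), strength);

    fragColor = vec4(color, 1.0);
}
```

The same pattern generalizes: any effect parameter can be multiplied by a locally painted control value instead of being driven only by a global slider.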
Standalone Use and Integration with Other Tools
Flair can be used as a standalone for image-processing and compositing. However, it needs to be integrated into a 3D application to stylize the rendered results in real-time and provide the art-direction tools in object-space.
For now, we are concentrating on getting Flair to work well with Autodesk Maya and Nuke. However, we are planning on integrating it with Blender, Unreal, Unity, and other 3D applications in the future.
How to Get Started
Alpha testers are provided with a user manual and a series of tutorials to get started right away. There is also a growing list of example styles to modify and get inspired by. We are also doing live Q&A and pair programming sessions where anyone can join in.
In case of any questions, you also get access to a Discord server where you can directly ask us anything Flair-related.
We want to bring Flair out as soon as possible, but unfortunately, we have limited resources to develop everything we have planned for it. That is why we are currently running a second wave of alpha testing tied to a small user study to focus our future development. The user study will allow us to understand which features are the most sought-after, so that artists can benefit from using Flair in production.
Some of these features include compute nodes, loop nodes, group nodes, tiled rendering, offline (CPU) rendering, temporal super-sampling, a Python API, and more. By participating in the user study, you will gain insight into these features and the chance to vote directly on them to guide our future efforts!
If you wish to participate in the alpha testing phase and the user study, make sure to sign up here. You only need a Windows 10 computer, a dedicated GPU, and Autodesk Maya 2018+ to test all available features.
Based on your feedback, we will add the most popular features to the beta version, and once we iron out most of the issues, we will proceed with its release sometime in 2021. Once released, we will continuously expand the shader library, add new tools/features, and integrate Flair into other 3D applications and operating systems.