Santiago Montesdeoca, CEO of Artineering, talked about the new node-based real-time engine for stylized CG, discussed its core features and its advantages for 3D artists, shared resources on how to get started with it, and mentioned key current integrations and future plans.
We are Artineering, a small startup developing software to create stylized and non-photorealistic 3D imagery and animations, in real-time. Building upon years of research in the field, we are passionate developers and artists creating the tools to produce any visual style imaginable within 3D applications.
I am Santiago Montesdeoca, Ph.D., the founder and CEO of Artineering. I did my bachelor’s degree in Audiovisual Media at the Stuttgart Media University (HdM) in Germany and my Ph.D. in Computer Graphics at the Nanyang Technological University (NTU) in Singapore. My background includes working at Lucasfilm Animation Singapore, Inria Grenoble, and Entrepreneur First. I was originally a 3D artist, but I got into software development because I felt creatively constrained by 3D applications whenever I tried to achieve different looks. I see limitless opportunities outside of photorealism, so I started Artineering in 2019.
Alexandre Bléron, Ph.D., is the lead developer at Artineering. He did his master’s degree at Grenoble INP Ensimag and his Ph.D. in Computer Graphics at the Université Grenoble Alpes, France. Alexandre's background includes working at CGG, Inria Grenoble, and NTU Singapore. He has always been fascinated by digital art and deconstructing it using computer graphics.
Adèle Saint-Denis is a developer and our latest addition to Artineering. She did her bachelor’s and master's degrees at the Université Paul Sabatier in Toulouse, specializing in Computer Graphics and Image Analysis. Adèle's background includes working at Unity, Inria Sophia Antipolis, and IRIT. She is passionate about creating beautiful pictures with code.
As an agency, we have contributed to developing the technology behind the look of a few projects, such as Fú by Taiko Studios and the Covid-related medical animations by AXS Studio. We are currently working with Nuctopus Studio, developing the tools to achieve the look for their next feature film and with Shad Bradbury on his passion short film Run Totti Run.
Flair was conceived to solve these issues by giving any 3D artist the ability to completely modify the way their renders look in real-time within multiple 3D/2D applications — with an accompanying artist-friendly toolset.
The real-time image-processing pipelines that define a style within Flair are not hardcoded but instead defined through a node graph. Based on what controls are defined, the graph can also be augmented with a 3D toolset to interactively modify and art-direct arbitrary output variables (AOVs, a.k.a. gBuffers). This way, the style is not only controlled through global sliders in image-space, but also with procedural noises on materials, painting on objects, and custom volumes in object-space. This additional data is rendered onto custom AOVs/gBuffers in real-time and can be used extensively within the style to augment or change the rendered look.
In a way, Flair is meant to become the natural evolution of our Autodesk Maya plugin MNPRX. But instead of only providing hardcoded 3D styles, Flair is fully customizable and has a better toolset that works across applications. It is a graphics engine that can be plugged into 3D applications, game engines, and compositing applications — offering the advantages of node-based real-time image-processing on the GPU to otherwise offline or hardcoded workflows.
The technically inclined can write any kind of GLSL shader and, in the future, compute shaders to do any sort of post-processing and compute work.
Shaders in Flair expose parameters that let non-technical artists connect the nodes and modify the underlying algorithms without having to touch code.
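To make this concrete, here is a minimal sketch of the kind of GLSL post-processing shader described above — a depth-based edge darkening pass. The uniform and texture names are illustrative assumptions, not Flair's actual conventions; in practice, a uniform like `uEdgeStrength` is the sort of parameter a node could expose as a slider to artists.

```glsl
#version 330 core
// Hypothetical post-processing fragment shader; names are illustrative,
// not Flair's actual API.
uniform sampler2D gColorTex;   // rendered color image
uniform sampler2D gDepthTex;   // depth AOV/gBuffer
uniform float uEdgeStrength;   // artist-facing parameter (e.g., a node slider)

in vec2 uv;
out vec4 fragColor;

void main() {
    // Cheap edge detection from depth differences with neighboring pixels
    vec2 px = 1.0 / vec2(textureSize(gDepthTex, 0));
    float d  = texture(gDepthTex, uv).r;
    float dx = texture(gDepthTex, uv + vec2(px.x, 0.0)).r;
    float dy = texture(gDepthTex, uv + vec2(0.0, px.y)).r;
    float edge = clamp((abs(dx - d) + abs(dy - d)) * uEdgeStrength, 0.0, 1.0);

    // Darken detected edges over the rendered color
    vec3 color = texture(gColorTex, uv).rgb;
    fragColor = vec4(mix(color, vec3(0.0), edge), 1.0);
}
```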
Control definitions inside Flair allow artists to request tools from a 3D application on-demand to modify object-space (3D) information, creating AOVs with the required data.
Flair can be used as a standalone for image-processing and compositing. However, it needs to be integrated into a 3D application to stylize the rendered results in real-time and provide the art-direction tools in object-space.
For now, we are concentrating on getting Flair to work well with Autodesk Maya and Nuke. However, we are planning to integrate it with Blender, Unreal, Unity, and other 3D applications in the future.
We want to bring Flair out as soon as possible but, unfortunately, have limited resources to develop everything we have planned for it. That is why we are currently running a second wave of alpha testing tied to a small user study to focus our future development. The user study will allow us to understand which features are most sought after, so that artists can benefit from using Flair in production.
Some of these features include compute nodes, loop nodes, group nodes, tiled rendering, offline (CPU) rendering, temporal super-sampling, a Python API, etc. By participating in the user study, you will gain insight into these features and the chance to vote on them directly to guide our future efforts!
If you wish to participate in the alpha testing phase and the user study, make sure to sign up here. You only need a Windows 10 computer, a dedicated GPU, and Autodesk Maya 2018+ to test all available features.
Based on your feedback, we will be adding the most popular features to the beta version, and once we iron out most of the issues, we will proceed with its release sometime in 2021. Once released, we will continuously expand the shader library, add new tools and features, and integrate Flair into other 3D applications and operating systems.