Artomatix: AI-Powered Texturing Tool

The developers of Artomatix showed what the future of material production might look like.

About the tool 

Artomatix is a tool designed for texture, character and environment artists. It automates the most time-consuming, mundane jobs that artists are currently doing manually with other tools. On the surface, it resembles the tried and trusted tools that artists are already familiar with: Photoshop, Substance Suite, 3D Studio Max, etc. Under the surface, it contains the world’s most advanced Creative Artificial Intelligence, capable of taking the least valuable but most time-consuming work off an artist’s plate without them having to compromise on quality.

For the last six months, we’ve been really focused on putting the finishing touches on a full-featured, professional-grade tool. It’s been in alpha for the last two months, and we’re thrilled to see the excitement and praise it’s quickly gathered. We’re really happy to introduce the next generation of material authoring software, which we believe will quickly become an essential part of every texture artist’s toolbox.

Here we show a simple use case for Artomatix: we have an untextured army vest model and a camouflage material we would like to apply to it. Artomatix is doing two things here: (1) it’s taking in a small example of a material and extrapolating it out to cover a greater area by creating a new but similar version of that texture, without having to tile or repeat features, and (2) it’s doing this while being directed by the model’s UV space, so it’s creating new texture directly on the surface of the model, skipping the need for manual painting tools such as Substance Painter or 3D Coat.

Because Artomatix can create new textures based on examples, it can re-imagine new versions of this texture or, by changing the input example, it can quickly and automatically change the look and style of the final output without an artist having to manually redo everything from scratch.

This is just one feature within Artomatix. When designing the tool, we were mindful that artists need a fully customizable, non-destructive workflow, so we framed everything around a graph-based interface that lets artists script different Artomatix functions together to create new, more powerful operations.

This is an example of a typical node graph made within Artomatix. Starting with a 1K diffuse texture, we generate PBR maps before splitting in two directions:

(1) In the green frame, growing our material naturally to a 2K size before exporting.

(2) In the yellow frame, again growing our material, this time to 4K, while also applying an ignore mask to remove some unwanted features.

A node-based graphical user interface is only as powerful as the nodes themselves, so we’ve focused a lot of our attention on addressing some of the largest pain points in the industry. To highlight a few:

(1) Up-res: This node takes in a material and enhances the resolution by 2x or 4x. It uses a neural network that’s been trained to hallucinate plausible new details that didn’t otherwise exist in the original material.

(2) Texture Segmentation: As scan-based workflows become more commonplace, artists are spending more of their day drawing masks, which is quite time-consuming. We’ve developed a new feature which automatically detects the unique individual textures within a scan and automatically generates masks. These can be fed into a Texture Mutation node to create self-tiling, power-of-two versions of each texture within the image, or into a Texture Painter node so an artist can directly control how the different textures are composed.

(3) Texture Painting: In the last few years it’s become standard to paint textures directly on a model, rather than into UV-unwrapped images, so we knew we’d need to build on-model painting controls and extend our core synthesis functionality to work with input and direction from the artist. We’ve built a painting node that lets artists directly guide texture mutation in both 2D and 3D.

We’ve also developed a ton of other powerful nodes, such as automatic seam removal, compression artifact removal, and light gradient removal. We’ve even created a single node called “Material Generation” that offers a comparable feature set to other software packages such as Substance B2M, Knald, or Crazy Bump.
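To make the graph-based workflow above a little more concrete, here is a minimal sketch of how nodes could be chained into a non-destructive graph. The node names, the stand-in operations, and the whole API are hypothetical illustrations, not Artomatix's actual interface.

```python
# Minimal, hypothetical sketch of a node-graph pipeline in the spirit of the
# workflow described above. Node names and behaviour are illustrative only.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

import numpy as np


@dataclass
class Node:
    """A graph node: a named operation applied to the outputs of its inputs."""
    name: str
    op: Callable[..., np.ndarray]
    inputs: List["Node"] = field(default_factory=list)

    def evaluate(self, cache: Dict[str, np.ndarray]) -> np.ndarray:
        # Non-destructive: results are cached per node, source data stays untouched.
        if self.name not in cache:
            args = [n.evaluate(cache) for n in self.inputs]
            cache[self.name] = self.op(*args)
        return cache[self.name]


def load_diffuse(path: str = "diffuse_1k.png") -> np.ndarray:
    # Placeholder loader (path is unused here): random data stands in for the 1K diffuse map.
    return np.random.rand(1024, 1024, 3).astype(np.float32)


def grow_material(img: np.ndarray, size: int) -> np.ndarray:
    # Stand-in for example-based synthesis to a larger, power-of-two resolution.
    # Here we simply tile-and-crop; the real operation would synthesize new content.
    reps = -(-size // img.shape[0])  # ceiling division
    tiled = np.tile(img, (reps, reps, 1))
    return tiled[:size, :size]


# Wire up a graph resembling the one described above: one source, two export branches.
diffuse = Node("diffuse_1k", lambda: load_diffuse())
grown_2k = Node("grow_2k", lambda img: grow_material(img, 2048), [diffuse])
grown_4k = Node("grow_4k", lambda img: grow_material(img, 4096), [diffuse])

cache: Dict[str, np.ndarray] = {}
print(grown_2k.evaluate(cache).shape)  # (2048, 2048, 3)
print(grown_4k.evaluate(cache).shape)  # (4096, 4096, 3)
```

Because each node only reads its cached inputs, swapping the source texture or re-running a single branch never destroys earlier results, which is the point of the non-destructive graph.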

Working with scanned materials

It’s important to note that our tech is not procedural. It is based on Artificial Intelligence, which makes a big difference and has implications for the end-user experience when working with scanned materials. Our approach is to automate large parts of this workflow and dramatically cut down the time it takes.

Procedures, on the other hand, are handmade scripts that use math and random numbers (noise) to make art. These scripts need to be manually tailored to a specific piece of art, which is very labor-intensive.

Procedural tools present a number of challenges when applied to a scan-based workflow because they have no analysis or understanding of the content they’re grooming. Consequently, there’s no single general-purpose procedure that works well on arbitrary data, and the ones typically in use today suffer from artifacts and general quality issues. One such strategy is known as Texture Bombing, which takes image data as input and randomly mixes patches around, blending them together. This can sometimes work with very simple textures that have no structure or unique features. The other general-purpose strategy is known as Graph Cuts. This second approach is less about blending patches together and more about finding ideal cuts between patches which minimize seams. Graph Cuts have two problems:

(1) They don’t fully remove seam artifacts; rather, they reduce and redistribute them.

(2) Getting the best cut makes it difficult to get the size you want; typically, a games artist will want their final texture to be a power of two.

This is difficult with the Graph Cuts method without resizing the scans, losing both unique features and high-fidelity details. General-purpose procedures can never create anything new. When you work procedurally, you start with a fixed algorithm and have to tailor your data to it. With A.I., you start with the data and the algorithm tailors itself, so A.I. can be a one-size-fits-all solution that produces great results on a wide variety of scans. Where A.I. can sometimes struggle is when there’s very little data to extrapolate from: when a scan isn’t really a texture but just a few very unique features, the A.I. can’t find enough redundancy to learn the key aspects of that texture.
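For readers unfamiliar with the "ideal cut" idea mentioned above, here is a minimal sketch of a dynamic-programming seam between two overlapping patches, in the style of image quilting. It is a generic illustration of the cut-finding family of methods, not anything Artomatix ships.

```python
# Minimal sketch of the "find the best cut between two patches" idea that
# Graph Cuts-style methods build on: a simple dynamic-programming seam over the
# overlap region, as in image quilting. Generic illustration only.
import numpy as np


def min_error_vertical_seam(left_overlap: np.ndarray, right_overlap: np.ndarray) -> np.ndarray:
    """Return, for each row, the column index where the two patches should be cut."""
    # Per-pixel squared error between the overlapping strips (H x W x C -> H x W).
    err = ((left_overlap - right_overlap) ** 2).sum(axis=-1)
    h, w = err.shape

    # Accumulate the minimal cut cost from top to bottom.
    cost = err.copy()
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            cost[y, x] += cost[y - 1, lo:hi].min()

    # Backtrack the cheapest seam from bottom to top.
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(cost[-1].argmin())
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(cost[y, lo:hi].argmin())
    return seam


# Example: cut a 16-pixel-wide overlap between two random RGB strips.
a = np.random.rand(64, 16, 3)
b = np.random.rand(64, 16, 3)
print(min_error_vertical_seam(a, b))  # one column index per row
```

Note that the seam only relocates the boundary between existing pixels; it never invents new content, which is exactly the limitation described above.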

In general, though, I think it’s a huge testament to our technical approach that we haven’t even released a full-fledged product yet, and many leaders in the scanning space have already found ways to use us regardless. We’ve built a better mousetrap, and it’s fair to say we are getting an incredible response from the market.

We’ve been incorporating feedback from the Unity demo team for over 18 months to optimize our solution for the next generation of scanning workflows. Our web prototype was their method of choice (18:50 & 23:50) for grooming the scans they captured for their recent photogrammetry demo into self-tiling, ready-to-go assets.

At a high level, the approach we took toward solving scan-based workflow problems was not just technological; it was also a mindset. When Artomatix was founded, the industry was organized around two types of workflows: manual and procedural. The manual workflow is, of course, centered around human labor doing everything from scratch, while the procedural workflow meant running sets of custom programs that would make specific, niche pieces of art; so, basically, the industry had a people-and-programs mindset. At Artomatix we’ve always been more interested in the art itself, so we started looking ahead to a new, third type of workflow called “Example-Based”. Our concern was that once scan-based workflows became commonplace, artists would spend more and more of their time doing clean-up and less time doing real creation.

The philosophy behind an Example-Based workflow is that the user starts from raw examples. They give the computer (1) data and (2) high-level instructions on what to do with that data. Using the Artomatix approach, the computer grooms, extrapolates, and ideates upon example data, so the artist provides the high-level, sophisticated creativity and the computer contributes the low-level, mechanistic creativity. It’s a perfect symbiosis between artist and tool, where they empower each other. As for the technical features, we’ve filed over 200 pages of patent documents to date, so it’s hard to summarize all of that in a paragraph or two. I’ll just say that we chose Artificial Intelligence as a means of building our own vision for an example-based future, and I’ll go a little deeper into what that means later in the article.

Working with the scans

It’s important to explain that this works like a service: it does everything itself, without any manual input. We actually learned more about this problem, and ended up solving it, by working directly with some of the pioneers of the scan-based workflow. Given a material, Artomatix can mutate it and grow (or shrink) it to any size. As a result, we can always synthesize a texture with the desired resolution and DPI relative to the real world. Within the software, there’s a simple ruler widget that allows you to specify the input size of an element within your texture. You may then specify the output measurement, and Artomatix will constrain the resolution and DPI to match accurately.
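As a rough sketch of the arithmetic such a constraint implies (the function and its parameters are hypothetical, not the actual widget):

```python
# Rough sketch of the resolution/DPI arithmetic behind a ruler-style constraint.
# Function and parameter names are hypothetical, not Artomatix's actual widget.

def output_resolution(reference_pixels: float,
                      reference_inches: float,
                      output_inches: float) -> tuple:
    """Given a measured reference (a known feature spanning N pixels in the scan),
    return the pixel size and effective DPI for a desired real-world output size."""
    dpi = reference_pixels / reference_inches  # pixels per inch in the scan
    pixels = round(output_inches * dpi)        # pixels needed to preserve that DPI
    return pixels, dpi


# Example: a 12-inch ruler spans 3072 pixels in the scan (256 DPI);
# a 16-inch output tile therefore needs 4096 pixels to stay physically accurate.
print(output_resolution(3072, 12, 16))  # (4096, 256.0)
```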

In addition, Artomatix has a feature called “ignore masking”, where artists can mask out a region of the input they don’t want to appear in the final output. This feature can be used alongside the ruler widget to achieve a physically accurate real-world scale. All the user has to do is place an actual ruler in the scene before they scan it. When they bring the scan into Artomatix, they can use the scanned ruler as a reference for the ruler widget, taking any guesswork out of the process. To remove the ruler from the scan, they just draw a quick ignore mask over it.
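At the data level, an ignore mask amounts to excluding a region from the pool of example pixels. A minimal sketch, assuming a simple boolean mask; the array names and masked region are purely illustrative:

```python
# Tiny sketch of an "ignore mask": pixels marked True are excluded from the
# pool of example data used for synthesis (here, a strip covering the ruler).
import numpy as np

scan = np.random.rand(2048, 2048, 3)           # stand-in for the scanned material
ignore = np.zeros(scan.shape[:2], dtype=bool)  # nothing ignored by default
ignore[1900:2048, 0:600] = True                # rough region covering the scanned ruler

usable_pixels = scan[~ignore]                  # only un-masked pixels feed the synthesis
print(usable_pixels.shape)                     # (N, 3) pool of valid example pixels
```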

The algorithms

Under the hood, much of Artomatix is based on principles taken from information theory, or in other words, tracking the signal-to-noise ratio among different regions of a texture. We’ve spent a lot of time thinking about what makes a texture a texture and we’ve come up with a few simple guidelines.

(1) The simplest texture is homogeneous, meaning that if we were to look at two small patches of maybe 30×30 pixels anywhere in the texture, they’ll look like the same texture, e.g. a painted wall.

(2) More sophisticated textures are heterogeneous, meaning they’re made up of two or more homogeneous textures mixed together in some layout, e.g. a painted wall with some rust spots.

(3) Textures have a scale element that needs to be considered as well. Sticking with our rusty painted wall: at a very zoomed-in view, our texture could be only paint cracks or only rust, depending on where we zoomed in. As we zoom out, we have two separate textures, rust and paint. As we zoom out further, the distribution of paint and rust itself becomes a homogeneous texture. Basically, it all boils down to recognizing similar features and elements across an image and how the degree of recognition changes across space and scale.

When we say recognition, we’re talking about using neural networks to convert images into a set of feature vectors, or “features” for short. These are points in a high-dimensional space. By statistically analyzing the relationships among these points, we can detect and modify certain properties of a texture. For example, if there’s too much variance in our features, and that variance is larger for points far apart in the image than for points close together, we can assume that’s probably due to gradients. Alternatively, if the variance is too small, we’ll probably see lots of feature vectors clustering at or around a single point during synthesis, and we know that we’ve got repeating artifacts. When we do gradient removal, we want to contract our feature space; when we do repeat removal, we want to expand our feature space. It all boils down to optimizing the statistics. Things get more complicated when you take into account enforcing strict symmetry patterns and user drawing constraints, but at a high level, this is the underlying theory that glues it all together.
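As a rough illustration of this kind of feature-space analysis (not Artomatix's actual implementation), the sketch below pushes a texture through an off-the-shelf VGG network and compares feature distances for spatially near versus distant locations, along with the overall feature variance:

```python
# Rough illustration of feature-space statistics on a texture: extract deep
# features with an off-the-shelf VGG, then compare feature distances for
# spatially near vs. distant positions. Not Artomatix's actual implementation.
import torch
import torchvision.models as models

vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features[:12].eval()


def feature_grid(image: torch.Tensor) -> torch.Tensor:
    """image: (1, 3, H, W) in [0, 1] -> (h*w, C) feature vectors, one per location."""
    with torch.no_grad():
        fmap = vgg(image)                       # (1, C, h, w)
    c = fmap.shape[1]
    return fmap.squeeze(0).reshape(c, -1).T     # (h*w, C)


texture = torch.rand(1, 3, 256, 256)            # stand-in for a scanned texture
feats = feature_grid(texture)
h = w = int(feats.shape[0] ** 0.5)
grid = feats.reshape(h, w, -1)

# Feature distance between horizontally adjacent locations vs. far-apart halves.
near = (grid[:, 1:] - grid[:, :-1]).norm(dim=-1).mean()
far = (grid[:, w // 2:] - grid[:, :w // 2]).norm(dim=-1).mean()

# A "far >> near" gap hints at low-frequency gradients (contract the feature space);
# very low overall variance hints at repeats (expand the feature space).
print(float(near), float(far), float(feats.var(dim=0).mean()))
```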

Turning photorealistic into stylized 

The Style Transfer problem is basically one of converting two images, a style image and a content image, into the same feature space and then combining the two statistically, so that the coarse features more closely match those from the content image and the finer features more closely match those from the style image. Our goal is to help our users take scanned data and change its style to fit any artistic direction they might want to take. Style Transfer is a tricky problem, made extra tricky by the high computational demands of the process, so we haven’t just been focused on making Style Transfer work well but also on making it work fast. We’re breaking new ground in game development with this technology, and we’re excited to bring this feature to market. We’ve built it based on real-world input from artists in the field, and on incredible demand.
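For context, here is a compressed sketch of the classic optimization-based style transfer formulation (Gatys et al.), which follows the same coarse/fine feature-matching idea. It is illustrative only and, as noted above, far slower than what a production tool would need.

```python
# Compressed sketch of classic optimization-based style transfer (Gatys et al.):
# match deep "content" features to the content image and Gram-matrix statistics
# of shallower layers to the style image. Illustrative only.
import torch
import torch.nn.functional as F
import torchvision.models as models

vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = [1, 6, 11, 20]   # shallow-to-mid layers: fine "style" statistics
CONTENT_LAYER = 22              # deeper layer: coarse "content" structure


def extract(image, layers):
    feats, x = {}, image
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            feats[i] = x
        if i >= max(layers):
            break
    return feats


def gram(fmap):
    # Gram matrix of a (1, C, H, W) feature map, normalized by its size.
    b, c, h, w = fmap.shape
    f = fmap.reshape(c, h * w)
    return (f @ f.T) / (c * h * w)


content = torch.rand(1, 3, 256, 256)   # stand-in for the scanned (content) image
style = torch.rand(1, 3, 256, 256)     # stand-in for the target style image
target_content = extract(content, [CONTENT_LAYER])[CONTENT_LAYER]
target_grams = {i: gram(f) for i, f in extract(style, STYLE_LAYERS).items()}

result = content.clone().requires_grad_(True)
opt = torch.optim.Adam([result], lr=0.02)

for step in range(200):
    opt.zero_grad()
    feats = extract(result, STYLE_LAYERS + [CONTENT_LAYER])
    content_loss = F.mse_loss(feats[CONTENT_LAYER], target_content)
    style_loss = sum(F.mse_loss(gram(feats[i]), target_grams[i]) for i in STYLE_LAYERS)
    (content_loss + 1e3 * style_loss).backward()
    opt.step()
```

The per-image optimization loop is what makes this formulation expensive, which is why speed is called out above as its own engineering problem.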


Comments (4)

  • Another_CG_Artist · 4 years ago

    I think this has some amazing applications! I love your tech and your approach to the problems you're tackling. I see the application for this tech and its potential. Keep going, guys, and get demos out there for amazing artists to use, to get people to believe in your product. As FrogPride said, the only real way you are going to get this on the fast track to breaking into the industry is to get artists to start playing with your tech and making proof art. When you can get big-name artists to start flooding artstation.com with work using YOUR software, then YOUR name will start traveling. After that, Directors, VFX Sups, CG Sups, and VFX house owners will start looking at you as a real player.

    You are addressing real bottleneck cost problems in the current pipeline in a way that could save a lot of time and money. Now you need to get people with notoriety to try it. Artists love to make art... As a CG artist, if I were given a tool that would let me make amazing art in less time, I would jump on learning how to use it. It's also a chance to stress test your UI design and see if you made a tool where the end user is an artist or a programmer. ZBrush is amazing but the interface is god-awful... Mari is amazing but the interface sucks... There is a reason people gravitate to Apple vs. Windows when it comes to user experience.

  • Stephen · 6 years ago

    Looks very interesting! And for the other commenters: texture synthesis has been around for a while and has been used in a bunch of animated features, but it has yet to be made available to a mass market, so dismissing it just because the pictures mostly show proofs of concept is shortsighted. If anything, it's something to be quite excited about. There is no modern tool currently available that does this kind of thing for a mass market.

  • Max · 6 years ago

    Is a non-commercial, personal-use demo available?

  • P.G. · 6 years ago

    Very poorly designed paper. Everyone knows that the only purpose of AI in a texturing pipeline is to make a Diablo II HD remake, and there isn't even an upscaled Deckard Cain in the whole post.

