Building materials has become much easier over the years: specialised tools help artists create absolutely amazing textures for any kind of surface. Still, some artists prefer to hand-craft their textures. We were fortunate to talk with Brian Recktenwald from Naughty Dog, who walked us through the way he builds his materials. Brian was kind enough to share some advice with artists and explain how to create textures that look mighty impressive.
New Ways to Build the Materials
Since I mostly do modeling and scene assembly at Naughty Dog, I’m working on these material projects to learn and develop different skill sets at home. My overarching goal for these material studies is to create a high poly, mostly UVless workflow so I can rapidly model out scenes and scene assets and be able to quickly get photorealistic results by assigning these premade tileable materials. I picked the current batch of materials based on a few environments I’ve been thinking about during the production of Uncharted 4. I wanted to keep them generic enough that I could swap out different control maps and variations quickly so they can be used on multiple environments.
My process usually starts with gathering and studying as much reference as I can for the material I'm trying to create; understanding how surfaces reflect light and break up reflections is paramount to me. For the model, I aim to model as much as possible and then use displacement maps to get that next level of detail. Anything organic I make almost always gets several modeling passes in ZBrush. I then decimate and export to 3ds Max, where I do my material/texture work. For manmade assets, I still model everything out as much as possible and group the meshes by material. My philosophy is that if I model enough up front and create good silhouettes, the tileable materials will be the icing on the cake.
To combine the different materials, I use a separate mesh for each one. Since each of these shaders has a displacement map, I'm able to fade the transitions more naturally and actually use geometry to blend. By focusing on each core material, I can hone the look and physicality of each one, creating a modular shader workflow. This lets me mix and match all of these together without having to author 1:1 materials and textures. The main thing to remember when building these shaders is to hone the physicality of each shader in a neutral lighting environment, breaking down each core component into a separate shader that can be combined or blended with the others.
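Brian's exact shader network isn't shown, but a height-driven blend of the kind he describes, where displacement decides which material wins at the transition instead of a plain linear fade, can be sketched like this (the function and parameter names are my own, not from his setup):

```python
import numpy as np

def height_blend(height_a, height_b, mask, contrast=0.2):
    """Blend two materials using their displacement heights.

    The material with the greater mask-weighted height wins at each
    pixel, so e.g. mud settles into the cracks of a rock rather than
    fading uniformly across it.

    height_a, height_b : 2D arrays of displacement values in [0, 1]
    mask               : 2D array, 0 favors material A, 1 favors B
    contrast           : width of the transition band; smaller = sharper
    """
    # Offset each height field by how strongly the mask favors it.
    a = height_a + (1.0 - mask)
    b = height_b + mask
    # Soft comparison: how much taller is B than A, within the band?
    return np.clip((b - a + contrast) / (2.0 * contrast), 0.0, 1.0)

# Example: a flat layer B (height 0.5) poured over a bumpy layer A,
# with an even 50/50 mask, so height alone decides the winner.
h_a = np.array([[0.9, 0.1], [0.2, 0.8]])
h_b = np.full((2, 2), 0.5)
mask = np.full((2, 2), 0.5)
w = height_blend(h_a, h_b, mask)  # per-pixel weight for material B
```

The returned weight can then drive a lerp between the two materials' albedo, roughness, and normal maps; B shows through only where A's displacement dips below it.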
The UVless workflow I'm using is centered around high-poly modeling and rapid rendering. Before V-Ray 3.3, I used a combination of Neil Blevins' box mapping and blended box mapping techniques on the meshes to achieve most of the UVless workflow, but now that V-Ray supports triplanar projection, I've converted all of my materials to use that. It allows fairly seamless blending and minimizes artifacts, similar to how KeyShot handles its materials. I also use Mari to paint black-and-white masks with Ptex, which is UVless by nature, to blend between the various materials. By modeling everything out and separating as many elements as possible by material in the mesh, I remove the need to paint 1:1 textures or do any baking. Programs like Substance Designer and Quixel are fantastic for game models, where a lot of detail is baked down and separated by object ID masks, but my goal here is photorealistic high-poly work.
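V-Ray's triplanar node does all of this internally, but the core idea is simple to sketch: project the tiling texture along each world axis and cross-fade the three projections by how squarely the surface normal faces each axis. This is a minimal nearest-texel illustration of the technique, not V-Ray's implementation:

```python
import numpy as np

def triplanar_weights(normal, sharpness=4.0):
    """Per-axis blend weights derived from the surface normal.

    Raising to `sharpness` favors the dominant axis, which narrows
    the cross-fade region and reduces the stretched-texture look.
    """
    w = np.abs(normal) ** sharpness
    return w / w.sum()  # normalize so the three weights sum to 1

def triplanar_sample(texture, position, normal, scale=1.0):
    """Sample a 2D tiling `texture` at a 3D surface point without UVs.

    texture  : 2D array (H, W) treated as a tile that repeats forever
    position : (x, y, z) point on the surface
    normal   : unit surface normal at that point
    """
    h, w = texture.shape
    def tap(u, v):
        # Wrap the planar coordinates into the tile (nearest texel).
        return texture[int(v * scale) % h, int(u * scale) % w]
    x, y, z = position
    wx, wy, wz = triplanar_weights(np.asarray(normal, dtype=float))
    # Project along each axis, using the other two coordinates as "UVs".
    return wx * tap(y, z) + wy * tap(x, z) + wz * tap(x, y)
```

On a face pointing straight up (normal (0, 0, 1)) only the XY projection contributes; on a tilted face the projections cross-fade, which is what hides the seams a single planar projection would produce.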
The bake-down process also takes quite a while, since it requires both a high-poly and a low-poly model, and the best current method is still retopologizing the high-poly by hand to create the low-poly. However, a lot of the tileable control textures can be ported over to a game mesh and used with Substance or Quixel.
Currently, to create these tileable materials, I'm using Allegorithmic's Bitmap2Material, since my goal is photorealism and I have photographed a large library of textures. However, I'm interested in switching over to Substance Designer as soon as possible for added flexibility.
Building Great Materials
I feel the most important thing to remember about producing materials is to really study and break every single element down as much as possible. For example, a simple rock material could be broken down into a rocky stone material, a smooth eroded material, a dirt material, a reflective mineral material, a mud material, a moss material, a lichen material, and so on. Creating all those separate elements and blending them together in the shader (using Ptex masks and procedural techniques) saves a lot of time and is more reusable down the road. To make all of these materials work together, it is important to create them all at the right scale and in a consistent neutral lighting environment. If I hone them all in the same space together, they should require only minor tweaks to work in other lighting environments.
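The layering described above, rock at the bottom with dirt, moss, and lichen stacked on top, amounts to a bottom-up mask compositing pass. This is my own minimal illustration of that idea, not Brian's actual shader setup:

```python
def composite_layers(base, layers):
    """Stack sub-materials bottom-up.

    `base` is the bottom material's value at a point (e.g. an albedo or
    roughness channel); `layers` is a list of (value, mask) pairs applied
    in order, so later layers (moss, lichen) sit on top of earlier ones
    (dirt). Masks are 0..1, e.g. painted in Mari or made procedurally.
    """
    out = base
    for value, mask in layers:
        out = out * (1.0 - mask) + value * mask
    return out

# Rock base, half covered in dirt, then a thin moss layer on top.
rock, dirt, moss = 0.8, 0.4, 0.2
blended = composite_layers(rock, [(dirt, 0.5), (moss, 0.25)])
```

Because each sub-material stays a separate input, swapping the moss for lichen or repainting one mask changes the result without touching any of the other layers, which is what makes the breakdown reusable.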
Remember the Lighting
I take lighting and the reflected environment into account by using a neutral lighting environment. My main neutral setup is currently a simple studio: a curved grey backdrop, one area light, and a black background. I also like to test materials in sunlight and night lighting if the material is challenging. Finally, I use the physical V-Ray camera for more realistic exposure control, along with a nonlinear workflow that keeps things as flexible as possible after rendering.
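The article doesn't detail how a physical camera model turns its settings into exposure, but the standard photographic relationship that renderer physical cameras are modeled on, together with the end-of-pipe display transform that keeps grading flexible, can be sketched as follows (function names are mine):

```python
import math

def exposure_value(f_number, shutter_seconds, iso):
    """EV100: the standard photographic exposure value, at ISO 100.

    Closing the aperture one stop, halving the shutter time, or
    halving the ISO each raises the result by exactly one EV.
    """
    return math.log2(f_number ** 2 / shutter_seconds) - math.log2(iso / 100.0)

def linear_to_srgb(c):
    """sRGB display transform for a linear radiance value in [0, 1].

    Rendering and exposure adjustments stay in linear space; the
    display curve is applied only at the very end, which is what keeps
    post-render tweaks flexible.
    """
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * c ** (1.0 / 2.4) - 0.055

# "Sunny 16" settings: f/16 at 1/100 s, ISO 100 gives roughly EV 14.6.
ev = exposure_value(16.0, 1.0 / 100.0, 100.0)
```

Tweaking ISO, f-number, or shutter speed on a physical camera shifts this EV, which scales the linear image before the display transform, mimicking real exposure control rather than an arbitrary brightness multiplier.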