3D artist and 3D-scanning genius Christoph Schindelar talked about photogrammetry and Real Displacement Textures. It’s out of this world.
Hi, my name is Christoph Schindelar. I’m a professional 3D artist (generalist), trainer and developer of the Real Displacement Textures. I currently live and work in Vienna, Austria, where I enjoy the city and the good network we have here.
After playing around with game engines for some years, I turned my attention to Blender, 3ds Max and finally, in 2004, when I started studying at the local SAE, I discovered Cinema4D, which remains the software I mainly work with to this very day. In 2007, parallel to my work as an audio engineer for local TV studios, I started working on my first professional CG projects. Since then I have worked for many local companies, mostly as an independent freelancer, but also as part of teams on various assignments.
Furthermore, I work as a trainer for CG-Shop, which pushes me forward more than anything else, because I’m forced to explain WHY things work the way they do.
The Production of High-Quality 3D Models
There are certain plugins and tools that provide excellent solutions for specific tasks, but it’s not only that: different artists have different preferences.
My standard setup involves Cinema4D and Octane Render, which I use for about 80% of my work. These two are both extremely fast. C4D has one of the best-structured layouts I know, which speeds up my working process immensely, and Octane Render, combined with good NVIDIA graphics cards, is maybe the fastest physically based render engine on the market. If you combine these two with HDR Light Studio, you get an amazing real-time product-design studio.
Most of the time I try to rely on plugins within my main host, because this allows me to interact with all available parameters of the host program (for example, trees moving on collisions). For this approach I use DepitPlants. But if I just need a still plant, I’d use The Plant Factory (e-on software) or SpeedTree, because they are much more versatile (with the disadvantage that it’s not easy to create any interaction with simulations inside my host).

One tip: modeling artists should definitely look into voxel engines like the one in 3D-Coat. They give you the freedom of polygon-free workflows: you can easily add and subtract volumes without caring about the poly flow, and in addition you get an automatic retopo tool for finalizing and exporting your model. 3D-Coat also offers a painter, but in my opinion Quixel is better for painting. It works as a plugin directly inside Photoshop and allows you to use all the standard Photoshop features and more.
For cloth I recommend Marvelous Designer; for fluids, RealFlow (now available as a C4D plugin!); for sculpting, 3D-Coat; for hard-surface modelling or archviz I use Rhino and Cinema4D; for scattering, CarbonScatter or Vue; for landscaping, World Machine; for rendering, Octane Render; for tracking, SynthEyes; for texture/layer production, Substance; for compositing, After Effects or Nuke; for real-time work, Unreal or Unity…
There are a lot more tools I use in my standard workflow, some of them even custom coded. I simply love playing with new features and programs. It’s the combination of these features and thinking outside of the box that often produces unexpected and magnificent results.
Knowledge of physical principles helps a lot! You should know how photons interact with different structures. To set up a base material, I use one or two backlights (for reflections) and some kind of ambient light (for orientation). The look of the material depends strongly on the light sources, so I try to use natural light values and keep the setup simple for good handling.
First I import the color (diffuse) to see what I’ve got.
Then I set the color to black (reflections are most apparent on a black background) and tweak the roughness while the glossiness is at 100%.
Make sure you start with the roughness (which defines how blurry a reflection is), because the glossiness masks out a lot of information. Then I tweak the glossiness to define how strong reflections are (per pixel).
After that, I bring the color map back and start tweaking again while moving the light. Octane Render works great for this task, as do Substance and Quixel.
Now I’ve got a good, physically oriented base material, on which I can start to set up bump or normal maps, displacements, parametric features like dirt or worn edges, and much more.
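The order of these steps matters, and a toy example makes it easy to see why. In the sketch below (my own numpy illustration, not Octane’s actual parameters), roughness blurs a reflected highlight while glossiness only scales it, so tuning glossiness first would dim exactly the information the roughness pass reveals:

```python
import numpy as np

def specular_preview(env, roughness, glossiness):
    """Toy 1-D reflection: roughness widens the blur kernel
    (a blurrier highlight), glossiness scales the per-pixel
    reflection strength. Names are illustrative only."""
    radius = max(1, int(roughness * 10))          # blur radius grows with roughness
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return glossiness * np.convolve(env, kernel, mode="same")

# A single sharp highlight in the "environment":
env = np.zeros(101)
env[50] = 1.0

sharp = specular_preview(env, 0.05, 1.0)   # narrow blur, full strength
rough = specular_preview(env, 0.5, 1.0)    # wide blur, full strength
dim   = specular_preview(env, 0.5, 0.3)    # wide blur, 30% strength
```

Raising roughness spreads and lowers the peak, while lowering glossiness only darkens the whole reflection uniformly; that is why the roughness pass is done first, at 100% glossiness.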
I originally created the textures for the sole purpose of using them in my own productions. As mentioned before, the combination of various tools led to these results.
The beginning of RDT: one of my favorite fields is modeling landscapes, which requires a tremendous amount of scattering. Since I also work with tracking and photogrammetry, I noticed I could use surface scans with displacements as a foundation (e.g. gravel) instead of scattered objects. Then I started to develop a workflow to make these scans tileable. First I experimented with 123D Catch by Autodesk, but I soon switched to Agisoft Photoscan, which lets you tweak every parameter; its biggest advantage, though, is the ability to align several different scans into one big one. The main disadvantage of Photoscan is that it eats up a lot of memory (sometimes more than 100GB). One last tip: the quality of your scans depends heavily on the quality of your photographs. At the moment we use 42.5MP cameras with 35mm fixed lenses.
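The make-it-tileable step can be sketched with the classic half-offset trick: shift the texture by half its width so the outer columns, which were interior neighbours in the scan, become the wrap edge, then crossfade a narrow band across the old border that now sits mid-image. This is only a minimal numpy illustration of the principle, not the actual RDT workflow (which preserves detail at the seam rather than interpolating it away):

```python
import numpy as np

def make_tileable_x(tex, band=8):
    """Make a (H, W) height/texture map wrap horizontally.
    After np.roll by W/2 the outer columns were interior neighbours
    in the original, so the wrap edge is continuous by construction;
    the old border lands mid-image and gets a linear crossfade.
    Crude (the band loses detail) but it shows the principle."""
    h, w = tex.shape
    out = np.roll(tex.astype(float), w // 2, axis=1)
    c = w // 2
    left = out[:, c - band - 1].copy()
    right = out[:, c + band].copy()
    for k, col in enumerate(range(c - band, c + band)):
        a = (k + 1) / (2 * band + 1)      # 0..1 across the seam band
        out[:, col] = (1 - a) * left + a * right
    return out

# Demo: a horizontal ramp has a hard jump at its left/right border,
# but after the half-offset its wrap edge is continuous.
tex = np.linspace(0.0, 1.0, 64)[None, :].repeat(4, axis=0)
out = make_tileable_x(tex)
```

The same idea applies vertically; tiling two copies of `out` side by side no longer shows the hard edge the original ramp had.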
My main motivation is to provide textures of maximum quality, no matter how difficult the development process is, at a fair price. Since I’m an artist myself, I understand the necessities of the product. I think, at the moment, we are the only ones providing 8K textures with a “real” scanned 16-bit depth map, including roughness, glossiness, high-gloss, AO, bump and normal maps plus a nicely de-lit diffuse albedo map. This makes our textures very flexible in use.
Within my productions as an artist, RDTs have saved me from countless hours of work and delivered extremely realistic results.
Another amazing feature of RDTs is that you can use them as brushes for sculpting and texture painting (e.g. Substance Painter lets you use depth, color and roughness at the same time on one brush).
One of the main development goals of the computer industry is realism. Right now there are new developments in multiple areas, such as texture-streaming solutions (e.g. Granite or Amplify Textures) that enable you to process the huge amount of information photogrammetry provides. At the same time, the shading quality of engines is rapidly improving; nearly all engines now support PBR.
VR solutions are also getting more and more popular. This makes future animations so realistic that they become more than simple games. Imagine students in the future experiencing history instead of simply learning about it: VR would allow them to travel to historic places, even those that don’t exist anymore. It can also be used for therapeutic purposes, e.g. to fight a fear of heights, and more. The possibilities are endless.
I see myself as a small part within this development.
To answer your last question: no, they’re not too heavy. With our set of layers you can tweak materials for nearly all engines. You can start with the simplest setup, using just the color and normal map in 4K, and it will look great and run even on a smartphone; but you can also go as far as full PBR including displacement, reconstructing the surface so it looks like a highly detailed, photo-realistic object. We’ve got demos online, like the new UE4 demo, so feel free to check them out.
The Future of Photogrammetry
Everyone interested in realism has an eye on photogrammetry. (By the way, this technique was first announced in 1860 in a German architecture paper. At its core it’s Pythagoras, nowadays combined with modern optical-tracking algorithms.) Consequently, we are digitizing the world around us.
In my opinion, good photogrammetry scans currently deliver better results than many of the highly expensive laser scanners. Take a look at modern full-body scanners: they also have the advantage of capturing a whole object in less than a second.
Maybe, if laser scanning becomes more affordable, we will combine the two techniques in the future: scan the surface depth with the laser (which isn’t affected by transparency) and add high-resolution images taken with photo cameras (Agisoft can match both).
I think one of the reasons photo-scanned objects look so good is the very small micro-shadows baked into the diffuse map. These shadows are so small that a normal engine could not calculate them in reasonable time, but together they make the difference. What’s the advantage of tileable scanned materials? You can map them onto any convenient geometry and it will look as if it was scanned.
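That “map onto any geometry” point boils down to wrapped UV lookups. In this minimal numpy sketch (my illustration, not engine code), the modulo is what lets one seamless tile cover a surface of any size:

```python
import numpy as np

def sample_tiled(tex, uv):
    """Nearest-neighbour lookup into a tileable (H, W) texture.
    UVs outside [0, 1) simply repeat the tile thanks to the modulo,
    so any UV layout works without visible borders."""
    h, w = tex.shape
    u = np.floor(uv[:, 0] * w).astype(int) % w
    v = np.floor(uv[:, 1] * h).astype(int) % h
    return tex[v, u]

tex = np.arange(16.0).reshape(4, 4)
# UVs exactly one whole tile apart land on the same texel:
uv = np.array([[0.25, 0.0], [1.25, 0.0], [-0.75, 0.0]])
texels = sample_tiled(tex, uv)
```

A real engine would add filtering and mip-mapping, but the wrap behaviour is the same as GL_REPEAT texture addressing.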
I do not think texture artists fear losing their jobs; it’s the opposite: I see RDTs as an extension of their arsenal. Many of my clients are texture artists and seem to be highly content with the product. By the way, we are partners of Allegorithmic (Substance).
Right now we are working on over 60 new scans of forest floors, bark, roots, moss, leaves and more, with incredible detail, which we are aiming to release this year. We also had the idea of developing a scan robot that does the photo job fully automatically and should be available at low cost. I’m very happy about my recent success; it’s the best motivation to go on with the RDT project.