Sébastien Van Elverdinghe discussed his approach to creating great-looking 3D rocks from large sets of photos. For a more detailed look, check out his tutorial at gumroad.com/sebvhe.
Hi, I’m Sébastien, I am from Brussels, Belgium, home of the best fries, beers and chocolate! I am currently working as an environment artist at Starbreeze Studios, Stockholm, Sweden. I have previously worked at Playground Games on Forza Horizon 3 which was for me a great opportunity to implement photogrammetry into a AAA game.
I have been playing a lot with photogrammetry over the last 4 years. I quickly started focusing on textures and materials which were pretty unheard of back then. About a year ago, I released a tutorial on How to Make Textures From Photogrammetry.
Even though my workflow has changed a bit since the time of writing, it is still relevant to what I am going to say today. If you find anything unclear in this interview, it is likely explained in detail in my tutorial here: https://gumroad.com/sebvhe
My very first tileable scan in 2014 (left) and one of my latest rock scans (right)
It’s probably because people only think of photogrammetry as a way to get a static scanned mesh into a game. Whenever they try to make scanned environments, they always have to deal with massive 4-16k unique textures on very non-modular, unique meshes. A good example of this would be the UE4 Kite demo. While this is fine for cinematics like Kite, I think it’s a pretty bad approach for realtime applications, especially on large surfaces.
This is something I have been focusing on a lot when working on my UE4 Marketplace Rock Texture Set. Let’s pretend you are doing a small rock canyon. Scanning a set of 4-8 large rocks that you try to put together as a canyon would be the wrong approach; you’ll easily end up with at the very least 4 unique 4k textures. Instead, I’d focus on getting one very good tileable rock texture capturing really nice shapes in its heightmap. Then I would just create a basic canyon “envelope mesh” that I would displace using that texture. That way the whole scene uses only one 4k texture for the rocks.
A quickly created canyon “envelope mesh” displaced in UE4 using one tileable rock texture and some snow for eye-candy.
It is totally possible to pre-displace your rocks in a 3D software using your height map and import an optimised mesh instead of using tessellation; it all comes down to how important the environment is and the resources you have.
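As a rough illustration of the pre-displacement idea, here is a minimal sketch (the function name and parameters are my own, not part of any engine or DCC API): sample the height map at each vertex of a flat grid and push the vertex up by the stored height.

```python
import numpy as np

def displace_grid(heightmap, cell_size=1.0, height_scale=1.0):
    """Displace a flat grid of vertices along Z using a heightmap,
    one vertex per texel for simplicity. Returns (N, 3) positions."""
    rows, cols = heightmap.shape
    xs, ys = np.meshgrid(np.arange(cols) * cell_size,
                         np.arange(rows) * cell_size)
    zs = heightmap * height_scale
    return np.stack([xs.ravel(), ys.ravel(), zs.ravel()], axis=1)

# Tiny 2x2 heightmap as a smoke test
hm = np.array([[0.0, 0.5],
               [1.0, 0.25]])
verts = displace_grid(hm, cell_size=2.0, height_scale=10.0)
```

A real pipeline would then decimate the displaced mesh and rebake, but the core operation is just this lookup-and-offset.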
I believe this method is much more memory friendly, incredibly faster than scanning several meshes, and allows for much quicker iterations. If you want to make it a desert canyon, just scan one desert rock texture.
Just imagine how many meshes you would need to scan just to change the biome, here it only takes one texture (plus the snow – sand – moss)
You have to be clever about hiding or removing seams though. My tessellation material allows for 100% seam removal.
In the end, photogrammetry can be very flexible, you just have to think outside of its main “rigid” use.
It is true that aside from a camera and a computer you don’t need anything. A good analogy would be that in order to play guitar you don’t need a 2000$ Gibson; a simple 100$ guitar can get you really far. In the end, skills matter way more than gear. I actually have some of the most basic gear for photogrammetry. I’d love to upgrade soon, but so far I think the constraint of bad gear has helped me find clever ways to improve poor scan quality.
All of my scanning work has been done using a Canon 100D/Rebel SL1, one of the cheapest DSLRs on the market at ~400$.
That being said there are things you have to consider if you want to improve your scan quality.
Manual Exposure : While technically you could use a basic camera with only automatic settings, you’ll be really limited without manual exposure. You want all of your shots to share the same settings, otherwise your camera is going to compensate for darker shots, making it much harder for the software to align pictures and for you to remove lighting information. These days almost every camera / phone allows for manual exposure.
A good SD card / writing speed : This may not be obvious at first but it can change your world. For a long time I had a slow SD card and had to wait about 5 seconds between shots (once the camera’s buffer memory was full). 5 seconds between shots is a long time when taking hundreds of pictures.
RAW files : This is the number one reason I’d recommend a DSLR. The RAW file format contains a lot more information and dynamic range than a normal JPG (which furthermore contains compression artifacts). It allows you to preprocess pictures for better quality, color balance and vignette removal, to name a few.
It also gives you the ability to tone down lighting information before processing scans, making it easier to remove lighting information afterwards. RAW files can be huge, so consider that when picking an SD card.
Now on to the gear I don’t really use :
Tripod : That’s right, I very rarely use a tripod, at least when shooting outdoors. This is a personal choice: I trade a narrow aperture for a fast shutter speed (more on that later). Don’t get me wrong, using a tripod will without any doubt improve your scan quality, but there is a drawback: it takes time, and a lot of it, to set up a tripod for each shot (even a few seconds per shot adds up to a long total). When shooting outside, lighting conditions will rarely be ideal, so you usually want to be as fast as possible (without rushing) to avoid lighting changes while scanning. Again, if you do indoor scanning or have stable lighting conditions, there is no reason not to use a tripod! You may also find a monopod to be a good middle ground between time spent and quality.
A color checker : There is no good reason not to use one, other than that it can be rather expensive, about 100$ for the X-Rite Passport. A color checker ensures your pictures are correctly calibrated. I am going to buy one very soon.
Chrome balls and all the HDR de-lighting stuff : This is a waste of time and money in my opinion, at least for textures; it could be worth using for large 360° scans. If you scan textures, the surface you are capturing will pretty much face a single direction, making the lighting almost even throughout your scan. You can get rid of what’s left of the lighting in minutes in Photoshop rather than spend time dealing with HDRI de-lighting.
When it comes to computer rigs, if you have a standard gaming desktop, you won’t have issues processing. I’d recommend at least 16GB of RAM and a decent video card. I’m using a GTX 670 and 16GB of RAM, pretty standard stuff. Memory is probably going to be your bottleneck.
Good lighting is the hardest thing to get, because when going outside you have no control over it, you have no choice but to wait until it’s good enough. Basically there are 2 things you want to avoid :
Rain : Whatever you are scanning, it has to be dry. Rain makes everything reflective / darker depending on the material.
Changing conditions : If you cannot get a good uniform cloudy sky, at least aim for something stable, even if sunny, in which case you’ll scan something in the shade. Moving clouds are to be avoided at all cost as they can change the luminosity dramatically in seconds.
This is pretty much the worst case scenario.
The target is a cloudy sky that is both very uniform and still very bright. Don’t keep waiting for it too long though, as you can be pretty much sure you won’t get it!
How many pictures should you usually have to make a nice material?
The number of pictures really depends on the software you are using (more on that in a minute), the size of the surface you are scanning and the level of detail you are aiming for. It’s always better to take too many shots than not enough.
Indeed, overlapping is crucial. Photogrammetry software works by matching features between shots to work out the spatial position of your pictures. Therefore you must have the same feature visible in several shots, and the more the better.
So you want to scan that ground you just found, how to proceed? Where to start?
Remember when I said I could do without a tripod because I was trading a narrow aperture for a fast shutter speed? The reason I can get away with a shallow depth of field without much blur is that I always shoot top down – be careful not to get your feet in the shot! Not only does it produce the best results, but it also means your subject stays at pretty much the same distance from your camera throughout the frame, greatly reducing depth-of-field blur. Of course, that is if you scan a somewhat flat surface.
Source: Agisoft User Manual
Now on to overlapping : picture yourself as a cartographer for Google Earth. You first want to cover the whole earth – your whole subject – in your first shots, and then add successive levels of detail. First take global shots, at least 8 of them, preferably more. These probably can’t be top down; don’t worry, just walk around your subject, but be careful with depth of field. Now every shot contains all the features of your surface, which the upcoming, more precise shots can rely on for alignment.
Next, you want to create the first level of zoom : Take top down shots covering about 1/4th of your surface. Make sure these overlap by more than 50%. Start again covering 1/8th, 1/16th etc… It all depends on the level of detail you want to capture.
Working that way ensures that if your close up shots can’t align with their neighbours, they can still rely on the previous level of detail to properly align.
Of course in reality you do everything by eye, you don’t have to start bringing a ruler with you. Furthermore it might not work every time, for instance if you scan sand, you can’t step on it during the scanning process. But it’s good to keep the idea in mind at all times.
In this example, you have the overview shots in blue, the first top down level in green, and another detailed top down level in red.
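Though in practice you do all of this by eye, the arithmetic behind the overlap levels can be sketched. The helper below is purely illustrative (not from any photogrammetry tool) and computes how many shots cover one axis of a surface at a given overlap fraction.

```python
import math

def shots_needed(surface_len, frame_len, overlap=0.5):
    """Shots along one axis so consecutive frames overlap
    by the given fraction (0.5 = 50%)."""
    if surface_len <= frame_len:
        return 1
    step = frame_len * (1.0 - overlap)  # ground distance between shot centres
    return 1 + math.ceil((surface_len - frame_len) / step)

# A 4 m strip framed at 1 m per shot with 50% overlap
n = shots_needed(4.0, 1.0, overlap=0.5)  # 7 shots
```

Halving the frame size for the next zoom level roughly doubles the shot count per axis, which is why the level of detail you aim for dominates the total picture count.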
There are two main competitors, Agisoft Photoscan and the rather new Reality Capture. I have mostly been using Photoscan but started using Reality Capture recently. I wouldn’t say one is much better than the other, but as of late, the industry seems to be shifting towards Reality Capture.
Here are my pros and cons for each software, please remember I am still relatively new to Reality Capture and could get a few things wrong.
Agisoft Photoscan

Pros :

Good documentation and tutorials online
Good filtering tools to remove unwanted points after alignment
Doesn’t seem to create as many holes as Reality Capture
Texture quality seems to be slightly better in Agisoft for me
The standalone license is pretty cheap at 179$, cheaper than Reality Capture in the long run.

Cons :

Pretty slow and uses LOTS of RAM; your scan can be pretty limited in resolution if you don’t have a beast of a computer
You have to carefully pick your pictures because your computer probably can’t handle all of them.
You probably won’t get super high resolution meshes for the same reasons.

Reality Capture

Pros :

Insanely fast when processing – really, you may not even have the time to get a coffee sometimes!
Can handle thousands of pictures on a modest rig
I never had a crash yet, rare enough to be mentioned

Cons :

Creates more holes and needs slightly more manual alignment, although this could be down to my settings not being perfectly tuned yet.
Most parameters are very obscure and lack documentation and examples.
Feels too much like an all-in-one-click solution. Great when it works, a pain when it doesn’t and you have to try to fix things.
The subscription system makes it more expensive in the long run (6+ months).
I’d probably advise beginners to start with Reality Capture: it is more tolerant of badly taken shots, and you can compensate for bad-quality shots with a quantity of pictures that Agisoft probably couldn’t handle on your computer.
However, if you plan on using photogrammetry only from time to time, it is probably cheaper in the long run to go for the standalone Agisoft license.
An amazing super high detailed scan might just not look nice in your game without a good artistic eye beforehand.
The most important thing, and hardest of all is tiling. It is something you have to be aware of at all times, especially while scouting for a piece to scan. Many people ask me : “Isn’t there a faster way to tile textures and automate it, for instance using Substance?”
90% of the time, no, there isn’t.
I guess it all comes down to the belief that tiling is about removing seams. Well, yes, it is, but that’s less than 10% of the tiling process. The real work is balancing frequencies and features throughout your surface and making sure that the texture will still hold up when repeated many times. This is something that, in most cases, cannot properly be automated yet.
Maybe we should not call that “tiling” but “composing” instead.
Do not focus only on the seams, move things around in your texture, remove that large pebble that stands out too much, etc…
You cannot go from a stretch of rock like this (left) to a square tileable texture (right) by simply “removing the seams”
The way I tile my textures in Photoshop makes it quite easy to move things around compared to a 3D software, especially when you deal with 50M poly meshes.
In the end, I don’t really use ZBrush except when displacing my texture to bake all the maps I need. Maybe I’ll quickly fix a few artifacts but more often than not, I won’t do any work in ZBrush.
De-lighting is definitely a big issue when it comes to scanning 3D objects. With textures… not so much, actually. Most of the time it’s pretty quick and straightforward to fix. The most important thing is to have flat lighting when scanning. Once this is achieved, it’s just a matter of using your AO map to remove occlusion from your diffuse, then probably fixing a few spots by hand.
I almost never spend more than 30 minutes removing lighting information.
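The AO-based clean-up can be sketched as a simple divide, roughly what a Divide blend of the AO layer over the diffuse does in Photoshop. The function and its strength/floor parameters are my own illustration, not the author's exact layer setup.

```python
import numpy as np

def remove_occlusion(albedo, ao, strength=1.0, floor=0.05):
    """Divide baked occlusion back out of the diffuse.
    albedo, ao: float arrays in [0, 1]; ao = 1 means unoccluded.
    strength applies only part of the correction; floor stops
    near-black AO values from blowing out pixels."""
    correction = np.clip(ao, floor, 1.0)
    delit = albedo / correction
    out = (1.0 - strength) * albedo + strength * delit
    return np.clip(out, 0.0, 1.0)

albedo = np.array([0.2, 0.4])  # first pixel darker because it sits in a crevice
ao = np.array([0.5, 1.0])
flat = remove_occlusion(albedo, ao)  # both pixels end up at 0.4
```

The few remaining spots that the AO map does not explain are what gets fixed by hand afterwards.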
Normal maps are pretty straightforward, there are two things to consider though :
Avoid overhangs at all cost : This is something to be aware of even before taking pictures. You have to understand which areas might not displace well onto a plane and know how to fix them if you decide to shoot anyway. You can fix most of these issues by creating a custom low poly, as I explain in my tutorial. However, this is a sneaky problem because you may not notice it until very far into the process; I had a lot of stretching due to overhangs in my early scans.
An example of stretching due to overhangs in one of my first scans. A custom low poly like I do now would have fixed it.
Don’t assume your scan will be detailed enough to give you a super sharp normal map. You will need to regenerate the high frequency details by converting your albedo to a normal map and overlaying the fine details on top of your scan normals.
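What that conversion amounts to can be sketched in a few lines. This is the generic gradient trick behind "convert to normal map" filters, not the author's exact Photoshop steps.

```python
import numpy as np

def detail_normals(gray, strength=1.0):
    """Treat a grayscale albedo as a micro height map and derive
    tangent-space normals from its brightness gradients."""
    dy, dx = np.gradient(gray.astype(np.float64))
    nx, ny = -dx * strength, -dy * strength
    nz = np.ones_like(nx)
    length = np.sqrt(nx**2 + ny**2 + nz**2)
    normals = np.stack([nx, ny, nz], axis=-1) / length[..., None]
    # Pack from [-1, 1] into the usual [0, 1] texture range
    return normals * 0.5 + 0.5

gray = np.array([[0.0, 0.5, 1.0]] * 3)  # simple brightness ramp
n = detail_normals(gray)
```

The resulting detail layer would then be blended onto the baked scan normals, for instance with reoriented normal mapping or a simple overlay.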
Why do you think using specular map is important for nailing the right kind of materials?
A very tricky question to end it!
Since PBR became widely used in games, artists have been pretty divided between those who want the system to be 100% accurate, and therefore state that you should never use a specular map, and those who have found that it can make certain things look better, albeit not technically correct. I’d say both are right, though I lean towards using a specular map in specific cases. Furthermore, it is probably not as inaccurate as you think. Let me explain:
When baking organic textures to a plane, you are oversimplifying extremely complex shapes, leaves, little pebbles etc… Have you ever realised how complex moss can be?
All these irregularities cannot be faked simply with a roughness and a normal map. The main reason is that all of these little details actually cast shadows on each other. Light gets sort of “trapped” in the complex pattern of moss, for instance; it doesn’t simply bounce back up.
Tessellation will definitely help bring small cast shadows in, but it’s not enough for the little details.
Shouldn’t that be handled by our AO map?
Yes, it probably should, but the issue with AO maps is they only really work in the shadowed parts of your mesh, leaving all the directly lit parts looking very flat and “plastic”.
Using a specular map is just an easy way to fake complex behaviour of complex organic shapes.
Without specular map (left), with specular map (right), notice how the dirt and very fine pebbles look less like plastic with a specular map.
That being said, you should be careful when using a specular map. Always make sure you understand why you are doing it and what the logic behind it is. Your need for a specular map may also depend greatly on the engine you are using.
Whenever using a specular map, I only input a fraction of it through a lerp to control the amount of specular I’m changing.
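In material-graph terms that is just a Lerp node. A Python sketch of the same idea, with illustrative names and values:

```python
def lerp(a, b, t):
    """Linear interpolation, as in a material editor's Lerp node."""
    return a + (b - a) * t

def blended_specular(default_spec, spec_map_value, influence):
    """Feed in only a fraction of the scanned specular map:
    influence = 0 keeps the engine default, 1 uses the map fully."""
    return lerp(default_spec, spec_map_value, influence)

# Default 0.5 specular nudged 30% of the way toward a dark mossy value
s = blended_specular(0.5, 0.1, 0.3)  # ≈ 0.38
```

Keeping the influence low means the map only nudges the response where the fine geometry traps light, instead of overriding the PBR defaults everywhere.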
I always create a specular map for my textures because it only takes a few minutes, but in the end, I only use them for specific cases, most likely grass and moss.
Find more of my work on ArtStation!