James Candy is a filmmaker, but on his popular YouTube channel, Classy Dog Films, he devotes a lot of time and attention to 3D scanning. 80.lv talked with James about his approach to photogrammetry and quality material production. Check it out if you're willing to give 3D scanning a chance.
My name is James, I live in the United States, and I create short films and tutorials for my YouTube channel, Classy Dog Films. My goal is to one day create feature films, so I consider myself more of a generalist when it comes to CG. My “Guide to PhotoScan” series of videos has become relatively popular, and I plan to continue it with new installments this year. I’m also working on my own line of scanned textures, which I hope will rival the quality of anything put out so far.
I started using PhotoScan because I was (and still am) just God-awful at modeling, and I needed a way to create convincing models for film projects. So I downloaded the demo and started practicing with it. I think 3D scanning is especially relevant right now, because there is so much that can be done with scan data. Photogrammetry is cheap to implement and easy to learn. Because the software has advanced so much in the past decade (as well as the cameras we use), it is now insanely easy to get high-resolution scans in very little time. As long as you shoot quality photos, the software does most of the heavy lifting for you. So we’re taking a skill that most of us already possess (taking pictures), and just learning how to do it a little differently to get better results. We don’t have to start with a completely new skill that is foreign to the average person, such as poly modeling or sculpting.
I’ve used PhotoScan for all my photogrammetry work so far. The workflow is so simple, I don’t think I even needed to read the instructions the first time I opened it up. You just select the images you want to work with, align them, generate the dense point cloud, generate the mesh and textures, and export. That might sound like a lot of work, but you’re pretty much just hitting “okay” and telling the program to do it. I think PhotoScan is great for beginners to start with because the UI and workflow are so easy to learn, and of course the quality it generates can be very high. Even though it’s a fairly simple program to get started with, you still have a lot of control over the settings at each stage to guide the scan where you want it.
ZBrush is what I see being used most often to polish up scans. There’s a lot you can do with it, and thankfully it’s able to import huge data sets. Any kind of sculpting tool will probably be the easiest way to clean up a messy scan, but you just want to make sure it can handle the number of polygons you’re going to be throwing at it. If I have a small scan that’s only a few million faces, I can use my beloved Blender to do the work. But if it’s closer to 10 or 20 million faces, I know I’ll have to go about it differently.
Cleanup usually just consists of smoothing out noisy areas or filling in holes where there was no data. Once you have the scan cleaned up, the workflow is much the same as it is for high-poly sculpting: generate the lower-poly mesh(es) and bake out your maps.
One of the extra uses of scan data is material creation. Instead of starting with a flat 2D image to generate textures from, we can use our glorious 3D data to create materials with high accuracy and high resolution. While procedural material software like Substance Designer can produce very convincing results, there are still a lot of subjects that are difficult or impossible to emulate convincingly without some kind of real-world input, and that’s where scanning excels.
Christoph Schindelar has used this approach in his stunning Real Displacement Textures. You’re also starting to see it pop up in other places. CG Textures (now just Textures) even has a new category just for 3D scans! Essentially, you just take your scan data and bake the high poly down onto a plane (or from render passes looking through an orthographic camera). The mesh data gives you normal, height and even vector displacement textures. Depending on your camera rig, you can also approximate reflection maps. Of course, we’re still artists, so you can add textures of your own to create the look you want, but the scan data gives you a really strong starting point so you don’t have to mess with settings in B2M to try to fake it.
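As a rough illustration of the relationship between those baked maps, here's a minimal sketch of deriving a tangent-space normal map from a height map with finite differences. The function name and the `strength` parameter are illustrative, not from any baking tool mentioned above:

```python
def height_to_normals(height, strength=1.0):
    """Convert a 2D height map (rows of floats in 0..1) into
    tangent-space normals via central differences."""
    h, w = len(height), len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # Central differences, clamped at the borders
            dx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) * strength
            dy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) * strength
            # The normal points against the slope; normalize to unit length
            length = (dx * dx + dy * dy + 1.0) ** 0.5
            row.append((-dx / length, -dy / length, 1.0 / length))
        normals.append(row)
    return normals
```

A real baker works from the high-poly mesh itself rather than a height map, which is why baked normals capture overhangs and detail that a height-derived map can't.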
The amount you can modify a scan texture is going to depend on what program you’re in, but there are a lot of possibilities. For instance, the normal map could be referenced to create moss on one side of a field of rocks, or to add puddles to the low parts of a parking lot or muddy field. Similarly, the height map could control where an effect is added to the mesh, like adding grime into deep crevices, or even act as an alternative to a texture mask.
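As a toy version of that last idea, a height map can be thresholded into a simple 0/1 mask. This helper is hypothetical, and the threshold value would be tuned per asset:

```python
def height_mask(height, threshold, below=True):
    """Build a 0/1 mask from a height map. By default it marks texels
    below the threshold (e.g. crevices to fill with grime or puddles);
    pass below=False to mark the high areas instead (e.g. moss on top)."""
    return [[1 if ((v < threshold) == below) else 0 for v in row]
            for row in height]
```

In practice you'd do this with a node in your shader or texturing package rather than a script, but the logic is the same.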
Tips and Tricks
Sometimes when you’re shooting with a ring light, the light falloff may not be uniform. This won’t hurt your mesh, but if you use the default “Mosaic” texture blending, the falloff will be obvious in the texture. To get around this, change the blending to “Max Intensity”. Now PhotoScan will use the brightest parts of the images to generate the texture. As long as you shoot with enough overlap, this will fix any lighting issues.
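Conceptually, this blending mode picks the brightest available sample for each texel across the overlapping photos. Here's a toy sketch of that idea (a hypothetical helper, not PhotoScan's actual implementation):

```python
def blend_max_intensity(layers):
    """Blend aligned grayscale layers by taking the brightest sample
    per texel. None marks texels a given photo doesn't cover."""
    height, width = len(layers[0]), len(layers[0][0])
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            samples = [layer[y][x] for layer in layers if layer[y][x] is not None]
            row.append(max(samples) if samples else 0)
        out.append(row)
    return out
```

This also shows why overlap matters: a texel only near the dim edge of every photo has no bright sample to fall back on.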
When exporting your textures, be aware that PhotoScan only pads the UV islands if you export a JPG. For whatever reason, PNG and TIFF files don’t get this padding, and without it you may notice seams on your model when you go to render. So either export JPGs, or export both a JPG and a higher-quality format, layer them in Photoshop, and save the file out again. I have no idea why PhotoScan only pads JPGs, but it’s something to be aware of.
One trick I have seen a lot of people using to optimize memory is to combine black and white texture maps. Say you have a gloss map, a reflection map and an AO map. Since they’re black and white, they don’t need all three channels of RGB; they only need one channel. So in Photoshop you can copy one into the Red, one into the Green and one into the Blue. Now you’ve reduced the number of textures needed by two! And if you’re working with an image format that also has an alpha channel, you can cram four textures in there. Any black and white map can be used, but if you want to include the height map, be sure that the bit-depth is high enough to prevent stairstepping. And of course make sure your render engine or game engine has a way to separate out individual channels first!
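Here's the packing trick as a minimal sketch. The function names and the gloss/reflection/AO channel order are just illustrative conventions, not a standard:

```python
def pack_channels(gloss, reflect, ao):
    """Pack three grayscale maps into one RGB image:
    gloss -> R, reflection -> G, ambient occlusion -> B."""
    return [[(g, r, a) for g, r, a in zip(grow, rrow, arow)]
            for grow, rrow, arow in zip(gloss, reflect, ao)]

def unpack_channel(rgb, channel):
    """Pull one grayscale map back out of a packed RGB image
    (0 = R, 1 = G, 2 = B)."""
    return [[px[channel] for px in row] for row in rgb]
```

In a real pipeline you'd do the same thing with Photoshop channels or an image library, and the engine-side unpack is just a channel swizzle in the material.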
For the scanning itself, I recommend either Agisoft PhotoScan or CapturingReality’s RealityCapture. If you’re just starting out, I would lean towards PhotoScan, as it’s so easy to use and there’s a ton of help out there for it. If you need to capture insane detail or will be working with large data sets, say over 300-400 images, RealityCapture is probably the better bet. Both programs can produce very high quality results.
For the post-scan work, ZBrush or Mudbox will handle cleanup easily. You can generate most maps with whatever 3D program you’re comfortable with, but a lot of people also use xNormal for their normal maps. And of course good old Photoshop to round it out.
Scan data helps in our quest to make our work more photorealistic, by reducing the learning curve and providing accurate results while requiring less work than modeling and sculpting. I think we’ll continue to find more uses for scan data, as well as new tricks to increase the amount we can capture. There are thousands of nutbags all over the world coming up with new ideas for scanning all the time. What the camera did for images, 3D scanning will do for CG.
When we compare scanning to traditional modeling and sculpting, I think there are some important things to be aware of if you’re looking at the long-term picture. First off, we can only scan what exists and what we can get close enough to, right? So while maybe we could scan the scales of a lizard or a snake, we can’t scan a dragon (unless somebody sculpted one, but that goes back to traditional modeling). So if it’s something that doesn’t really exist, or is a lot different from what is normal (say comparing a car from Mad Max to a normal car), then stuff like that has to be made by hand with love. That’s where traditional skills and imagination are required, and scanning will never replace those things; at best it will aid them.
On the other end of the spectrum, we have models that are supposed to look as real as possible, and are things that actually exist. A chainsaw, a tire, a shoe. Real people. This is the stuff where scanning, when it’s done right, is very hard or impossible to beat with traditional modeling. So if you’re a modeler or a sculptor who’s working in the same realm of subjects as the scanners, you have to ask yourself whether your work can compete. Does it look real enough? Is the turnaround time fast enough? Do your models bring something to the table that scanning doesn’t? Scanning is becoming so prevalent in our work now that the threshold of what we consider “high quality” when it comes to photorealism is rising faster. So if you’re in that middle area, where you’re a traditional modeler/sculptor working on stuff you see in the everyday world, you can absolutely still thrive and produce great work compared to the scanners; but you’ll want to ensure you’re meeting or exceeding the quality people expect now.
And truthfully, I don’t think there’s any reason to worry. Scanning is easy to learn, and it’s just another tool in our kit. If I need to make a kitchen scene, I’m not going to go out and scan all the utensils you find in a kitchen. There’s a better way to do it: just model it. We all have different strengths and weaknesses when it comes to building things, so find the tools that work for you. Even if you like to model everything by hand, you may find scan data useful as reference. In the same way that the camera didn’t kill painting and drawing, scanning won’t kill modeling and sculpting.
Not to mention, we need all you modelers to retopo our huge scans!