Olivier Lau shared with us an extensive study of three photogrammetry solutions (PhotoScan Standard, Reality Capture, and Zephyr Lite), in which he reviewed their peculiarities, output, processing times, and other important points.
The Goals of the Research
I have been using PhotoScan Standard, Reality Capture (subscription version) and Zephyr Lite over the past few months and wanted to dig into their specificities, determine which solution was appropriate for a particular subject, what type of output quality I could expect, and how much processing time I should plan for with my hardware when producing game engine/CG-friendly assets. Reviewing photogrammetry solutions is not an easy task: each package has its own workflow and features are not always comparable one to one. Result analysis can also be subjective, and even within the same software, one object will be processed better with a specific set of settings while another will require a different setup. Testing various combinations with high-quality settings also takes a significant amount of time, reducing the number of subjects that can be studied. I have, however, tried to deduce some peculiarities and behaviors which hopefully can shed some light on the capabilities of each solution for a given situation. Even though the solutions may perform differently in certain areas, every software package reviewed here is capable of producing excellent results, and their capabilities are often complementary.
The following licensed photogrammetry software solutions have been used:
- RealityCapture, two subscription builds (the earlier build was used for all tests unless the later one is specified)
- Zephyr Lite 4.008 and 4.009
- PhotoScan Standard 1.4.3 and 1.4.4
They are priced under $200 with a permanent license, except for RealityCapture, which is subscription-based at this price tag. In this article, for readability, RealityCapture may be abbreviated as RC and PhotoScan as PS.
Scope & Workflow
Testing has been made along the workflow I am currently using to create assets, for which the photogrammetry software is producing the following:
- a high poly mesh used to bake geometry details (normal map etc.) optionally with vertex colors for cases where the color map bake is made from vertex colors.
- a 2MP (million polygons) medium resolution mesh serving as a base to create a low poly mesh in external software.
- optionally a color map texture for the case where the final color map is a transfer from the photogrammetry software color map to the low poly mesh geometry.
This review is not exhaustive in terms of the features supported by the photogrammetry solutions; it only covers the elements needed to perform the workflow I am using.
The final baked maps are 8192×8192 (8K). The purpose was to try to push the photogrammetry solutions to a high level of detail, hence this choice of resolution.
Products of the photogrammetry solutions are processed with external software:
- ZBrush is used to remesh the 2MP mesh into a low poly mesh (usually less than 20KP), Blender to clean up the low poly mesh when needed. For surfaces, a plane3D is used as a low poly mesh.
- RizomUV is used to create UVs for the low poly mesh.
- Knald is used to bake the color map from vertex color and the normal map (with 16xAA). Normal maps are reversed in Y.
- Substance Designer is used to transfer the color map (texture) issued from the photogrammetry software to the low poly mesh geometry using the Transferred texture from Mesh baker (with 8xAA).
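The Y reversal mentioned above amounts to inverting the green channel of the normal map. A minimal sketch on 8-bit RGB pixel tuples (the function name is mine, not from any of the tools):

```python
def flip_normal_y(pixels):
    """Convert between OpenGL-style (Y+) and DirectX-style (Y-)
    normal maps by inverting the green channel: G' = 255 - G."""
    return [(r, 255 - g, b) for (r, g, b) in pixels]
```

Applying the function twice returns the original pixels, so the same helper converts in both directions.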
A single low poly mesh is generated usually from the Zephyr medium mesh. It is then manually scaled and positioned to bake meshes for the other photogrammetry solutions. This is to make sure all bakes are made along the same low poly geometry and UVs. In an ideal scenario though, the high poly mesh of a given photogrammetry software would be used to derive the low poly mesh so manual alignment would not be required. Since the latter is not always perfect, potential holes in the textures due to alignment issues are not being considered in the appreciation criteria.
Normal maps are used to test detailed geometry production. Color maps are judged mainly on their sharpness. Color distribution is not considered, as there was no significant difference in this area between color map bakes for the tested series (which may be partly due to pre-processing of the photos to make their lighting and coloring uniform). Each solution provides tools to work on color restitution and these are worth checking. This review neither uses a reference to compare results to, nor does it use numerical data to determine a quality factor. You will not find terms such as “accuracy” here, since these require a reference. Comments are made on features exhibited by the maps as visually perceived, and appreciations may sometimes be subjective. I am, however, providing 1:1 extracts of the baked normal and color maps so readers can check for themselves.
Discussions of normal and color maps are based on extracts of the full maps, chosen in areas that looked representative for each solution. One or more 512×512 areas are taken from the original maps and shown at 1:1 scale in dedicated illustrations. It would have been worth considering several areas exhibiting different features; however, this was not done due to time constraints. Other appreciation criteria are processing speed, user interface (UI) and the ability to parameterize/tweak settings.
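The 512×512 extracts described above boil down to a plain crop of the full-resolution map, sketched here on a row-major 2D pixel array (the helper name is hypothetical):

```python
def crop_extract(image, x, y, size=512):
    """Return a size × size crop of a row-major 2D pixel array,
    with the top-left corner of the crop at (x, y)."""
    return [row[x:x + size] for row in image[y:y + size]]
```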
Hardware Used & Photo Processing
Photographs have been shot with a Canon EOS 6D in RAW format (resolution 5472×3648) and processed in DxO PhotoLab for white balance correction, denoising, lighting uniformity, etc., then fed as 8-bit TIFF images to the photogrammetry software. No lens distortion correction was applied to the original photos.
Software solutions were run on a Windows 10 PC with 64GB of RAM, a Core i7-6700 4-core 3.4GHz CPU, and a GTX 1070 8GB GPU. GPU processing was enabled in every photogrammetry software.
Two 3D objects and one surface have been tested. The photo sets are not all “perfect”, so the robustness of the solutions can be tested as well.
This review is based on the following objects:
- “Boulder_8”: a large boulder, a series of 107 photos shot handheld at 50mm f/8 ISO 800. Overlap is medium to high; most photos contain depth-of-field (DoF) blur but also sharp areas. A few photos have motion blur.
- “Rock_2”: a rocky surface, 71 photos shot on a tripod at 35mm f/10 ISO 100, lots of overlap, a few photos with DoF blur, no motion blur.
- “GiantPlant_5”: leaves of a large plant (about 1.5m wide) shot handheld, 117 photos at 105mm f/8 ISO 400. Medium overlap, DoF blur on several photos, a little motion blur on the underside of leaves (not considered for map generation, only the top surface was kept), some specular reflection. This object combines thin, relatively smooth surfaces and some visible reflections.
This set of objects is far from exhaustive, and even though they cover a number of features (3D and planar surfaces, hard and smooth textures, thick and thin objects, partially reflective and non-reflective surfaces, good and medium quality photo sets), testing on more objects would definitely be useful.
Photogrammetry Software Configuration
Each photogrammetry software has its own set of settings; however, some operations are common. For each photo set, after photo alignment, the bounding box was used to focus computations on the portion of the point cloud where the object was located. Every solution features a texturing functionality, which was used for comparison with a color map baked from vertex colors. Texturing inside the photogrammetry software requires unwrapping a mesh, for which the default unwrap method was used. The unwrapped mesh is usually a lower poly count mesh, so the 2MP mesh was used. For testing purposes, an unwrap and texture generation was also performed on a higher poly count mesh about 10 times denser. This was purely to check whether texturing was performed based on mesh information only (in which case the resulting textures would differ in sharpness) or on other data such as the actual photos (in which case the textures would be mostly identical). Texturing was also done at 16K, since all solutions support this resolution. This helps in two ways: one may want to process this 16K texture (for example, sharpen it) before performing the transfer bake, and the latter may alter details less if it is of higher resolution than the 8K destination.
Zephyr has an option to sharpen the texture on generation, but this was turned off to make a fair comparison with the other solutions. Zephyr also has a texturing feature that avoids blurred areas, and this option was enabled.
Since RC does not let the user set a target polygon count for the reconstructed mesh, the high poly count to be used for all solutions was determined from the RC-generated high poly mesh in High Details mode with the default detail decimation factor (ddf) of 0.75. The other software solutions were then set up to match the resulting polygon count.
Results are presented in the form of graphs and baked map extracts. Graph nodes represent operations in the photogrammetry software, they are read from left to right and form several paths up to the circular termination node. Paths are to be seen as alternatives, they are the different scenarios which have been tested for a given photo set with a given photogrammetry software. Each node has a color representing a duration range in hours/minutes whose scale is presented at the bottom-left of the graph. The color of the terminal node of each path represents the duration range of the whole path.
An extract preview of the baked normal and/or color map is shown at the end of each path. When a color map is associated with a normal map, this is a bake from vertex colors. A color map alone is a bake from texture transfer (texture baked by the photogrammetry software then transferred to the low poly mesh geometry). The previews are not at 1:1 scale, they are resized to 200×200 (extracts being 512×512) and provided for convenience. Separate illustrations are provided with 1:1 extracts to review textures.
Paths outlined in white are “optimal” paths. An optimal path represents a trade-off between perceived quality and processing speed. This choice is highly contextual/subjective and may vary depending on needs and hardware used.
“Boulder_8” Series – RealityCapture
The RealityCapture (RC) workflow consists of photo-alignment followed by mesh reconstruction. Unlike other solutions, there is no visible dense cloud generation step. The Normal Details (photos at 50% resolution) and High Details (100% resolution) paths were used. In the latter case, default noise factor (dnf) and low-texture noise factor (ltnf) (parameters from the Advanced settings) were divided by 2 in order to potentially increase geometry details/noise. Color maps were generated from vertex color for each case as well as two texture transfers, one from the 2MP mesh, one from a denser mesh (for Normal Details, the high poly 25MP mesh was used; for High Details, a 20MP mesh was generated by the decimation of the 164MP mesh).
In terms of processing speed, the graph shows RC to be relatively fast; no path goes beyond the 2-3 hour range.
Normal Maps – RC
Normal maps show with no surprise a better definition in High Details compared to Normal Details. Looking at the High Details map, we can see the large-scale relief is sharp with a distinct differentiation between heights, flat areas have a low amount of details. The normal map with modified dnf/ltnf looks almost identical to the one with unmodified values.
Color Maps – RC
Bakes from vertex colors show better definition in High Details mode than in Normal Details mode, which is expected as the vertex density is higher in High Details.
In High Details mode, unlike for normal maps, the bake with modified dnf/ltnf values looks significantly different (actually fuzzier) than the one with unmodified values. Otherwise, bakes from vertex color and texture transfer are almost identical, the one from vertex color is maybe a bit more detailed. Bakes from texture transfers do not exhibit major differences regardless of the mode chosen or resolution of the textured mesh. Therefore, as long as we are using the texture transfer path, High Details and Normal Details mode are relatively equivalent in terms of the color map. The choice between the two will mostly be driven by the geometry resolution we want to obtain as shown above with normal maps.
Optimal Path – RC
The optimal path (nodes circled in white) chosen here is the High Details one as, even though the color map is equivalent to the Normal Details one, the normal map is better defined in High Details and processing time remains reasonable.
“Boulder_8” Series – Zephyr Lite
Zephyr has lots of configurable parameters, so the graph is a bit bigger as I wanted to test several options. It begins with photo quality detection, which is very useful for finding photos with motion blur. The quality test does not differentiate motion blur from DoF blur though, so be careful before discarding a low-score photo. For the purpose of the test, I left all the photos in.
After the initial alignment/sparse cloud generation, Zephyr generates a dense cloud. The resolution at which photos will be processed is configurable through a percentage. To make things equivalent to other software, I chose 50% (half size) and 100% (full size) paths (but any percentage may be chosen).
For this photo set, the number of points in the dense cloud is very similar regardless of the photo resolution I chose, which was quite unexpected. It is worth noting that dense clouds can be densified (and meshes too), and while Zephyr separates such processing from the main dense cloud generation, one assumption is that other solutions may combine them, resulting in denser point clouds (this has not been verified and may be wrong). The point cloud size is not necessarily a determining factor in the final quality (the same goes for the high poly mesh size); what matters is how things look once all the filters have been applied, and it turns out Zephyr can output sharp normal maps from a dense cloud with far fewer points than other solutions.
Regarding the 50% resolution path, one variant uses the High Details mode (a Zephyr preset), which limits the output to 10M vertices. A second path uses a custom mode enabling up to 20M vertices and a smaller reprojection area (used to obtain sharper geometry). A third path was made by disabling hyperplane matching, which should only be useful for thin geometry, which is not the case here. In all cases, color map bakes were made from vertex color and texture transfer. For the 100% resolution path, I used mesh reconstruction with the High Details preset, then other custom modes to see their effect on the normal map.
Zephyr features two ways to export meshes: we can either export the mesh normally in various formats, or use Export & Enhance. The latter option was used to export the high poly meshes for this review. Zephyr does not generate very high poly count meshes internally (which is good to keep the project light-weight) but can densify a mesh at export time, making it possible to export a 164MP mesh matching the density of the other tested solutions. The enhancing process is associated with a filter that analyzes high frequencies and gradients of the original photos to create displacement. This creates very fine details which can be seen on the normal maps. The amount of such filtering is mentioned in the “Export+enhance” nodes of the graph.
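Zephyr's actual filter works on the source photos and is certainly more sophisticated, but the general idea of high-frequency-based displacement can be loosely illustrated as adding back the difference between a signal and its local average. A toy sketch on a 1-D height profile, entirely my own illustration and not Zephyr's algorithm:

```python
def enhance_detail(heights, strength=0.15):
    """Amplify high-frequency detail in a 1-D height profile:
    detail = signal - local average; output = signal + strength * detail.
    Flat regions are untouched; spikes and edges are accentuated."""
    n = len(heights)
    # Local average over a 3-sample window (edges clamped).
    smooth = [
        (heights[max(i - 1, 0)] + heights[i] + heights[min(i + 1, n - 1)]) / 3
        for i in range(n)
    ]
    return [h + strength * (h - s) for h, s in zip(heights, smooth)]
```

The `strength` parameter plays the role of the filter percentage mentioned in the graph nodes: higher values add detail more intensely.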
In most paths presented above, a 2MP mesh is generated from the decimation of the high poly mesh. However, when using texturing, Zephyr actually generates a textured mesh for which we can specify the poly count. It is then possible to skip the separate decimation and just specify a lower poly count for the textured mesh, which can then be used in place of the 2MP mesh for subsequent actions. Using this method, the relevant path durations shown in the graph may be reduced by 15 to 30 minutes. The separate 2MP mesh was skipped for the path where hyperplane matching was disabled.
Let’s analyze the paths now. Through the 50% resolution path, processing speed varies considerably with the options chosen: the 1-2h, 2-3h and 3-4h ranges are all represented. Disabling hyperplane matching offers an interesting speed boost for the dense cloud generation. For the 100% resolution path, processing is slower, mostly due to the dense cloud generation. For this photo set, the 100% resolution path does not produce a dense cloud whose size significantly differs from the 50% path, so it may not be an option to retain unless it provides different outputs; let’s see in the map analysis below.
Normal Maps – Zephyr
Row 1: In the 50% resolution path, the normal map generated from the mesh exported & enhanced at 164MP with a 10% enhance filter is rather smooth and homogeneous; densification of the mesh did not sharpen the initial geometry. Still on the 50% resolution path, mesh extraction with custom settings produced a 40MP mesh which, exported & enhanced with a 15% filter, shows sharper large-scale relief, “new” medium-scale relief and detailed small-scale geometry on the surface (due to the enhancing filter). The normal map for the no-hyperplane-matching case (based on a 30MP mesh) looks close to the 40MP mesh solution (note the vertex densities of the two meshes are unrelated to hyperplane matching, I just chose two different settings there). It seems disabling hyperplane matching had no effect on the mesh geometry for this series, which is good as processing time was shorter.
Row 2: these test the 100% resolution path with High Details mesh extraction preset and 10%, 50% and 100% filter during the export & enhance phase. We can see here the effect of the enhance filter, which is not acting on large-scale geometry but adding fine details in a more or less intense way depending on the filter strength.
Row 3: these test various mesh extraction settings through the 100% resolution path. We can see a high max vertex count (the case of the 60MP meshes) leads to sharp large-scale details but also medium-scale details. Whether the latter better reflect the actual geometry is unknown though, as we are not using a reference.
From these experiments, it appears Zephyr offers a range of settings to finely configure the geometry output. Even from a dense cloud built from 50% resolution photos, it is able to produce sharp large scale details through mesh extraction using high vertex density and small scale details through mesh filtering. The increase of vertex density not only sharpened large scale details but also added medium scale details, which may be more or less desirable depending on expectations. Disabling hyperplane matching for the dense cloud generation offered a speed boost with no apparent effect on the final geometry. Even though configurability is high, I could not obtain a type of normal map similar to the RC one in High Details mode; both can be sharp but in a different way.
Color Maps – Zephyr
Color maps baked from the 50% or 100% resolution path look very similar when baked from vertex color, which is not surprising since both dense clouds have about the same amount of points. However, bakes from texture transfers look much sharper and better detailed than those from vertex colors.
Optimal Path – Zephyr
Overall the 100% resolution path did not bring any benefit over the 50% one. We can obtain a detailed normal map and sharp color map from the faster, 50% resolution path, in particular with hyperplane matching disabled: this is the path chosen as optimal along with texture transfer.
“Boulder_8” Series – PhotoScan
Like Zephyr, PhotoScan (PS) has a photo quality index tool that helps determine which photos may need to be removed from the set. For these tests, all photos have been used.
The dense cloud has been generated with the High Quality (HQ – half-size resolution photos) and Ultra High Quality (UHQ – full-size resolution photos) settings. Since the UHQ path had quite a long processing time, I also used the “max neighbors” optimization (a tweak setting not originally part of the interface) at value 50 with the UHQ path, so we have three main paths: HQ, UHQ, and UHQ opti.
The HQ path generates meshes for the vertex color bake and texture transfers, two meshes for the latter to check for any difference in map sharpness, as was done for the other solutions. The UHQ/opti paths test texturing on a single 2MP mesh. The HQ path generated a 48MP mesh even though 164MP was selected as the target poly count, so there is a maximum poly count here, probably a function of the dense point cloud size.
Regarding processing speed, the HQ path fits into the 1-2 or 2-3 hour range. However, the UHQ path has a much longer dense cloud generation time. This time is reduced in the UHQ opti path but still remains high. There may be ways to reduce processing time further by trying other values for the max neighbors parameter, but I only checked the value 50.
Normal Maps – PS
The normal map in HQ is quite smooth. UHQ and UHQ opti versions are sharper and more detailed, and they look visually identical, which means the optimization did not degrade the output.
Color Maps – PS
In the HQ path, the texture transfer bake produces a sharp and detailed output compared to the vertex color one. There seems to be no visible difference between the transfer bake from 20MP and 2MP mesh. On the UHQ/opti paths, transfer bake also produces a sharper result than vertex color, and the UHQ and opti paths seem to provide identical results. Provided texture transfer bakes look similar for every path, the choice of path will mostly be driven by the desired geometry restitution (as seen on the normal map).
Optimal Path – PS
Since the UHQ paths are quite time-consuming even in opti mode, the chosen optimal path here is the HQ one with a transfer bake for the color map. By further tweaking the optimization parameter, it might be possible to get shorter durations in UHQ opti mode; this is left to be checked.
“Boulder_8” Series – Discussing Maps
Let’s put side by side the main maps for each solution and discuss them.
Every solution is capable of producing relatively sharp geometry. In the most detailed path, RC clearly differentiates flat and hilly regions. Zephyr also achieves sharpness but with the addition of other details of medium and/or small scale (configurable); I could not sharpen large-scale details without also adding medium-scale details. PS can also produce detailed geometry in the UHQ paths, but with more softness than the other solutions. This can be useful for smooth surfaces in particular.
Bakes from texture transfers usually look sharper than those from vertex color, with maybe the exception of the RC High Details mode, where vertex colors can be similar or a little more defined than the transfer bake. Two texture transfers stand out, however: those of Zephyr and PS, which are both particularly sharp and detailed. The PS bake, though, may lack uniformity (some areas are very sharp and others quite fuzzy), while the Zephyr version looks more constant. It is possible the blur check option enabled in Zephyr during texturing had an effect on this.
“Rock_2” Series
Test scenarios in this series and the next ones are mainly focused on the optimal paths determined earlier, with a bit of testing around them. Also, maps are discussed together for all solutions.
RC tests for this series were done using the High Details mode. Processing path for RC lies within the 1-2 hours range.
Using Zephyr we are in the 2-3 hour range with the 50% photo resolution path. As with the “Boulder 8” series, the 100% resolution path did not provide a much different dense cloud but took longer to process, so it does not look optimal, consistent with earlier observations.
PhotoScan has a specific 2.5D mesh reconstruction mode that can be used for surfaces. This mode reconstructs the top of the point cloud but not the sides, provided the cloud is properly aligned with the bounding box. It is faster than 3D reconstruction and can be used with minimal or no inconvenience. As the graph shows, the 50% resolution path using 2.5D reconstruction is within the 30m-1h range. The 100% resolution path was processed much faster than with the previous series and could be used as an optimal path.
Overall this series was processed faster than “Boulder 8” for all tested solutions. There are fewer photos in this one, and also less depth. This time PS has some of the shortest processing times of all solutions, partly due to its 2.5D reconstruction mode.
The RC normal map in High Details exhibits the same type of features as with the “Boulder 8” series, sharp large-scale geometry and few small-scale details.
Zephyr normal map also shows sharp large-scale relief but lots of small-scale details. Since the two dense clouds were almost of the same size, there is not much difference between the two maps. A single type of mesh extraction has been used for this series, but as we saw earlier Zephyr has many parameters which can affect the detailed geometry, so the maps presented here are just a reduced set of possible outputs.
Regarding PS normal maps, the UHQ path provides sharper geometry than the HQ path, which is similar to the previous series. On the HQ path, both 3D and 2.5D modes have been used and we can see the results are very close, the 2.5D-based map showing a few sharper details in the contour areas. For the HQ 3D path, we can see a few vertical and horizontal small rectangular areas (about 2 to 4 pixels wide) on the normal map. I am not sure what is causing them (it may or may not be bake-related). Those are not seen in the 2.5D paths though.
Color maps in this series look quite similar. A handy way to visually compare two maps is to stack them in two layers and hide/show the top one; this is what I did when maps looked very similar.
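The stacking trick can be complemented with a quick numerical check. A sketch computing the largest per-channel difference between two maps of identical size (the helper name is mine, not from any of the tools):

```python
def max_channel_diff(map_a, map_b):
    """Largest absolute per-channel difference between two maps given
    as flat sequences of (R, G, B) tuples; 0 means identical maps."""
    return max(
        abs(ca - cb)
        for pa, pb in zip(map_a, map_b)
        for ca, cb in zip(pa, pb)
    )
```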
The RC color map bake from vertex color and texture transfer are very close, same observation as with the previous series.
For Zephyr, the two vertex color bakes do not differ much. The transferred texture bake, however, shows more contrast in particular in contour areas.
The two 50% resolution bakes for PS are quite equivalent and relatively smooth, the 100% resolution bake being sharper. The texture transfer bake is noticeably sharper than the other two, highlighting the value of performing this texturing step. As with the normal map, we can see here too those small rectangular areas in 3D reconstruction modes but not in 2.5D ones.
If we now try to compare the texture transfer bakes of each solution, they look very close to each other in terms of sharpness.
To conclude on the “Rock 2” series, all solutions were able to produce relatively detailed geometry and a sharp color map in a reasonable amount of time. This series had fewer photos than “Boulder 8”, less depth (a surface), and was made to bake on a plane; also, the quality of photos was higher. These factors may have helped in achieving more uniform results.
“GiantPlant_5” Series
In RC I used the High Details mode, and paths completed in the 1-2h range. Alignment had to be run twice to get all photos aligned.
For Zephyr, a single dense cloud and mesh were generated using what looked optimal considering previous observations. Paths are in the 2-3h range.
In PS the HQ 50% resolution path was used and ended in the 1-2h range.
RC has relatively sharp large-scale relief (ribs and macro structures) and few small-scale details. The Zephyr bake with the chosen settings has slightly less sharp large-scale relief but more details inside the structures. PS using the HQ path has smoother relief; however, the UHQ path would probably have provided more details.
Regarding RC, the bake from texture transfer looks a bit sharper than the one from vertex colors. This is likely due to the relatively low resolution of the high poly mesh (77MP) compared to earlier models (more than 160MP).
For Zephyr, the bake from texture transfer is sharper and more detailed than the one from vertex colors. This is consistent with earlier observations where the texture bake of this solution was especially sharp and detailed.
For PS, we also have a sharper bake for the texture transfer case, however, some areas are sharp and others not, which is similar to what was observed for the “Boulder 8” series.
If we consider all the color map bakes for this series, the Zephyr version seems to be the most detailed.
User Interfaces & Features
While not doing an exhaustive review of all the UI and features offered by each solution, I am highlighting here those which I found particularly relevant for my workflow.
General User Interface
RealityCapture has an especially handy, modern, workflow-based user interface. The bounding box can be moved, sized and rotated with a single gizmo-type tool. External handles enable resizing the box without having to rotate the view.
Zephyr displays in red the side of the bounding box intersecting with the point cloud, which helps in scaling the box as close as possible to the object.
For main functions such as dense cloud generation, mesh extraction and texturing, Zephyr has a 3-level wizard interface from which we can select presets, advanced settings and custom settings. All levels work on the same set of settings, they just display different levels of information; adjusting parameters in one level may affect the others. A typical usage is beginning with a preset, then refining it at the Advanced level. Besides being convenient, this approach is also an invitation to try new settings.
Every solution supports closing holes in the generated mesh, a useful feature that would otherwise take a long time to do manually. While this feature is optional in Zephyr and PS, RC seems to do it automatically and I did not find any way to disable it, which is sometimes undesirable for single-sided meshes or meshes with strongly concave areas.
All three solutions make the user progress along a workflow. Zephyr is especially detailed in this area: densification is handled separately and can be operated both during mesh generation and mesh export, providing fine control over the final rendering.
Point Cloud Editing
It is sometimes useful to remove points from the dense point cloud before reconstructing the mesh. This helps in preventing unwanted geometry which would otherwise be more difficult to remove once the mesh is built. Both Zephyr and PS provide this functionality. However, since RC goes directly from sparse point cloud to mesh, there is no way to do this. Also, even though RC provides point selection tools, it doesn’t seem possible to remove points from the sparse point cloud.
All solutions provide duration estimates for the time-consuming processes. They are, however, not always reliable, as they adjust while processing is progressing. They also all provide a cancel button for long operations, though it does not always respond promptly (as on mesh export). All solutions have a pause/resume functionality, which is convenient to temporarily free up resources during long processing.
Photo Quality Rating
Both Zephyr and PhotoScan have a photo quality rating functionality which helps triage photos, especially when the set is large and individual review of each photo would be tedious.
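Neither vendor documents how its rating works, but a common proxy for photo sharpness is the variance of a Laplacian filter response: blurry images produce weak edge responses and thus a low variance. A minimal sketch on a 2D grayscale array (my own illustration, not the algorithm used by Zephyr or PS), which, like the built-in tools, cannot tell motion blur from DoF blur:

```python
def sharpness_score(gray):
    """Variance of a 3x3 Laplacian response over the interior of a
    row-major 2D grayscale image. Higher scores suggest sharper photos."""
    h, w = len(gray), len(gray[0])
    # Discrete Laplacian: 4*center minus the four direct neighbors.
    responses = [
        4 * gray[y][x] - gray[y - 1][x] - gray[y + 1][x]
        - gray[y][x - 1] - gray[y][x + 1]
        for y in range(1, h - 1) for x in range(1, w - 1)
    ]
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)
```

In practice the scores are only meaningful relative to each other within one photo set, which matches how the triage tools are used.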
2.5D Mesh Reconstruction
As seen in the “Rock 2” series, the 2.5D reconstruction feature of PhotoScan can be used for surfaces and confers a significant speed boost to the reconstruction process, making it a great solution for this type of object.
Merging Point Clouds
Even though it was not used during these tests, the ability to merge separate point clouds to form a single mesh is useful when an object cannot be reconstructed in a single pass (e.g. the top and underside of an object). PhotoScan Standard provides this functionality. Zephyr has a project merging feature which can be equivalent, but not in the Lite version tested here.
Parameter Tweaking
In terms of parameter tweaking, as seen in the above tests, Zephyr has a good range of options which can affect the output result. RC has a few, but apparently not as many, and tests with dnf/ltnf modifications were not convincing. PS has very few visible options; tweak parameters can be added in Preferences, but since they are not visible, they are hardly usable. Having lots of options is not necessarily useful to everyone, and it may come at the expense of simplicity; this is not the case with Zephyr, however, as the user is free to choose the level of detail to go into.
Conclusion
As stated in the opening of this article, very few objects were tested here and only a subset of the features of each solution was used. The evaluation was based on perception only, not numerical data. Also, software solutions constantly evolve; for example, Zephyr’s upcoming “Blueberry” version is said to improve both depth map generation quality and speed, among other things.
All reviewed solutions were able to deliver good results, especially considering that the 8K resolution is a bit extreme. Downscaling maps to 4K or 2K provides sharpening opportunities that would tend to make the maps look more alike.
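Downscaling is essentially a block average (optionally followed by sharpening); a minimal 2× sketch on a grayscale array shows how per-pixel differences between maps get averaged out, which is one reason smaller maps tend to look alike across solutions:

```python
def downscale_2x(gray):
    """Halve a grayscale image (row-major list of lists with even
    dimensions) by averaging each 2x2 block, e.g. 8K to 4K."""
    return [
        [(gray[y][x] + gray[y][x + 1]
          + gray[y + 1][x] + gray[y + 1][x + 1]) / 4
         for x in range(0, len(gray[0]), 2)]
        for y in range(0, len(gray), 2)
    ]
```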
All things considered, the test analysis showed recurrent behaviors which can help better understand how each solution processes data. After having performed this review, I find the solutions to be complementary, each of them having some strong or improvable areas, and sometimes unique features. I think if an “ideal” solution were to exist within the context of my workflow, it would probably have the speed of RC, a mix of the geometry generation of RC and Zephyr, the configurability and versatility of Zephyr, a mix of the texturing abilities of Zephyr and PS, and PS features such as 2.5D reconstruction and dense cloud merging.