Thomas Kole talked about his amazing new tech, which allows capturing low-poly copies of complex 3D environments with techniques similar to photogrammetry.
Daan Niphuis is an Engine Programmer, and I’m a Technical Artist intern, though this is my last week at the company. Force Field VR is a studio in Amsterdam that grew out of Vanguard Games, known for Halo: Spartan Assault and Halo: Spartan Strike.
Under Force Field, they produced Landfall and Terminal, for the Oculus Rift and Gear VR respectively. The system we developed looks a lot like Photogrammetry or Photoscanning, but cuts some tricky steps. The scenes you see here are from Unreal Tournament.
Generating Low Poly Models of Large Environments
When you’re making an open-world game, or a game with large-scale levels in general, showing large structures in the distance can often be a challenge. Usually the process involves manually making a low-poly version of the distant mesh and somehow baking the textures onto it. This system can automate much of that process.
The process is not 100% automated, but it requires very few manual steps. Essentially, an artist would set up numerous camera positions from which to capture the environment, then execute a function that starts the capturing.
The artist then takes the data into Meshlab, processes it, and puts it back into Unreal. Depending on the size and complexity of the scene, the process should not take more than an hour.
Photogrammetry works by comparing many photos and looking for similarities. From these similarities it can reconstruct a sparse 3D point cloud, then look for even more similarities to reconstruct a dense 3D point cloud. We can skip this step, because we can extract this information per photo directly from UE4. We capture the environment from a number of locations, in all directions, four captures in total per direction. This way we capture base colors, normals, and world positions, which we compose into one big point cloud. We also capture a high-resolution screenshot from each point, which we use to project the textures from at the end of the process.
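As a rough illustration of the composition step (not the studio's actual code), the idea can be sketched in Python. This assumes the per-pixel world-position and base-color buffers have already been read back from UE4's capture targets into plain tuples, and writes the merged cloud as an ASCII PLY that Meshlab can load:

```python
# Hypothetical sketch: merge per-capture pixel buffers into one point cloud.
# Each capture is a list of ((x, y, z), (r, g, b)) tuples; a position of
# None stands in for pixels that hit no geometry (sky / far plane).

def compose_point_cloud(captures):
    """Merge all captures into one list of (position, color) points."""
    points = []
    for capture in captures:
        for position, color in capture:
            if position is None:  # skip pixels with no geometry behind them
                continue
            points.append((position, color))
    return points

def write_ascii_ply(points, path):
    """Write the merged cloud as an ASCII PLY file with per-vertex color."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write("element vertex %d\n" % len(points))
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write("end_header\n")
        for (x, y, z), (r, g, b) in points:
            f.write("%f %f %f %d %d %d\n" % (x, y, z, r, g, b))
```

The same structure extends naturally to the captured normals by adding `property float nx/ny/nz` columns, which helps the surface reconstruction step later on.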
With this point cloud we generate a new mesh within Meshlab. This mesh has the same shape and contour as the environment, but it’s very high-poly. It is then reduced, unwrapped, and given textures for the final model.
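The Meshlab side of this can be driven headlessly with a filter script, which keeps the step repeatable. A minimal `.mlx` sketch might look like the following; note that the exact filter and parameter names here are assumptions and vary between MeshLab versions, so check them against the filters your build exposes:

```xml
<!DOCTYPE FilterScript>
<FilterScript>
 <!-- Rebuild a surface from the colored point cloud -->
 <filter name="Surface Reconstruction: Screened Poisson">
  <Param type="RichInt" value="8" name="depth"/>
 </filter>
 <!-- Reduce the very high-poly result to a target face count -->
 <filter name="Simplification: Quadric Edge Collapse Decimation">
  <Param type="RichInt" value="50000" name="TargetFaceNum"/>
 </filter>
</FilterScript>
```

A script like this can then be applied from the command line with something along the lines of `meshlabserver -i cloud.ply -s reconstruct.mlx -o mesh.ply`.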
UV unwrapping, sometimes called mesh parameterization, is always tricky, and took a large chunk of the research time. Initially, I wanted to do that process entirely in Meshlab, but it did not produce good enough results. UVs should have large chunks, little stretching, and no overlap – three criteria which are always in conflict. I found that Maya’s automatic unwrapping, together with its packing, works pretty well. There’s a plugin for Blender called Auto Seams Unwrap which produces even better patches, but it can take a long time to compute (sometimes over half an hour for a very complicated mesh). This step could be automated further with a script.
In this case, we capture the final color of the scene – with lighting and all. This means that the final model can be used with an unlit shader, which is very cheap. But that does mean that all dynamic lighting is lost.
However, the system could be modified to capture base colors, normals, and (optionally) roughness instead, for dynamically lit scenes.
Small lights in the environment could even be baked to an emissive texture for additional detail.
Of course there’s a huge loss in geometry detail when you run an environment through this pipeline. However, the final polycount is in your hands.
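To make the polycount trade-off concrete, here is a deliberately tiny, illustrative decimator (not the pipeline's actual reduction step, which happens in Meshlab): vertex clustering, where a single grid-size parameter directly controls how many triangles survive.

```python
# Illustrative sketch: vertex-clustering decimation. Snapping vertices to a
# coarser grid merges nearby vertices and collapses small triangles, so the
# `cell_size` parameter alone decides the final polycount.

def decimate(vertices, triangles, cell_size):
    """Return (new_vertices, new_triangles) after clustering vertices onto
    a grid of spacing `cell_size` and dropping degenerate triangles."""
    cluster_of = {}     # grid cell -> index into new_vertices
    new_index = []      # old vertex index -> new vertex index
    new_vertices = []
    for x, y, z in vertices:
        cell = (round(x / cell_size), round(y / cell_size), round(z / cell_size))
        if cell not in cluster_of:
            cluster_of[cell] = len(new_vertices)
            # place the merged vertex at the cell center
            new_vertices.append((cell[0] * cell_size,
                                 cell[1] * cell_size,
                                 cell[2] * cell_size))
        new_index.append(cluster_of[cell])
    new_triangles = []
    for a, b, c in triangles:
        a, b, c = new_index[a], new_index[b], new_index[c]
        if a != b and b != c and a != c:  # drop collapsed triangles
            new_triangles.append((a, b, c))
    return new_vertices, new_triangles
```

Real pipelines use smarter reduction (quadric edge collapse preserves contour far better), but the knob is the same: pick the error or face budget, and the detail loss follows from it.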
Everything is baked into one final texture, so small texture details are also lost. High-frequency details such as wires, chain-link fences, and thin meshes can be problematic too.
For the captures of Unreal Tournament, I tried to take those out.
The one thing that the process does preserve very well is contour and shape, which is perfect for distant geometry.
There are all sorts of uses for this technology. The most obvious one is distant geometry.
But you could also use it for marketing purposes, uploading 3D action scenes of a game to a site like Sketchfab.
If you want to read more, you can find the full article on my portfolio.