@Tristan: I studied computer graphics for five years. I've been making 3D art full-time for about half a year now, but I had some experience before that. It's hard to focus on one thing; it took me half a year to understand most of the vegetation-creation pipelines. To speed up your workflow, maybe spend a bit of time with the Megascans library. Making 3D vegetation spans everything from going outside for photo scans to profiling your assets. Start with one thing and master it. @Maxime: My technique and Z-passing on distant objects are pretty much the same (minus the higher vertex count). I would start using this at about 10-15m+. Within this inner radius you are using (mostly high) cascaded shadows; the lower the shader complexity in these areas, the fewer the shader instructions. When I started this project, the polycount was a bit too high. Now I've found the best balance between a "low-poly" mesh and the least possible overdraw. In short, this technique trades a slightly higher vertex count on the mesh for reduced quad overdraw and shader complexity. In terms of visual quality, a "high-poly" plant will always look better than a blade of grass on a plane.
Is this not like Gear VR or anything else?
The developers from Mapbox showed how you can use geo-data with Unity to create fantastic and accurate virtual spaces, based on real-world locations.
My name is Greg Lemon, and I am a Unity developer at Mapbox. Prior to this role, I was a lead gameplay engineer at a small indie game company called Funomena, where I worked on an award-winning interactive VR fairy tale called Luna. Prior to that, I worked in a wide variety of roles spanning art, design, and engineering. I was a big data engineer at AT&T, a lead character animator on Star Trek: Online, a character TD on the film X-Men 2, and an art director on several iOS titles. I also teach! I’ve created syllabi and taught classes at California College of the Arts, San Francisco Art Institute, and the Savannah College of Art and Design.
At Mapbox, I get to work on a wide variety of awesome projects, from helping to design and build out the SDK, to creating demo projects that showcase our initiatives in map construction, feature visualization, and AR.
I’m Siyu, the Engineering Manager of the Unity team at Mapbox. I support our team as we build out the best mapping tools for Unity. Prior to starting the Unity team, I was on the Satellite team at Mapbox, working on our imagery processing pipeline. In a past life, I was a visual effects artist at Lucasfilm and DreamWorks Animation, working on live-action visual effects and CG animated films such as Pacific Rim and Rango.
We gather data from a variety of sources, including OpenStreetMap, satellites, anonymous sensor data, government-provided datasets, and third-party commercial data. As a developer-focused platform, Mapbox also allows developers to upload their own data with flexibility and access-level controls. Any data available on the Mapbox platform is available in the Maps SDK for Unity. This means being able to use Mapbox maps as well as any custom data developers might have.
The Maps SDK for Unity takes care of the hard part of bringing the map features into Unity as meshes and game objects that developers can use to add custom gameplay behavior, replace map meshes, or change the rendering style of a particular map feature.
The Maps SDK for Unity
The Maps SDK for Unity streams data into your game at runtime: map tiles are requested from Mapbox’s APIs as your game needs them. This ensures you always have the most up-to-date map, and it keeps your game scalable while staying small, since you only request tiles when you need them.
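To make the tile-streaming idea concrete, here is a minimal sketch of the standard Web Mercator ("slippy map") tile math that maps a longitude/latitude and zoom level to the tile indices a client would request. The `tile_url` pattern is illustrative only; check the Mapbox documentation for the exact endpoint your SDK version uses.

```python
import math

def lonlat_to_tile(lon: float, lat: float, zoom: int) -> tuple[int, int]:
    """Convert a WGS84 lon/lat to Web Mercator (slippy map) tile indices."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y

def tile_url(tileset: str, zoom: int, x: int, y: int, token: str) -> str:
    # Hypothetical URL pattern for illustration; consult the Mapbox
    # vector-tile API docs for the exact format.
    return (f"https://api.mapbox.com/v4/{tileset}/{zoom}/{x}/{y}.vector.pbf"
            f"?access_token={token}")

# The tile covering downtown San Francisco at street-level zoom:
x, y = lonlat_to_tile(-122.4194, 37.7749, 15)
print(tile_url("mapbox.mapbox-streets-v7", 15, x, y, "YOUR_TOKEN"))
```

At higher zoom levels each tile covers less ground, which is why a runtime streaming model keeps the download footprint proportional to what the player can actually see.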
We’ve taken steps to create a robust style editing pipeline that wraps some of the best styling features found in Mapbox Studio into a Unity-relevant workflow. Deciding how a map looks begins with selecting how the map data will be modeled in the world. We offer several feature extrusion and visualization options that let users customize various qualities of the resulting 3D map feature meshes.
Once mesh construction options have been decided, users can select how various map features will be shaded and textured. For buildings, users can select from one of several pre-packaged styles for quick and easy out-of-the-box look development. Users can also create their own styles, which are Unity scriptable objects containing links to materials and UV layout settings. Additionally, we include a few special Mapbox materials which allow for procedural map feature colorization; users can assign mask textures to these materials and use scriptable palette objects to assign colors at runtime.
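The mask-plus-palette idea above can be sketched engine-agnostically: a mask stores a palette index per pixel, and a swappable palette decides the final colors at runtime. The palette values and class names below are made up for illustration and are not the SDK's actual API.

```python
# Hypothetical palette, analogous to a scriptable palette object:
# index 0 = roof, 1 = wall, 2 = trim.
PALETTE = ["#c94f4f", "#e8e0d0", "#4f6ec9"]

def colorize(mask: list[list[int]], palette: list[str]) -> list[list[str]]:
    """Replace each palette index in the mask with its color.

    Swapping in a different palette restyles every feature that
    shares the mask, without touching the mask texture itself.
    """
    return [[palette[i % len(palette)] for i in row] for row in mask]

mask = [[0, 0, 2],
        [1, 1, 2]]
print(colorize(mask, PALETTE))
```

The benefit of this split is that one mask texture can drive many looks: assigning a different palette object recolors the whole building set in one step.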
Styles can be assigned and edited on a feature-by-feature basis. Buildings, roads, waterways, parks, or any other type of vector data feature can have its own unique style, which allows for a significant amount of customization in an easy-to-use UX.
Finally, styles from Mapbox Studio can be used to render raster data, which is used for the ground geometry. By combining styles from Mapbox Studio with the ability to customize the look and feel of individual 3D map features, developers can create incredibly diverse and unique looks for their Maps SDK for Unity projects.
In the voxel example, we combine our land use data with global elevation data. At a high level, the voxel world is created by first generating a high-contrast raster map from our land use data; each pixel is then queried to determine which type of voxel to generate at that location. We use our global terrain data to decide how many voxels to stack at a given location, driven by the real-world elevation there. It’s a great example of how you can push map data to create a stylized environment that’s almost unrecognizable as a traditional map.
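The two-raster lookup described above can be sketched in a few lines: one grid supplies a land-use class per pixel (choosing the voxel type), and a matching elevation grid chooses how many voxels to stack. The class names and the metres-per-voxel scale are placeholders, not values from the actual demo.

```python
# Hypothetical land-use classes for illustration.
LAND_USE_TO_VOXEL = {0: "water", 1: "grass", 2: "rock"}

def build_voxel_columns(land_use, elevation, meters_per_voxel=10.0):
    """One (voxel_type, stack_height) column per raster pixel."""
    columns = []
    for row_lu, row_el in zip(land_use, elevation):
        for cls, meters in zip(row_lu, row_el):
            # Elevation drives the column height; always place at least one.
            height = max(1, round(meters / meters_per_voxel))
            columns.append((LAND_USE_TO_VOXEL[cls], height))
    return columns

land_use  = [[1, 2], [0, 1]]
elevation = [[12.0, 95.0], [0.0, 30.0]]
print(build_voxel_columns(land_use, elevation))
```

The rendering step (instancing one cube mesh per voxel, or greedily meshing each column) is then independent of where the data came from, which is what makes the style reusable anywhere on the map.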
In urban areas, we focus on leveraging our mapbox-streets-v7 vector tiles, which store buildings and streets as points, lines, and polygons. The Maps SDK streams in the vector tiles and creates the meshes that constitute the 3D maps that you see.
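A rough sketch of the polygon-to-mesh step: take a building footprint from a vector tile (a closed 2D ring) and extrude it by a height, producing the wall geometry. The real mesh generation is more involved (ear-clipping triangulation for the roof cap, winding checks, UVs), but the core idea fits in a few lines.

```python
def extrude_footprint(ring, height):
    """ring: [(x, y), ...] CCW footprint polygon -> (vertices, wall_quads).

    Vertices are the bottom ring at z=0 followed by the top ring at
    z=height; each wall quad is four indices into that vertex list.
    """
    bottom = [(x, y, 0.0) for x, y in ring]
    top = [(x, y, height) for x, y in ring]
    vertices = bottom + top
    n = len(ring)
    # One wall quad per footprint edge: two bottom verts, two top verts.
    wall_quads = [(i, (i + 1) % n, (i + 1) % n + n, i + n) for i in range(n)]
    return vertices, wall_quads

# A 10m x 6m rectangular footprint extruded to 25m:
verts, quads = extrude_footprint([(0, 0), (10, 0), (10, 6), (0, 6)], 25.0)
print(len(verts), len(quads))  # 8 vertices, 4 wall quads
```

In practice the extrusion height comes from the tile's building attributes where available, which is what lets the generated city match real skylines.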
In more natural landscapes, we use our global elevation layer to create 3D meshes of terrain coverage. Then we take our satellite imagery and drape that over the 3D terrain meshes as textures. Both these pathways to generating a map should be feasible with a few clicks after importing the Maps SDK into your project.
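The terrain path can be sketched the same way: turn a square grid of elevation samples into mesh vertices, with UVs spanning [0, 1] so a satellite image for the same tile drapes over the surface as a texture. The grid spacing and sample data below are placeholders.

```python
def terrain_mesh(heights, cell_size=10.0):
    """heights: N x N grid of elevations -> (vertices, uvs).

    Vertex layout is (x, elevation, z); UVs map the whole grid onto
    the unit square so the tile's satellite texture lines up exactly.
    """
    n = len(heights)
    vertices, uvs = [], []
    for j in range(n):
        for i in range(n):
            vertices.append((i * cell_size, heights[j][i], j * cell_size))
            uvs.append((i / (n - 1), j / (n - 1)))
    return vertices, uvs

heights = [[0, 2, 5],
           [1, 3, 6],
           [2, 4, 8]]
verts, uvs = terrain_mesh(heights)
print(len(verts), uvs[-1])  # 9 vertices, last UV is (1.0, 1.0)
```

Triangulating the grid into two triangles per cell then gives a standard heightmap mesh, ready to receive the draped imagery as its albedo texture.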
If we were to use some texturing trim sheets and other elements like lighting sources or procedural vegetation, could we use this data in combination with your geo-data?
This is where the power and flexibility of the Maps SDK really shines: since the map objects are all generated as game objects with meshes, you can use any tools or data available in Unity to customize your environment. All the elements of the map are compatible with most 3D game development workflows.
What are the advantages of using this content for games?
One of the main efficiencies of procedurally generated worlds such as this one is that they allow you to scale quickly and efficiently. By designing procedurally, you create a set of rules to render an environment. Doing this once in the Maps SDK lets you apply your environment style to Mapbox’s entire global map, which allows small teams to develop games the size of the entire world.
What is your plan for the next iteration of the software?
Right now we’re focused on making the Mapbox Maps SDK for Unity as functional and flexible as possible. We’re working with game studios to really understand the tools needed to make the next generation of location-based games. For the time being, we’re focused on Unity; however, the map data that enables all of this is available through a public web API, so it’s available to developers on any platform.