We’ve had the pleasure of talking with Marvin Washington about his work in environment design. In this article, he discusses the way he approaches the creation of a complex scene with an abundance of special lighting effects, created with Quixel, 3ds Max and ZBrush.
My name is Marvin Washington and I’ve been a professional 3D artist for over 10 years. For most of my career I’ve been a 3D generalist working on various projects such as visualizing automotive accessories, 3D illustration, mobile apps and medical simulations. I’ve always had an interest in getting into games; a few years ago I started that journey, and I’m currently a Vehicle Artist at Turn 10 Studios, where I help ensure the quality of the cars.
Balancing functionality and cool aesthetics can be difficult because each project will have a different balance based on the brief. Defining that balance at the very beginning of a project is key; I like to find a movie or game that’s close to the balance I need and check my design choices against that project, not for style but for level of believability. To keep things functional and visually fresh, I like to block out a shape for an object that I find visually appealing, then focus on the construction of that shape, or how to fill the shape’s volume. What materials is it made of? Based on the materials, scale and form of the object, what types of fasteners are used to hold it together? How are objects made of these materials manufactured and built? I also work in the opposite direction: if I see a real-world object that I want to be part of my design, I’ll think about its real-world function and how to make the presence of the object logical. It may require something as simple as adding a decal, or reworking other objects in the scene to fit the function of the added real-world object. Being honest with yourself in these choices is important; you need to make the best choices in service to the design, not solely based on what is easier or quicker.
I started modeling the scene as a whole, blocking in modular sections. Once most of the scene was blocked out, I started refining the different modular sections and reusing models where possible. Most of the scene was modeled in 3ds Max. A lot of models in the scene began as splines because they were a fast way to get interesting shapes; splines allowed me to create interesting panels, pipes, railings and wall supports. I also used Quad Chamfer a lot, as another modeling workflow much like subdivision modeling: I would model a form with hard edges and use Quad Chamfer set to smoothing groups to get a smoother model while retaining an easy-to-edit base mesh. There is also a fair bit of poly modeling holding everything together, along with a few subdivision models.

The character was blocked out in 3ds Max, then taken into ZBrush to refine the form and sculpt folds for the suit. Once I had a faceless character in a relaxed T-pose, I sent that back to 3ds Max. Back in Max, I shrank down some scene elements and poly modeled a rough backpack to start designing the tank. I also copied geo from the hands and feet to make gloves and boots. Once I finished modeling the character’s accessories, I sent him back into ZBrush to create the several poses I would need for the planned images, then sent the model back to Max.

A few items were taken into ZBrush to boolean with Dynamesh and retopologized. I would model the main object, or blank, then model and merge subtractive and additive objects based on order of operation. The different models would then be exported as separate .objs and imported into ZBrush, where the Boolean operations take place using Dynamesh. The dynameshed models were remeshed and exported back to Max. The scene was blocked out as a whole, but I would save out individual items to be refined in simpler scenes for faster viewport performance and renders.
Those items would then be merged back into the larger scene. Since the block-out models, along with the scene as a whole, were modular in design, merging items from the different scenes was really manageable. As models were refined, I would reuse smaller elements from completed modular pieces to maintain a uniform look, while changing their placement in the unit as a whole to avoid too much repetition. I would also evaluate shape repetition in test renders and add variation when needed. Since most of the scene was created in Max, and ZBrush was only used for modeling and baking, there weren’t any workflow issues using the two programs together.
I try to approach texturing in a basic way. I come up with a general color palette that lends itself to the mood I’m going for and is somewhat logical for the design. Then I think about the building materials I’ve decided on in the design process. Next I consider coatings and finishes like paint or metal processing. For this project I used two UV mapping plugins for 3ds Max, Unwrella and UV-Packer. Rather than unwrapping models as I created them, I designed and modeled the scene fully and then unwrapped each modular piece separately. I would run each modular section through Unwrella, then manually UV sections that I wanted unwrapped in a specific way. I then used 3ds Max and ZBrush to create subdivided high-poly models for each model; the high-poly models were used to bake normal, curvature, AO and directional maps. Most of the masking was handled using Material IDs and DynaMask. A few custom scratch and scrape masks were made from photo textures.
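Material-ID masking of this kind is tool-agnostic: a baked ID map is just an image where each material region has a flat color, and a mask is extracted by color matching. A minimal sketch of the idea in numpy (the colors, array sizes and tolerance below are illustrative assumptions, not the author's actual setup):

```python
import numpy as np

def id_mask(id_map, target_color, tolerance=8):
    """Build a 0/1 mask selecting the pixels of one Material ID color.

    id_map:       (H, W, 3) uint8 array, e.g. a baked Material ID texture
    target_color: (r, g, b) of the ID region to isolate
    tolerance:    per-channel slack to absorb filtering/antialiasing artifacts
    """
    diff = np.abs(id_map.astype(np.int16) - np.array(target_color, dtype=np.int16))
    return (diff.max(axis=-1) <= tolerance).astype(np.float32)

# Tiny 2x2 "ID map": a red region on the top row, green on the bottom row.
ids = np.array([[[255, 0, 0], [255, 0, 0]],
                [[0, 255, 0], [0, 255, 0]]], dtype=np.uint8)
red_mask = id_mask(ids, (255, 0, 0))
# red_mask selects only the top row: [[1, 1], [0, 0]]
```

The tolerance matters in practice because mip-mapping or texture filtering can slightly blur the boundaries between ID colors.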
I started by creating or modifying presets for the base materials I would need for the scene, such as various painted metals and grunge, and saved presets for them. I did test renders to refine the presets and make sure they had the look I wanted. Next, the models and various baked maps were imported into Quixel, where I created a smart material using the material presets created earlier. Then various smart material presets were created with slight differences for walls, floors and core elements. Since most of the scene is very limited in color, I needed to add visual interest to the surfaces. I used a subtle bump for cast metal pieces and a different, less intense version for sheet, rolled or stamped metal. I also used layers of similar materials with slightly different diffuse colors and gloss levels, with varying masks for each material layer. This creates a surface that reacts differently as light moves across it. There are also various layers of dust, dirt, rust, scratches and edge wear. All of that together gives the appearance of a complex surface, not a single solid material.
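The layering idea described above (near-identical materials with slightly different diffuse and gloss values, combined through varying masks) amounts to repeated linear blending per channel. A toy numpy sketch of the principle, not Quixel's actual compositing code; the colors, gloss values and noise mask are made-up illustrative numbers:

```python
import numpy as np

def blend_layer(base, layer, mask):
    """Linear-blend one material layer over a base using a 0..1 mask."""
    m = mask[..., None] if base.ndim == 3 else mask  # broadcast over RGB if needed
    return base * (1.0 - m) + layer * m

H = W = 4
# Base painted metal: one diffuse color and one gloss value everywhere.
diffuse = np.full((H, W, 3), [0.20, 0.21, 0.23])
gloss   = np.full((H, W), 0.45)

# A second, slightly different "same" material, applied through a noise mask,
# so gloss varies per pixel and the surface reacts differently to moving light.
mask = np.random.default_rng(0).random((H, W))
diffuse = blend_layer(diffuse, np.full((H, W, 3), [0.22, 0.22, 0.21]), mask)
gloss   = blend_layer(gloss,   np.full((H, W), 0.55),                  mask)
```

Stacking several such layers, each with its own mask (dirt, rust, edge wear), is what breaks up a single flat color into a surface that reads as complex.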
To get the effect I was looking for, I knew I would need to create most of it in the scene itself. I started by creating a texture sheet with a few lightning strikes using some brushes I found online. Then I UVed some planes to the lightning texture sheet. Those planes were placed appropriately in the scene and given an Arch & Design material with self-illumination. I also added a few lights to the center of the core to cast light; having both the planes placed in the scene and lights illuminating it really grounded the effect in the scene.
The scene was rendered with Mental Ray, so I used photometric lights with final gather and global illumination turned on. I try to approach lighting in a logical way, meaning I place lights where there are sources of illumination. Then I add and adjust lights to enhance the mood or effect I’m going for. I started lighting the scene from the core and then worked my way into the main corridor. I also colored my lights to work on the mood early in the creation of the scene. Lighting was extremely important to this project. The mood is mostly defined by the lighting; there are some mood-enhancing elements in the forms and composition, but those elements couldn’t communicate the mood without correctly executed lighting. Lighting wasn’t only used to create mood, but also to create layering and depth.
A major factor in rendering is gaining some understanding of the renderer’s settings, especially with interiors. Understanding the settings can reduce render times and allow a more iterative approach when evaluating lighting and materials. It also pays to understand how the shaders in your renderer work: figure out which settings need to be changed to get the results you want, rather than just plugging in typical maps and hoping for the best. The shaders in your renderer control how light reacts to your surfaces, and knowing how to use them will add a level of believability to your scene unachievable any other way. Modeling to real-world scale is important; a lot of offline renderers are set up to mimic natural light, and having a scene that’s really huge or really tiny can result in lighting that looks off. It can force the use of really high or low intensity settings, making it hard to troubleshoot problems or seek outside help.

When I create a static scene, versus an interactive scene that can be moved through, I try to focus on the composition and framing. You can also make more use of post-production techniques. Making good decisions about what needs to be handled in the render and what can be done in post can save time, improve quality and open up possibilities that may not be practical to do in render. When making a scene that’s only going to be seen from specific views, it’s a good idea to prioritize how much work you put into assets by their proximity to the camera or focal point. Time and effort spent modeling and texturing the backs of objects, or areas occluded by other objects, could be spent elsewhere.
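The point about real-world scale and implausible intensity values follows from inverse-square falloff: an idealized point source delivers illuminance proportional to intensity over distance squared, so a scene built ten times too large needs roughly a hundred times the light intensity to look the same. A toy calculation of that relationship, not Mental Ray's actual photometric shading code:

```python
# Illuminance E (lux) from an idealized point light of intensity I (candela)
# at distance d (meters): E = I / d**2.  Doubling the distance quarters E,
# which is why a mis-scaled scene forces strange intensity settings.
def illuminance(candela, distance_m):
    return candela / distance_m ** 2

E_near = illuminance(1500.0, 2.0)   # 375.0 lux at 2 m
E_far  = illuminance(1500.0, 4.0)   # 93.75 lux at 4 m (double distance, quarter E)
```

The 1500 cd figure is just an example value; the takeaway is that the relationship between distance and brightness is fixed by physics, so correct scene scale keeps light intensities in a plausible, debuggable range.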