Naughty Dog’s Gabe Betancourt talked about how to approach lighting in video game production.
My name is Gabe Betancourt, Lighting Artist at Naughty Dog, and I have been working in entertainment for over 15 years. I was born and raised in Miami, FL, and graduated from International Fine Arts College (IFAC) in 2003, since acquired by Art Institutes and now Miami International University of Art & Design. Before Naughty Dog, I worked at Activision/Treyarch on Call of Duty: Black Ops and Call of Duty: World at War, on the single player, multiplayer, and Nazi Zombie campaigns. My first full game dev opportunity happened at Crystal Dynamics on Tomb Raider: Underworld, ironically around the time of Uncharted’s first game, Drake’s Fortune, in 2007. Before that I worked in visual effects for TV, film, music videos, and game cinematics.
While working on an indie project a year after Black Ops, I saw a demo for The Last of Us at E3, and loved it! Being a longtime fan of Naughty Dog, I wondered if it was possible to be a part of the team and applied not thinking it would happen. Getting hired, I felt very lucky and was really grateful it worked out.
Working as a Lighting Artist
Along with light placement, we coordinate color and mood via sun, sky, shadow, and exposure to go along with story and gameplay progression. By tweaking atmospheric values, post-process effects, and source illumination, we make it all work well together in real time. We’ll blend elements to get the right feel for day or night, using fog, godrays, dust, glows, reflections, specular highlights, shadow, fires, flares, and other artificial sources, as well as lightmaps, gobos, and LUTs. We tune value direction (what we call ‘directionality’) and curate its harmony with materials, characters, and environments to achieve a grand sense of depth and scale, making specific details pop out. We emphasize faithfulness to style or photorealism, whatever the game’s art direction calls for on a moment-to-moment basis. Lighting artists are often equivalent to Directors of Photography (DPs) in film.
We work very closely with material and environment teams, character artists, game designers, concept artists, engineers, FX teams, and the art director to make sure mood fits with story and gameplay. We often go back and forth over areas where we get stuck, and vice-versa. Sometimes a material or set design works against ideal lighting, so we’ll ask to have object placement moved, textures brightened, palettes changed, walls shifted, or ceilings broken to achieve the right look. We’ll trade creative feedback on difficult details to polish. Most lighters also lend input from secondary skills, whether illustration, graphic design, environment modeling, or photography, and bring aspects of those crafts along with them. We work with most departments, but not all at once. As long as we have a general direction for a scene’s needs via concept or description, we pretty much run with it autonomously and check in with each department at different stages to make sure their focused efforts work in tandem with ours. If we veer off, each department lets us know, but for the most part, due to the sheer amount of time we’ve accumulated collaborating, things tend to fall into place on their own, and sometimes, by surprise, cool things happen that we don’t expect.
When exactly do you come into the production?
Usually towards the mid-point of pre-production, when designers finish gameplay roughs with block mesh and story direction is established. We’ll start roughing in sun, sky, runtime lights, fog, and bounce to see how well they work together. If it fits with progress, sells the vision, and feels engaging, we run with it. Afterwards, we spot-fix areas too bright or dark for gameplay and any challenges that come up during QA testing. We’re involved a bit early in the process but ramp up into full immersion towards the middle and end, after most departments have completed the bulk of their work.
Sometimes we start with block mesh, but most times we use roughed-in environments with detailing. Some argue you can’t work with block mesh (it’s too simple, too early), and to some extent that’s true. But with time, I’ve noticed a bigger payoff from roughing in the gist of the lighting for gameplay early on. Right away you get an idea of trouble areas and where transitions occur, and it also helps inspire the team a little. Other departments may get a better sense of where to put more or less detail and which colors may work better together with light. One thing that helps a lot is to have a hue pass. If the colors of the basic cubes closely resemble the base colors of the final environment (dark greens, muted browns, bright yellows, cool greys) instead of simple primaries (red, green, blue), and accents are placed where needed, color schemes from the combined departments tend to fall into place much more easily.
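The hue-pass idea can be sketched as a tiny lookup: tag block-mesh geo with a rough material category and color it with a muted, near-final base color rather than a screaming primary. The tags, RGB values, and function name below are all illustrative assumptions, not actual studio tooling.

```python
# Hypothetical hue-pass palette for block mesh: hues close to the intended
# final base colors (dark greens, muted browns, cool greys) instead of
# pure red/green/blue primaries. Values are made-up linear RGB.
BLOCKMESH_PALETTE = {
    "foliage":  (0.18, 0.30, 0.16),  # dark green
    "soil":     (0.35, 0.28, 0.20),  # muted brown
    "accent":   (0.95, 0.85, 0.45),  # bright yellow, placed where needed
    "concrete": (0.55, 0.58, 0.62),  # cool grey
}

def assign_blockmesh_colour(material_tag):
    """Return a muted base color for tagged geo; neutral grey otherwise."""
    return BLOCKMESH_PALETTE.get(material_tag, (0.5, 0.5, 0.5))
```

The payoff is that early lighting roughs land on something resembling the final color scheme, so departments converge instead of re-balancing against primaries later.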
What do you usually work with as the lighting artist?
Mostly Maya and proprietary tools. A lot of what’s used is developed in-house.
What are the peculiarities of lighting game environments?
Artistically, to provoke. When a player walks into a space, does the lighting urge a sense of presence? Does it excite wonder, dread, joy, anger, reverence, peace, or sadness? A lot goes into capturing a feeling; it takes being in touch with one’s inner sense, and watching how others react, to get it right. It’s the most difficult challenge to accomplish. Second to that, avoiding flat shapes. Lighting builds depth. A prudently lit area has a sense of focused direction (what we call directionality) but also conveys volume by shape, silhouetting values between foreground, midground, and background geometry. Technically, we wrestle with lightmap UV artifacting, resolution, UV space allocation, memory constraints, and areas too bright or too dark for gameplay. Bakes can produce artifacts and splotches we don’t plan for, and we investigate the cause. Some lights won’t appear as intended, or colors won’t look right even though we used the right source texture (as with skies). We try to push values as far as we can to get the most dramatic result, but if it interferes with gameplay we scale back. Sometimes we exaggerate to lead players around an area or make enemies more visible during combat. Foreground elements won’t always blend well with the background, and we figure out workarounds for that.
The Challenges Behind Game Environments
The greatest challenge is also the highest benefit: it’s all dynamic at runtime. The good news is that we can iterate changes quickly when they don’t depend on baking, and we can compare different looks and presets on the fly. The downside of dynamic editing comes with a team of 200+ folks working on the same thing. One day everything can look amazing, only for it all to go awry the next morning without knowing why. At times it takes some detective work to realize person A asked person B a favor, not realizing it would affect person C. So communication becomes very important. We sometimes renegotiate a given direction as a result of edits coming from different directions. It’s a double-edged sword: you get to collaborate with others toward an amazing result and be part of a team, but you also have to get out of your comfort zone and let go of some control for the benefit of a bigger picture.
Generally, lightmaps capture the quality of pre-processed rendering, such as path tracing or ray tracing with global illumination, to provide bounce, shadows, and occlusion with detailed precision. Otherwise, a game running all those features at runtime would be very slow, if it ran at all, and unplayable. Or it might work well for large-scale environments but fall short on quality up close. Ray tracing is expensive. Baking frees up CPU and GPU bandwidth for other things like AI, geometry, physics, particles, etc. Tech is improving, and runtime GI is growing in popularity, as are techniques for dynamically refreshing lightmaps. It’s hard to say if all games benefit from lightmaps. Some open-world, hub-based games fare better without them because the scale is too massive for practical purposes. In our case they’re ideal for cinematic quality and adventure-themed projects.
Generally it starts with appraising what’s beneficial to lightmap. Sometimes baking into polygon vertices can yield a good result and save lightmap space. One rule of thumb I like to use is to measure geo against the hero. Anything larger than the main character benefits most from lightmaps; anything smaller is better for vertex baking. It helps because we often judge scale in relation to character size, and our perception of detail adjusts to compensate for objects looming over the hero. Exceptions include round or long objects such as columns or door frames. A bookshelf is likely to look better lightmapped, but if it’s at knee height, it might not matter as much, or it might look better vertex-baked. Next we try to make sure UV layouts are clean. It’s vital to get the most out of texture space, which can quickly eat up memory. Then we appraise whether the resolution of a lightmap justifies the amount of screen space it takes up. Is it worth the scale it uses? If too dense we shrink it; if too low we scale it up. A mountain in the background gets decreased compared to a cave in the foreground, since the player might never approach it. Then, whatever isn’t manually UV’d gets auto-arranged. We look at the result and, from there, iterate on the art.
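The sizing heuristics above can be roughed out in a few lines. This is a hypothetical sketch, not Naughty Dog’s actual tooling; the hero height, the distance falloff, and both function names are assumptions made for illustration.

```python
# Sketch of the "measure geo against the hero" rule of thumb.
HERO_HEIGHT = 1.8  # assumed main-character height in metres

def choose_bake_method(largest_dimension, is_thin_or_round=False):
    """Pick a per-object baking strategy by comparing its size to the hero."""
    if is_thin_or_round:
        # Exception: columns, door frames, and other long/round objects
        # tend to look better lightmapped even when small.
        return "lightmap"
    return "lightmap" if largest_dimension > HERO_HEIGHT else "vertex"

def lightmap_texels_per_metre(base_density, distance_to_player):
    """Scale lightmap density down for distant geo (a background mountain)
    and keep it high for nearby geo (a foreground cave)."""
    # Assumed falloff: density shrinks linearly beyond 10 m.
    falloff = max(1.0, distance_to_player / 10.0)
    return base_density / falloff
```

A large rock face gets a lightmap, a knee-high prop gets a vertex bake, and the background mountain’s lightmap density drops because the player may never approach it.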
Indoor and Outdoor
Every scenario is different. For outdoors the camera can go almost anywhere, so there’s a lot of ground to cover to make everything look good. It can take a while, and it challenges one’s ability to give a close-up detail and a wide vista equal quality and depth. It’s difficult to achieve both simultaneously. We look at time of day and weather for opportunities of grandeur; sunny days allow for godrays, heavy rain gives us dense fog and bright reflections, and night time allows us to use fire. Some areas don’t work well, and we end up cheating a bit with runtime lights, cloud shadows, or particles. Interiors are different: a main source might not be available for good lighting, or the camera’s limited reach can make it harder to show the environment. To remedy that, we try to figure out if there’s a source we can improvise and make it hit a target area with bounce. Sometimes we’ll decide to use electricity, fire, flares, or the player’s flashlight.
Without GI our games would feel very flat and look unnatural. Calculating bounce rays from their source to every object accounts for a lot of what we see in nature with real light. If we had to do it all by hand, it would take much longer, and many unaccounted-for nuances would be lost. We avoid losing range and color in unexpected areas. DPs in film have natural light to work with, so they can concentrate on style, mood, and direction; likewise, GI gives us a natural starting point that allows us to focus on the art. Even if it’s not perfect, an approximation gives us an ideal starting point that would otherwise take much longer to achieve. Sometimes beautiful caustic effects happen by surprise, but the reverse is also true: we’ll get artifacts that exaggerate what we’re going for and may require black bounce cards (blockers) or bounce reduction to make things appear natural.
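As a toy illustration of why bounce matters (this is not a production GI solver), consider a wall facing away from the sun: without GI it is pitch black, but with one bounce it picks up a share of the light its sunlit neighbour reflects. The albedo and form-factor constants below are made-up assumptions.

```python
# Toy one-bounce model: direct light plus a single reflection off one
# neighbouring surface. Real GI integrates this over many surfaces and rays.
def one_bounce(direct, neighbour_direct, neighbour_albedo=0.5, form_factor=0.3):
    """Shade a surface with its direct light plus one bounce contribution.

    neighbour_albedo: fraction of light the neighbour reflects (assumed).
    form_factor: how much of that reflection reaches this surface (assumed).
    """
    bounce = neighbour_direct * neighbour_albedo * form_factor
    return direct + bounce

# A fully shadowed wall (direct = 0) next to a sunlit floor (direct = 1)
# still receives 0.15 units of bounced light instead of reading as black.
shadowed_wall = one_bounce(0.0, 1.0)
```

This is also where the "black bounce card" trick fits: a blocker effectively forces the bounce contribution toward zero where the approximation overshoots.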
How do you make light work with gameplay?
We look at intended paths, figuring out where to guide or distract players, and iterate on anything that may discourage or encourage advancement. We also assist combat, making sure enemies and cover in an area remain distinct. Sometimes a light can entice players to pay attention to an object or location, or help with a puzzle. Some of the most involved work is with light-based puzzles. We’ll work closely with designers to get runtime lights working well with the mechanics.
We like to push for darks when we can, but it’s important to give a sense of darkness rather than taking it literally. Players often prefer to see their surroundings in detail and lose themselves in their environment. The ideal middle ground has been to go pitch black in shadows when possible, like in torch or flashlight levels, small rooms, shallow caves, and cutscenes. But you don’t need black to give the feeling that you’re in a dark place. In real life, your eyes adjust to low light. You could say we’re reproducing that sense. In film, style comes from clipping darks; we’ll do the same if we can get away with it, or fake it with vignetting or something like that. But not all TVs display the full range, and sometimes testers brighten their monitors, so we take that into account. We also make the effort to have our screens professionally calibrated, and we observe waveform charts, color histograms, and test images to make sure we have the intended range. Players luckily have the option to set global brightness, but we prefer the art showcased at its default setting. We make an effort to balance both.
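One way to picture "faking" crushed darks: instead of clipping shadow luminance straight to zero, remap the low end with a toe that bottoms out at a small floor, so shadow detail survives on displays that don’t show the full range. The curve shape and constants here are illustrative assumptions, not the studio’s actual pipeline.

```python
# Sketch of a shadow "toe": values above toe_end pass through untouched;
# values below are compressed quadratically toward a small floor instead
# of being clipped to pure black.
def crush_shadows(lum, toe_end=0.2, floor=0.02):
    """Remap luminance in [0, toe_end] toward `floor`; pass the rest through."""
    if lum >= toe_end:
        return lum
    t = lum / toe_end                       # position within the toe, 0..1
    return floor + (toe_end - floor) * t * t  # quadratic roll-off to the floor
```

The curve is continuous at `toe_end`, so midtones are untouched, while a true black (0.0) lands at the floor value rather than clipping, which reads as dark without destroying detail on a bright or uncalibrated display.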
With destructibles, an ideal approach might be to look for interesting shadow shapes, detailed godrays, or high variance in brightness and values. For interiors, we may ask environment artists to add objects or polish areas of focus closest to the light source. With exteriors, we approach it from the sun’s point of view, interacting with rays and fog. Sometimes we’ll want to add particles near a light that mix with dust and wind direction to highlight intricate details.
It takes a lot of iteration, plus collaboration with engineers and technical directors, to figure out the tradeoffs: what to keep and what to let go of for the highest return on investment. We push it as far as we can. Interestingly enough, when we think we’ve done all there is, there’s always one more thing, right up to the last minute. Multiply minor optimizations by a lot of little features and it adds up to one major saving on the final frame-rate cost.
What is the best and most efficient way to light the scene?
Every artist has their own take on that. For me, less is more. The key light is master. Every other light, element, and detail should support the key and allow it to stand out as much as possible. If you can do that, you’ll likely have good, strong lighting direction and composition. Work with the key as much as possible and don’t take it for granted, whether it’s a spotlight or the sun: set its angle, color, intensity, direction, and shadow quality, and push for interesting shadow shapes to get the most out of it. I can’t emphasize it enough. If you can keep at it until it has what you’re looking for, 80% of the work is laid out. The rest is adding only what’s needed, when needed, placing fills and bounce to complement the key. The ideal light rig accomplishes with a handful of lights what feels like dozens. Too many lights in a scene often results in muddy, confusing direction. When this happens, I recommend deleting all the lights and starting again with the key.
If at all possible, find a really good reference from a photograph or a film that captures the overall color or mood you’re going for and try to match it with just one light. While it sounds oversimplified, if you can build a strong base for that end result, it goes a long way. And as previously mentioned, only after exhausting the most you can with one light, add another.
Lastly, I’d like to give a big special thanks to the team, for their collaboration, help, and support, which made it possible to achieve our goal and make it to the finish line!