Lars Pfeffer did a breakdown of his Observatory scenes (Swamp Observatory & Fantasy Observatory) crafted from the same assets in UE4. He shared thoughts on blockout, modeling, lighting, water surface material, and more.
Hello everyone, my name is Lars Pfeffer, and I like to build worlds! My journey essentially got kicked off back in 2006, when a friend’s dad presented a homemade Star Wars video to a 10-year-old me. I had been interested in 3D and computers in general for quite some time and was blown away by the effects and 3D scenes. When I asked him how he’d done it, his answer was just “Blender”. Even though the beginning was rough (little to no internet, bad English skills, and Blender being in its pre-2.5 days), I’ve been dabbling around with it ever since, slowly but steadily trying to piece together “how 3D works”. The artsy side of things got a boost during my media design apprenticeship.
Over time, my main interests became sort of everything that has to do with 3D, the world building in particular, and around one year ago I decided that I really wanted to do that full-time. So I saved up some money, quit my job and have been learning environment art from then on until now! Right now, I’m focusing on learning Maya and ZBrush. And after that, I’ll hopefully be good enough to get a job!
I did the swamp observatory first, then reused the same assets in the floating observatory and started free-styling without any particular concept in mind, except for the thought “let’s make it floaty and in crazy colors”. Essentially, I wanted to give the observatory a better presentation, since I had put a lot of work into the first scene, but the effort was wasted due to the poor presentation. Here are the two finished scenes:
Everything started with this image that I made when trying my hand at concept art. It’s not the prettiest in the world, but I had always wanted to make a real-time 3D environment with my own assets, so making one based on it seemed like a good fit.
Nature scenes and organic models in general are the kind of thing I usually try my hardest to stay away from. But since one should always try to improve, I thought “well, gotta try that at some point either way, so why not now”.
Building the 3D scene started with a rough blockout to get the overall size relations and composition right, as well as getting a rough estimate of how many assets I would need and which would be present the most.
To ensure that the final scene matched up with the composition of the concept, I used it as a background image for the camera and started playing around with the height, perspective and rotation of the camera in relation to a flat plane. Once the horizon lined up, I could start extruding the terrain and placing basic shapes. Of course, when working in just screen space, everything is placed weirdly in 3D space, so getting things to look right and having correct scales in 3D requires some back and forth between the camera and perspective views, moving things around, resizing, checking back with the concept and so on.
What was really handy here was the walk navigation option in Blender. At any time, I could just hit Shift+F and start flying through the scene with the common first-person controls found in games. Gravity for walking can be activated with TAB, and since I had the view height set to 1.7 m in the user preferences, I could also easily preview whether things were too small, big, high or low when seeing through the player’s eyes.
Once the blockout was done, I exported everything as one big .fbx file and imported it into UE4. (A big thanks to Blender’s .fbx exporter presets: the scale and axes were all lining up with the default settings! Yay!)
What I would’ve done differently, now that I know better:
Based on my experience with this project, I would recommend starting to replace the blockout meshes with correctly categorized and named assets as soon as possible. I made the mistake of jumping straight into each asset and detailing it, moving on to the next one and so on, which resulted in the scene filling out quite slowly and forced me to go back and redo the detailing quite a few times, because it turned out later that the assets weren’t playing well together visually.
To avoid this, I would recommend doing the blockout as usual, importing it, testing some lighting and maybe some effects, but immediately after that, start preparing the asset folder structure, splitting the blockout into the major assets with correct filenames, and importing them into the game engine as soon as possible. This way, the organizational part is taken care of at the beginning, when things are simpler to keep track of, before they become overwhelming. It improves motivation during production, since there’s less confusion about how assets should be named, where they should go and how many variations you’ll need, and it also reduces overall visual noise in the scene.
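To give a flavor of what such a convention could look like in practice, here is a minimal Python sketch that derives a categorized folder from an asset name and builds the tree up front. Everything in it is hypothetical (the prefixes, folder names and helper are my own illustration, not part of the original workflow):

```python
import os

# Hypothetical naming convention: PREFIX_Category_Name_Variant,
# e.g. "SM_Rock_Large_01" -> Meshes/Rock/
PREFIX_TO_FOLDER = {
    "SM": "Meshes",
    "T": "Textures",
    "M": "Materials",
}

def folder_for_asset(asset_name: str) -> str:
    """Derive a content sub-folder from an asset's name prefix and category."""
    prefix, category = asset_name.split("_")[:2]
    base = PREFIX_TO_FOLDER.get(prefix, "Misc")
    return os.path.join(base, category)

def build_structure(root: str, asset_names: list[str]) -> None:
    """Create the categorized folder tree for a list of blockout assets."""
    for name in asset_names:
        os.makedirs(os.path.join(root, folder_for_asset(name)), exist_ok=True)
```

The point is simply that a predictable name maps mechanically to a predictable location, so there is never a question of where a new variation belongs.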
The rest then was quite straightforward. For most of the assets, I would take the low poly and either use it as a base to sculpt high polys in ZBrush or make the high polys with standard box/Sub-D modeling in Blender. Then I would retopologize them in Blender or ZBrush, or just refine the box-modeled low polys, create a cage mesh and export them for Substance Painter. I usually try to bake exclusively in Painter for convenience.
A word on file management:
I think I’ve found a good workflow for file management when working with the Unreal Engine now. Basically, it turned out to give me the least hassle to have one source file folder with a systemized naming and ordering convention, holding all the source files, including the ZBrush and Substance files, and then to have the same structure inside the UE4 content folder, but with just the final low polys and textures. This ensures that work-in-progress and source files that are not needed for the final assets won’t interfere with UE4, which can be really annoying.
Also, setting up Painter to export textures directly to the UE4 asset folder instead of the source files folder was quite a time saver, as UE4 constantly monitors its content folders for changes and applies them immediately to the materials.
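The "same structure on both sides" idea is easy to automate. Below is a small sketch, purely illustrative, that walks a source tree and recreates its folder hierarchy under a content root, so exported low polys and textures always have a matching home:

```python
import os

def mirror_structure(source_root: str, content_root: str) -> list[str]:
    """Recreate the source folder hierarchy under the engine's content
    folder, so final meshes and textures mirror the source files' layout.
    Returns the list of directories that now exist on the content side."""
    created = []
    for dirpath, _dirnames, _filenames in os.walk(source_root):
        rel = os.path.relpath(dirpath, source_root)
        target = content_root if rel == "." else os.path.join(content_root, rel)
        os.makedirs(target, exist_ok=True)  # safe to re-run at any time
        created.append(target)
    return created
```

Since only folders (not work-in-progress files) are mirrored, the content side stays clean while still matching the source layout one-to-one.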
Modeling the Observatory
Honestly, there is really nothing special modeling-wise regarding the observatory. Just low poly, high poly, Substance, and that’s it. No weighted normals, no complex procedural in-engine materials, no crazy material blending setup.
However, even though I did not plan to model the inside of the observatory, it turned out to be a necessary step for getting the proportions and positioning of the windows right. I also had the telescope placed in quite an odd way initially, and having to at least block out the interior helped increase the believability, even when viewing from the outside. (It also meant that you’d have something to see through the windows, so that’s another plus.)
The roof was also somewhat interesting: texture repetition was quite obvious on the roof segments, so I made 3 material variations for those inside Painter that could be quickly swapped out in the engine to break repetition.
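The variation-swapping idea can be sketched as a tiny assignment routine. This is not how it was done in the engine (there, the material instances were swapped by hand); it just illustrates the rule of never letting two neighboring segments share a variation. The material names are hypothetical:

```python
import random

# Hypothetical names for the three Painter-made roof material variations.
VARIATIONS = ["M_Roof_A", "M_Roof_B", "M_Roof_C"]

def assign_variations(segment_count: int, seed: int = 7) -> list[str]:
    """Pick a variation per roof segment, never repeating the previous
    segment's material, so visible texture repetition is broken up."""
    rng = random.Random(seed)  # seeded, so the layout is reproducible
    assigned = []
    for _ in range(segment_count):
        choices = [v for v in VARIATIONS if not assigned or v != assigned[-1]]
        assigned.append(rng.choice(choices))
    return assigned
```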
Due to the asset flip nature of the second project, I actually did not make any changes to the rock assets at all. I was just very fortunate to be able to cover up the shortcomings by the different composition and lighting. This just goes to show how important these two components are! They can really make or break a scene.
I was looking through some references on the web and came across Victorian entrances. Those looked quite cool, so I decided to make one similar to that style.
However, just having a plain old door turned out to be quite boring. Then I thought of film noir and detective movies, and how they always had cool-looking doors with milky glass with text on it and everything, and figured it would be an interesting case to test my glass material on.
A little side note here:
I wanted to standardize the texturing and material process as much as possible to streamline the workflow, so I made a couple of master materials that could then be instanced, with the textures swapped in and out quickly. This was especially important for me, as I used the UE4 preset when exporting textures from Substance. This preset packs roughness, AO and metalness into the individual color channels of a single texture. Great for keeping things organized, but bad if I had to remember and connect the individual channels every time I wanted to make a new material. I’m way too lazy for that.
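For anyone unfamiliar with channel packing, here is a minimal Python sketch of the idea behind that preset: three grayscale maps ride in the R, G and B channels of one texture, and the material graph reads each channel back out. (The R = AO, G = roughness, B = metalness layout matches the common "ORM" convention; the helpers themselves are just an illustration.)

```python
def pack_orm(occlusion, roughness, metallic):
    """Pack three grayscale maps (flat lists of 0-255 values) into one
    RGB texture: R = ambient occlusion, G = roughness, B = metalness."""
    return list(zip(occlusion, roughness, metallic))

def unpack_channel(packed, channel):
    """Read one grayscale map back out of the packed RGB texture.
    channel: 0 = AO, 1 = roughness, 2 = metalness."""
    return [texel[channel] for texel in packed]
```

Wiring those channel reads once into a master material, instead of per material, is exactly what makes the master/instance setup worth it.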
Having everything already wired up in a general-purpose kind of way was really convenient. My current collection of master materials includes PBR, PBR alpha, PBR alpha plant (with SSS), and PBR glass (which can also be used for water, but the swamp water had so much going on that it needed its own material).
Because I did not want to create an extra material for the milky glass, I attempted to fake the roughness by using a lot of small bumps. I made those bumps inside Painter, slapped on the logo I made in Illustrator, exported everything, and the rest was just changing the IOR inside the material instance. The transparency could also be controlled this way with a small tweak to the material, but I preferred to do that inside Painter to keep the pipeline as simple as possible. One of the great things about using master materials and instances is that once set up, parameters update immediately, without any downtime from re-compiling the materials and whatnot, which was quite nice for finding the sweet spot for transparency and IOR.
In the end, it turned out to be more like hammered glass instead of milky glass, but I liked it, so it stayed.
Struggles with Texturing
In the blog, I’ve mentioned that I struggled with texturing the observatory. Fortunately, I was able to keep the assets separate, as it was mainly the main body that was messing up the texturing. The pillars and windows got individual textures, and for the main body, I just used the base sandstone material from Substance Source that I used in Painter before.
Lighting & Colors
The inspiration for the lighting definitely came from the Spyro Reignited Trilogy, which I was playing at that time, and I wanted to make something with that intense, stylized, colored lighting as well.
When talking about lighting, I think it’s better to switch back to the swamp observatory for this one, as that scene makes it easier to demonstrate the basic idea behind the lighting process I used for both of these environments.
I fought with the lighting quite a bunch because it was kind of the first time I was approaching this topic for real. I had a ton of issues with the lighting being too bright, too dark, too saturated, too mushy and so on. Eventually, I found out that I was only thinking about the light, but not the shadows, which flattened everything.
(Unfortunately, I haven’t made a screenshot of that, but just imagine bad omni lighting that makes everything look like a PS2 game.)
I also found out that a physically accurate approach would not get me anywhere, because my concept art turned out to be physically inaccurate to begin with: with the sun shining from behind, the observatory would not be brightly lit on the front like that. Nevertheless, that was the concept, and it worked there, so I figured I should try to stick to it and make it work somehow.
And after some “realistic” approaches, I was like “Why should I restrain myself to doing it physically correctly when all I want is to recreate the mood of the concept in 3D? Heck, even in the movie industry they fake things with artificial lights all the time”, so I came up with this process:
First, I removed every light, including the lighting coming from the sky sphere, leaving the scene pitch black. The skydome itself already looked the way I wanted, though, and I didn’t want to change that. It’s the canvas of the scene and the basis for everything else in the image, so it needs to look right; otherwise, everything else won’t work.
After that, I added a skylight independent of the colors and intensity of the skydome and played with the values. The goal was to light the scene just enough so that one would be able to see stuff, while keeping it darker than one would expect, because:
I then started faking big time by adding point lights like crazy to light everything exactly the way I wanted it to look. It’s physically inaccurate as heck, but if it looks good, so what. The low-intensity skylight helps to bind everything together, without bringing out objects that I don’t want to stick out.
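One reason this approach stays controllable is that light contributions simply add up: a dim ambient skylight plus many localized point lights, each falling off with roughly the inverse square of distance. A rough sketch of that math (a simplification; UE4's actual point lights also apply a windowing function near the attenuation radius):

```python
def point_light_contribution(intensity: float, distance: float, radius: float) -> float:
    """Rough inverse-square falloff, cut to zero beyond the light's
    attenuation radius, so each fake light only touches a local area."""
    if distance >= radius:
        return 0.0
    return intensity / max(distance * distance, 1.0)

def total_lighting(skylight: float, point_lights: list[tuple[float, float, float]]) -> float:
    """Sum a dim ambient skylight with every faked point light hitting a
    spot. point_lights: (intensity, distance, radius) triples."""
    return skylight + sum(point_light_contribution(i, d, r) for i, d, r in point_lights)
```

Keeping the skylight low means each added point light visibly moves the result, which is what makes "light everything exactly the way I want" practical.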
Side note: That would not work with the dynamic time of day or more open-ended environments, but luckily, that was none of my business for this project. When doing a dynamic scene, I’d suggest blocking out and rendering the scene under various different lighting conditions first, and then using those renders as a base to do concept painting.
I proceeded to add point lights to fake bounce lighting and light bleed into the rest of the scene. And don’t forget to bake from time to time to get an accurate representation of how things will look in the end! After that, it was just a matter of fine-tuning the colors, temperatures, and intensities to get to an overall pleasing image.
One of my goals was to get as close to the final result as possible without using Post-FX, except for bloom and AO. I’d rather have a scene that works without Post-FX than one that depends on it. The Post-FX should just be the icing on the cake, at least I think so.
LUT grading and a vignette followed as the last polishing step to make some of the colors pop more and to make the image look more consistent overall. Though to be honest, I’ve noticed that the swamp observatory scene really has problems on some screens. It looked fine on mine, but bad on my tablet, so yeah, lesson learned: never color grade on a badly calibrated screen, and also not with f.lux turned on. Yeah, I know, it should’ve been obvious, but I mostly worked on those scenes at night and didn’t think about it. Don’t make the same mistake!
Water Surface Material
The shader and rendering steps were actually smooth for most of the time because of the master/instance material setup. There were some minor hiccups along the way, but the biggest hurdle was definitely the water surface of the lake.
The water surface material:
I didn’t want to use just screen space reflections for the water, because I knew the reflections were almost half of what made this scene look interesting, so half-assing on that front wasn’t an option. And reflection probes weren’t getting the job done either.
Setting up planar reflections itself wasn’t the problem though, the problem was having a surface that was reflective, while also translucent, while also being fully shaded in terms of light, shadow, AO and stuff. I could go into the technicalities of why this turned out to be an issue for a deferred renderer, but let’s just say, it’s the way it is. (But it’s a really interesting topic if you’d like to learn a bit more about how engines render stuff, so it might be worth checking out!)
The first tests with somewhat promising results came about when using alpha-masked materials. The problem is that alpha masks are either on or off, so they’re not usable when looking for a smooth Fresnel effect. I managed to work around that with some temporal dithering, but that was, well, dithery, and I wanted the water to look sharp, so that approach was discarded.
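For context on what that dithering workaround does: a smooth alpha value is turned into a binary on/off mask by comparing it against a position-dependent threshold, so the pattern averages out to the intended opacity over a small pixel block. A minimal sketch of the spatial (Bayer-matrix) version of the idea, the temporal variant additionally shuffles the pattern each frame:

```python
# 4x4 Bayer matrix; each entry becomes a threshold in (0, 1).
BAYER_4X4 = [
    [0, 8, 2, 10],
    [12, 4, 14, 6],
    [3, 11, 1, 9],
    [15, 7, 13, 5],
]

def dithered_mask(alpha: float, x: int, y: int) -> bool:
    """Binary screen-door transparency: a pixel is opaque when its alpha
    exceeds the local Bayer threshold, so a 4x4 block of pixels shows
    roughly alpha * 16 opaque pixels."""
    threshold = (BAYER_4X4[y % 4][x % 4] + 0.5) / 16.0
    return alpha > threshold
```

This is also why it reads as "dithery": each individual pixel is still fully on or off, and only the average over an area matches the smooth Fresnel ramp.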
Another option was to make the water surface forward rendered, which meant one more rendering pass, which meant it was going to be more expensive on the rendering budget: First, the scene gets rendered normally, but without the water surface, and then a second time, but this time just the water surface. The upside: now you have reflection data as well as image data from under the surface to distort.
I also added a little smooth fade based on scene depth to make the transition from above to underwater less sharp. (I initially wanted to add green plant stuff around the shores, and just accidentally plugged it into the opacity channel, then played around with it and figured it could look good, so combined it with the Fresnel opacity.)
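The depth-based fade boils down to one small formula: compare the depth of the scene behind the water with the depth of the water surface itself, and ramp opacity over a chosen fade distance. A sketch of that math (the parameter names are mine, not the actual material pins):

```python
def depth_fade(scene_depth: float, pixel_depth: float, fade_distance: float) -> float:
    """Soft shoreline fade: opacity is 0 where the water surface touches
    geometry and ramps to 1 once the ground is fade_distance behind it."""
    t = (scene_depth - pixel_depth) / fade_distance
    return max(0.0, min(1.0, t))  # clamp to [0, 1]
```

Combined (e.g. multiplied) with the Fresnel-driven opacity, this is what softens the waterline where it meets the shore.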
Even though the swamp observatory scene did not come out exactly the way I would’ve liked, it was still a great learning experience, and I was already able to apply and improve upon what I learned here in the next project! I know that one will never be done learning, but after each project there will be new improvements to your workflow, and I feel like mine is finally getting closer to where I want it to be.
For everyone reading who is also just at the start of their journey, I don’t think I can sum it up better than the wise Shia LaBeouf did: Just do it!