Lars Laukens talked about his approach to setting up volumetric clouds in Unreal Engine 4.
What is a volumetric cloud in UE4? Is it a primitive with some cloud texture on it? How does it work?
Volumetric clouds have three dimensions, unlike regular cloud solutions such as skyboxes or HDRIs, which are 2D images.
There are several ways you could try to create a volumetric cloud, but in my research I used Ryan Brucks’s method of raytracing a volume texture.
This method is also inspired by Guerrilla’s paper on the subject.
It uses a custom raymarcher and in-engine generated pseudo-3D textures.
The raymarcher steps from the camera through a primitive (in this case a box) and samples the 3D noise texture to build up an alpha value, which is then used to calculate lighting.
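The density half of that march can be sketched in a few lines of Python (a hypothetical stand-alone sketch, not Ryan’s actual shader code; `sample_noise` is a stand-in for the volume texture lookup):

```python
import numpy as np

def sample_noise(p):
    # Placeholder for the 3D noise texture lookup:
    # a smooth pseudo-random density in [0, 1].
    return 0.5 + 0.5 * np.sin(p[0] * 3.1) * np.cos(p[1] * 2.7) * np.sin(p[2] * 4.3)

def march_density(entry, direction, steps=32, step_size=1.0 / 32, density_scale=1.0):
    """Step from the ray's entry point through the unit box,
    accumulating opacity front to back."""
    alpha = 0.0
    p = np.asarray(entry, dtype=float)
    d = np.asarray(direction, dtype=float)
    for _ in range(steps):
        sample = sample_noise(p) * density_scale * step_size
        # Front-to-back "under" compositing: later samples are
        # attenuated by the opacity already accumulated.
        alpha += (1.0 - alpha) * sample
        p += d * step_size
    return alpha

# A ray entering the front face of the unit cube, pointing straight in:
alpha = march_density([0.5, 0.5, 0.0], [0.0, 0.0, 1.0])
```

Tweaking `density_scale` (and remapping the noise before accumulating) is what controls the overall look of the cloud.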
In your research you’ve used a lot of the techniques from Guerrilla’s presentation. Can you talk about the core points?
The paper by Guerrilla’s Andrew Schneider was definitely the main inspiration for my research project. They describe the entire thinking process of how they got the results they wanted and why they did it that way.
Basically, most of the techniques they tried with Houdini were way too heavy for in-game use, so they ended up using a raymarcher, and custom layered noises and gradients to build up the different types of clouds, as well as color-ID maps to spawn different types throughout the map. They used a lot of math and real-world variables such as humidity and temperature to try and make their system as realistic as possible.
I think it was interesting for me to try and follow the path they took. To see what kind of problems they ran into, and how they got around them.
How do you achieve the volume here?
These are the results of the raymarching shader with different volume textures as input. Since UE4 doesn’t support true 3D textures, a workaround is used: the volume gets sliced vertically, and the slices are stored as frames in a SubUV texture.
The wispy cloud ball and cloud box were textures provided by Ryan in his examples, the column is the standard UE4 smoke SubUV texture, and the last bunch of clouds uses my custom generated 3D “Worlin” noise.
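The lookup side of that workaround amounts to mapping a z coordinate onto frames of the atlas and blending between the two nearest slices, mimicking how a real 3D texture would filter along z. A hypothetical sketch of such a sampler (the actual shader does this with UV and frame math in the material):

```python
import numpy as np

def sample_pseudo_volume(atlas, frames_per_side, u, v, w):
    """Sample a pseudo-3D texture stored as a square SubUV atlas.

    `atlas` holds frames_per_side**2 z-slices as tiles in row-major
    order. u, v, w are normalized coordinates in [0, 1]. Blends
    linearly between the two nearest slices along z.
    """
    num_frames = frames_per_side ** 2
    frame = atlas.shape[0] // frames_per_side
    z = w * (num_frames - 1)
    z0 = int(np.floor(z))
    z1 = min(z0 + 1, num_frames - 1)
    t = z - np.floor(z)

    def read(frame_index):
        # Locate the tile for this slice, then the texel inside it.
        fy, fx = divmod(frame_index, frames_per_side)
        x = fx * frame + int(u * (frame - 1))
        y = fy * frame + int(v * (frame - 1))
        return atlas[y, x]

    return (1.0 - t) * read(z0) + t * read(z1)
```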
Using this raymarching technique you can easily tweak the overall density of the cloud and the alpha falloff to control the types of clouds you want.
The intensity of the lighting and shadows, as well as the ambient lighting, can all be tweaked in real time with several parameters.
What’s the ray marching technique used here?
The raymarching technique I linked above, by Ryan, is the most vital part of the system. Someone on Polycount linked me to his blog, for which I’m very thankful.
But aside from the raymarching there’s a bunch of other stuff going on under the hood, so let’s have a look.
First of all, we need some volumetric noise textures. UE4’s noise node in the material editor is basically exactly what we’re looking for. But you can’t really plug it into the raymarcher, and even if you could, performance would be horrible.
A lot of the parameters of the noise node aren’t exposed, so you can’t tweak them in real time. You can circumvent this by using a custom node and just calling the noise code there.
This lets us play with all the parameters in our material instance. It’s fairly heavy on performance, but it’s just for previewing purposes anyway. We can turn our custom noise setup into a material function so we can layer several of these on top of each other.
Here we see the master noise material. It layers 4 of our NoiseGenerator functions and has a ton of options to play with. You can change the scale of each layer, as well as their intensity and some overall brightness tweaks.
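Outside the material editor, the same layering idea looks roughly like this (a hypothetical sketch; `base_noise` is a cheap hash-based stand-in for one NoiseGenerator call, not the actual UE4 noise code, and the default scales and intensities are arbitrary):

```python
import numpy as np

def base_noise(p, seed):
    # Stand-in for one NoiseGenerator call: hash-based value noise.
    h = np.sin(np.dot(p, [12.9898, 78.233, 37.719]) + seed) * 43758.5453
    return h - np.floor(h)  # pseudo-random value in [0, 1)

def layered_noise(p, scales=(1.0, 2.0, 4.0, 8.0),
                  intensities=(0.5, 0.25, 0.15, 0.1), brightness=0.0):
    """Layer 4 noise octaves, each with its own scale and intensity,
    then apply an overall brightness tweak."""
    p = np.asarray(p, dtype=float)
    value = sum(w * base_noise(p * s, seed=i)
                for i, (s, w) in enumerate(zip(scales, intensities)))
    return np.clip(value + brightness, 0.0, 1.0)
```

Each octave adds finer detail at a higher scale and lower intensity, which is what gives the layered result its cloud-like structure.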
This material is absolutely not feasible for real-time use. I think it’s about 4200 instructions, so yeah, pretty massive shader.
There’s also some padding going on, to prevent obvious seams where the volume is tiled.
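The idea behind the padding, illustrated on a single 2D slice (a hypothetical sketch; the exact padding in the material may differ): each frame gets a border of texels copied from the opposite edge, so bilinear filtering near a frame border reads wrapped cloud data instead of bleeding into the neighboring frame.

```python
import numpy as np

def pad_frame(frame, pad=1):
    """Wrap-pad one slice: copy `pad` texels from each opposite edge,
    matching how a tiling volume would continue past the border."""
    return np.pad(frame, pad, mode="wrap")
```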
Here the texture goes from 2D to pseudo-3D. It steps through your noise volume and stores a frame at that z-value as part of your SubUV texture.
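The slicing itself is straightforward: walk through the z-slices of the volume and lay them out as tiles in a square atlas. A hypothetical NumPy equivalent of the in-engine capture step:

```python
import numpy as np

def volume_to_subuv(volume):
    """Pack a (d, h, w) volume into a 2D SubUV atlas, one z-slice per frame.
    Assumes the slice count d is a perfect square so the atlas is square."""
    d, h, w = volume.shape
    frames_per_side = int(round(d ** 0.5))
    assert frames_per_side ** 2 == d, "slice count must be a perfect square"
    atlas = np.zeros((frames_per_side * h, frames_per_side * w), volume.dtype)
    for i in range(d):
        fy, fx = divmod(i, frames_per_side)
        atlas[fy * h:(fy + 1) * h, fx * w:(fx + 1) * w] = volume[i]
    return atlas
```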
The tweakable material instance for ‘fast’ previewing.
If you’re happy with your final texture, you can bake it down to actual textures, which we can then plug into the raymarcher.
That’s basically just tweaking some settings and compiling the baking Blueprint.
So these are the baked textures. On the bottom there are three layered Worley/Voronoi noises at different scales. On top there’s a custom blend of Perlin and Worley (dubbed “Worlin” by Guerrilla).
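The Perlin-Worley blend is usually built with the remap helper from Schneider’s presentation; one common variant uses the inverted Worley value to reshape the low end of the Perlin field, giving it a billowy, cellular look. A hedged sketch, with random arrays standing in for the baked noise textures, and remap ranges that may differ from Guerrilla’s exact ones:

```python
import numpy as np

def remap(value, old_min, old_max, new_min, new_max):
    # The remap helper described in the Guerrilla presentation.
    t = (value - old_min) / (old_max - old_min)
    return new_min + t * (new_max - new_min)

def perlin_worley(perlin, worley):
    """Blend Perlin and Worley noise ("Worlin"): low Worley values
    lift the low end of the Perlin field toward 1."""
    return np.clip(remap(perlin, worley - 1.0, 1.0, 0.0, 1.0), 0.0, 1.0)

# Stand-ins for baked noise textures (real ones would be tileable):
rng = np.random.default_rng(0)
perlin = rng.random((64, 64))
worley = rng.random((64, 64))
blended = perlin_worley(perlin, worley)
```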
And this beast of a shader is the final raymarching shader.
The cube setup in the beginning slices the primitive up and outputs where the ray entered the cube. After that, the entry position of the ray gets scaled and/or offset and fed through the texture “de-padder”.
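Computing where a ray enters an axis-aligned box is a standard slab test; a generic sketch (not the engine’s actual cube setup material):

```python
import numpy as np

def ray_box_entry(origin, direction, box_min, box_max):
    """Return the parameter t where the ray first enters the box,
    or None if it misses. Standard slab method."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    with np.errstate(divide="ignore"):
        inv = 1.0 / direction  # inf on zero components is fine here
    t0 = (np.asarray(box_min, dtype=float) - origin) * inv
    t1 = (np.asarray(box_max, dtype=float) - origin) * inv
    t_near = np.max(np.minimum(t0, t1))
    t_far = np.min(np.maximum(t0, t1))
    if t_near > t_far or t_far < 0.0:
        return None  # ray misses the box, or the box is behind it
    return max(t_near, 0.0)  # clamp: the camera may be inside the volume
```

The entry point `origin + t * direction` is what then gets scaled, offset, and fed into the texture lookup.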
Finally everything gets fed into the custom shader where the real magic happens.
The shader goes through a couple of for-loops (their size, and with it the overall performance, is determined by Steps and Shadowsteps) and outputs the final density and light information in a float4.
These two get split and have their contrast tweaked separately (if necessary), after which they get fed into the Emissive and Opacity inputs of the material attributes.
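The loop structure, with Steps samples along the view ray and Shadowsteps samples toward the light at each of them, can be sketched like this (hypothetical Python; `sample_density` stands in for the volume lookup, and the constants are arbitrary):

```python
import numpy as np

def march(entry, view_dir, light_dir, sample_density,
          steps=16, shadow_steps=4, step_size=0.0625,
          density_scale=8.0, shadow_density=8.0):
    """March through the volume and return (light, alpha): the two
    halves of the shader's float4 output."""
    entry, view_dir, light_dir = (np.asarray(v, dtype=float)
                                  for v in (entry, view_dir, light_dir))
    alpha, light = 0.0, 0.0
    p = entry.copy()
    for _ in range(steps):  # outer loop: Steps
        density = sample_density(p) * density_scale * step_size
        if density > 0.0:
            # Inner loop: Shadowsteps, marching toward the light to see
            # how much cloud is shadowing this sample.
            occlusion = sum(sample_density(p + light_dir * step_size * (j + 1))
                            for j in range(shadow_steps))
            transmittance = np.exp(-occlusion * shadow_density * step_size)
            light += (1.0 - alpha) * density * transmittance
            alpha += (1.0 - alpha) * density
        p += view_dir * step_size
    return light, alpha
```

Skipping the inner loop when the density is zero is also why empty regions of the volume are comparatively cheap.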
What are the next steps for your development?
This was mostly a research project for my college to find out if volumetric clouds were feasible in UE4, so I didn’t have the time or resources to push this as far as I wanted.
But there are a couple of fixes and features I’d like to implement if I ever find the time.
The tiling doesn’t work all that well at the moment, and the shader isn’t really optimized for huge skies: the lighting gets calculated for one tile and just gets repeated when tiled.
There are also artifacts at the edge of the cloud volume, which are probably caused by the tiling issues I mentioned.
Z-depth is also an issue: the clouds currently render on top of everything.
I also wanted to layer more noise octaves and gradients on top to get more realistic results; currently only the red channel of my RGBA texture gets used.
Assembling everything into an easy-to-use blueprint with a day/night cycle and correctly colored light/shadows would also be interesting.
Also, Ryan released a plugin for UE4 a while ago which contains all his shader and BP wizardry. So if I continued to work on this, I’d probably make it compatible with his plugin.
I’m also gonna link some sources and useful reading material here:
- Andrew Schneider, Nathan Vos – Advances in Realtime Rendering
- Ryan Brucks – Ray marched Heightmaps
- Ryan Brucks – Getting the Most out of Noise in UE4
- Ryan Brucks – Tiling Within subUV or Pseudo Volume Textures
- Ryan Brucks – Authoring Pseudo Volume Textures
- Ryan Brucks – Creating a Volumetric Raymarcher
- Ryan Brucks – Custom Per Object Shadowmaps
- Bitsquid – Volumetric Clouds
- UE Forums – Volume Rendering in UE4
- Cloudwise – Cloud Type Breakdown
Lars Laukens, 3D Artist
Check out his Gumroad page here.