Juuso Voutilainen speaks with us about his AI-assisted creation workflow and how Unity ArtEngine has become the backbone of his creative work.
Hi! My name is Juuso Voutilainen and I am a CG generalist. My industry experience ranges from assistant editor to motion designer, with some freelance VFX work on the side.
Recreation of planet Earth for a recent documentary:
Nowadays, I have shifted more toward freelancing, selling textures, 3D models, and HDRIs on various sites. My current work focuses mostly on building entire scenes, where I try to compose elements and hopefully tell a story using the photographic techniques I have gained over the years.
One of the core themes in this article is Unity's AI-assisted creation tools, like ArtEngine and the Scan Beta. I started using ArtEngine a good while ago. Features like Seam Removal, Up-Res, PBR maps generation, and Mutation quickly became the backbone of my creative work, since I work with a lot of custom materials. Recently, I got the opportunity to try Unity Scan Beta, which I think is a very handy and versatile extension of the Unity AI-assisted creation ecosystem.
Usually, my themes are either fantasy, with many natural elements, or post-apocalypse, with a lot of industrial elements. These two digital art styles correlate with my two main photography styles: nature and urban exploration.
When I'm working on a fantasy scene, my workflow usually begins with building the character. As a lifelong fan of D&D, Dragonlance, and The Lord of the Rings, I draw a lot of inspiration from these and from various magnificent artists. I like to browse ArtStation to see what kind of cool stuff people are doing, and then start my own interpretation. The character is usually a mixture of ZBrush and 3ds Max modeling. Unwrapping is usually done in ZBrush as well, since it's handy to do alongside retopology, which is also built into the software. Quixel SUITE 2 and ArtEngine have been my choices for texturing.
When the character is finished, I do a rough form of the scene, then place and pose the character. I like working with poses, especially since I think they are a good way of communicating the feelings and atmosphere I try to convey in the story I have in mind for the scene. Texturing and environment work come next. I like to use World Creator for large shapes like mountains, and Megascans models for smaller details and, usually, for vegetation.
The environment is usually textured from my own material library and from premade materials that come with World Creator and Megascans assets, though they usually go through some form of look dev. Lighting is a big part of my scenes, and as with photography, I usually try to play with natural light sources. I have a custom library of HDRIs that work as a base for setting the scene. I add sunlight to complement the HDRI, plus some direct lights to highlight elements in the scene, the character, for example, or a hero asset. After that, detail is added to finish the scene. I like to use fluid volumes for mist to separate the elements and give an extra atmosphere boost.
For the post-apocalypse scenes, for example, an abandoned factory interior, I like to block out a scene using some artworks from other artists as reference or my own photographs. When the scene is blocked out, I start adding my assets.
My photograph from Montreal, from the Incinerator 3 abandoned factory:
I like to use as many of my own assets as I can, combined with premade assets like Megascans. In these scenes, I mostly use my own material library. I use a lot of repetitive elements in these scenes, like cardboard boxes, pallets, and structural components. Here, one of my favorite ArtEngine features comes in handy: Texture Mutation, which varies the assets' look without pushing them too far apart from each other. Working in this industry is fascinating because it's evolving all the time, with more and more innovative solutions coming along, for example in AI-assisted creation. When I work on my scenes, I need to grasp multiple different aspects of 3D, which makes the practice of digital art a continuing journey of discovery.
Derelict Village Project
My latest project is called Derelict Village; it depicts a fictional village called Rautalampi. It is part of a sci-fi movie that is now in the post-production phase. It details a world that has gone through an apocalyptic event of mysterious nature. The world is splintered, nature is largely contaminated, and society has broken down into city-states, with whichever faction has the most resources holding power. Rautalampi is a makeshift village trying to live outside the grasp of these city-states.
Here is a list of software I used during the creation of Derelict Village:
- Unity ArtEngine
- Unity Scan Beta
- Autodesk 3ds Max
- iCube R&D Group MultiScatter
- JangaFX EmberGen
- Chaos Group V-Ray
- Agisoft Metashape
- Foundry MARI
- Quixel SUITE 2
- Adobe Photoshop
As I described above, the scene started from looking at concept pictures. I looked through Google Images of derelict cabins and shacks and started modeling them. Modeling was done in 3ds Max.
Reference images used for the Derelict Village scene:
There is an abandoned house and a barn near where I live, which were very fitting for texture acquisition for the scene. The weathered wood was exactly what I was looking for, since in the scene I wanted to convey an idea of makeshift houses, largely unkempt, some already collapsed. For the texture acquisition, I used the Unity Scan Beta. It is a mobile application for capturing and generating PBR materials right on the phone, and it proved to be very handy and fun to use. The application provides guides that make it easier to capture materials that can be made seamless later on. The format the application produces can be dropped directly into ArtEngine as a ready material, making it very easy to further edit and refine.
Collapsed barn near my home, used for texture acquisition utilizing Unity Scan:
Some of the materials used in the scene, generated by Unity Scan Beta:
The trash piles on the ground are custom photogrammetric models. I did some repainting on them in Foundry MARI and used some additional models generated with the handy Debris Maker to build them into dioramas, which can easily be dropped into different scenes. The photogrammetry models were processed through ArtEngine, using Albedo/AO/Gloss/Roughness generation.
Photogrammetry maps generation in Unity ArtEngine:
Photogrammetry assets used in the Derelict Village scene:
All of the tree trunks in the scene use a slightly modified tree bark texture acquired with the Unity Scan Beta. It was mutated to a higher resolution with an ignore mask to remove some of the details that could give away the repetition of the texture. The ground is populated with Megascans assets, and the ground texture is from my material library.
Unity ArtEngine interface with the mutated tree bark texture:
Whenever I'm working on my scenes, I'm always attempting to compose them in a way that there is a contextual meaning and a sense of temporal progression. For example, in this scene, the building to the front left in the wide render could be a machinery repair shack, hence the piled machinery out front and other assorted items. The dismantled truck frame that can be seen in the other render links to this. The story could be the people in the village tried to build a working vehicle from the truck frame due to the scarcity of resources, or maybe they dismantled the truck to get the parts for other use, like to fashion a mobile generator from the engine.
Another very good way to increase a scene's believability is to scatter various equipment around the buildings or in their immediate vicinity. Oftentimes, a building's surroundings are populated with, for example, wood planks, roofing materials, and similar items. This helps convey that there has been interaction with the building: maybe leftover equipment from construction, or work that was never finished for some reason.
Materials Transport Scene
The Materials Transport is a scene within a fictional abandoned research complex, Sector 8 of the Kallila Research Facility, set in the same world as the derelict village of Rautalampi. Kallila was notable for its invention of TOUKO, a device capable of harnessing seemingly endless amounts of energy, bridging the gap between science and mythology. In the old tales of Finland, there was a device called Sampo, which produced resources seemingly without end. About 90 percent of the models and textures are from my own material library, processed through ArtEngine to make them seamless and to enlarge them. A large portion of my material library was 4K, so using ArtEngine’s Compose Material and Up-Res nodes, I could upgrade the entire library to 8K. Some Megascans assets are used in the scene, namely the pipes running along the walls and the roof, and some decals. The AO map generated by ArtEngine was used extensively to bring out a sense of depth in the texture maps, and also as a dirt map to drive dirt into the crevices of the surfaces, helping to achieve a more realistic look. The damage decals seen on some of the walls were matched to the wall texture using the Color Transfer node, which takes the guesswork out of matching colors by hand in, for example, Photoshop.
Matching color of assets using the Color Transfer node:
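The AO-as-dirt-mask idea can be sketched independently of any particular tool: invert the occlusion value and use it as a blend weight, so that crevices (low AO) receive more of the dirt color. This is a minimal per-pixel sketch in plain Python, with flat color lists standing in for texture maps; it illustrates the concept only, not ArtEngine's actual implementation.

```python
def apply_ao_dirt(base, dirt, ao, strength=1.0):
    """Blend a dirt color into a base color wherever the ambient-occlusion
    value is dark. `base` and `dirt` are lists of (r, g, b) tuples in 0..1;
    `ao` is a list of floats in 0..1 (1 = fully open, 0 = deep crevice)."""
    out = []
    for (br, bg, bb), (dr, dg, db), occ in zip(base, dirt, ao):
        w = (1.0 - occ) * strength  # more dirt where occlusion is stronger
        out.append((br * (1 - w) + dr * w,
                    bg * (1 - w) + dg * w,
                    bb * (1 - w) + db * w))
    return out
```

A fully open surface (AO = 1.0) keeps its base color untouched, while a fully occluded crevice (AO = 0.0) takes on the dirt color completely, which is exactly the grime-in-the-cracks look described above.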
Building a scene like this starts with blocking out the rough shape of the space. I usually like to add larger objects, like the crates and cardboard boxes, to get the overall balance and composition right. At the same time, I texture the rough blocked-out forms (walls, floor, ceiling) and maybe add some larger rough shapes like pipes and air ducts. I have found that I use a similar compositional method in my renders as I do in my photography of abandoned buildings. If this is not something one does, studying such images or other renders gives good pointers on how to start building a believable scene.
My photograph of the Silo 5 complex taken in Montreal, used as a reference for the Materials Transport scene:
Materials Transport final render:
Using or simulating a wide-angle lens causes perspective distortion, where elements nearer the camera appear larger and objects farther away appear smaller. This distortion can be used to convey a sense of space by positioning objects both near and far. It helps to place objects whose real-world size nearly everyone roughly knows, and which are roughly consistent in size relative to each other. In this case, the wooden crates and the cardboard boxes act as visual cues for the size of the space; in natural scenes, trees, for example, work very well for this.
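The "nearer appears larger" effect follows directly from the pinhole-camera model: projected size is proportional to real size divided by distance. A minimal sketch, using purely hypothetical camera parameters (a 24 mm lens on a sensor 24 mm tall; none of these numbers come from the scene itself):

```python
def projected_height_px(real_height_m, distance_m,
                        focal_mm=24.0, sensor_h_mm=24.0, image_h_px=2160):
    """Height in pixels of an object at a given distance, for a simple
    pinhole camera. Parameters are illustrative, not from the article."""
    height_on_sensor_mm = focal_mm * real_height_m / distance_m
    return height_on_sensor_mm / sensor_h_mm * image_h_px

# A 1 m crate at 2 m fills four times as many pixels as the same crate at 8 m,
# which is what sells the depth of the space.
near = projected_height_px(1.0, 2.0)
far = projected_height_px(1.0, 8.0)
```

Because the ratio depends only on the distances, identical objects placed at different depths give the viewer an immediate, intuitive read of how deep the space is.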
The elements in the scene are arranged to lead the viewer's gaze into the depths of the image, toward the corridor, hopefully evoking the thought, “where might it lead?” The photograph of Silo 5 is composed in a similar way.
Industrial spaces often offer leading lines for composing images this way, in the form of visible support structures and pipes. In the photograph above, a good example is the pipe running close to the roof, deeper into the image. A similar method is used with the pipes running along the roof in the digital work, though more subtly. A pipe also runs from the pumping station on the left of the image to a similar station on the right, in the corridor, offering a visual pathway across the virtual space. In the digital work it is complemented by other geometric lines, whereas in the photograph it acts as the primary visual guide, along with the feeder tube running in from the other direction, leading the viewer's gaze. There can be distinct visual elements at the sides of the frame, but it helps to balance the composition and the visual guidance so that these elements do not overpower the larger geometric forms.
Another thing I have found to work in digital compositions is conveying a sense of temporal progression. In the render above, there are cardboard boxes with delivery notes attached, ready to be loaded onto the pump truck, but for some reason left undelivered. Hopefully, this makes the viewer ask “why” and wonder about the story behind the space's derelict state. When a contextual meaning like this can be assigned to a digital render, it appears more lifelike by virtue of having qualities exhibited in real abandoned spaces.
My photograph of an abandoned train carriage in Lapland, Finland:
The same idea is exhibited in this photograph I took of a Christmas-themed train carriage, a failed venture that probably still stands in an abandoned roundhouse. The packets and letters sitting undelivered at the counter evoke the questions of “what happened, and where were the goods supposed to be delivered?”
All of the cardboard boxes use a single texture mutated with ArtEngine to give realistic variety.
When working with industrial spaces, there are some elements that are basically “standard issue”: pipes, air ducts and other ventilation elements, electrical wiring, electrical boxes, and various signs. These are commonplace in real industrial spaces, so including them makes it easy to draw a connection to a space that could actually exist. Whenever working in 3D, there is also an “error of perfection”: shapes are by default perfectly smooth and edges infinitely sharp. I would say that reducing this error of perfection increases the believability of a render.
Here the capabilities of ArtEngine come into play: real concrete walls, for example, have no sharp divides, and the Seam Removal node intelligently removes such artifacts. Industrial spaces often have repetitive elements, whether countless supporting columns or the aforementioned cardboard boxes. Using the same texture for every instance of such a repetitive pattern quickly breaks the feeling of reality, since even the most precisely produced items have differences. On the other hand, it can be too distracting to describe these features with texture maps that are visually too far apart from each other.
The Mutation node solves this problem: it can take any source material and quickly produce multiple variations of it. The Ambient Occlusion node creates highly accurate results and can also be used as a map to drive dirt into the crevices of a surface, further helping to achieve photorealistic visuals. It is also important to work from good source material, like photos, because one can see how entropy expresses itself in abandoned spaces: where dirt accumulates, how metal rusts, and so on. These are not things an untrained eye can explicitly point out, but it will notice if they are lacking. I believe humans have a subconscious understanding of how something is “supposed” to look. Nailing this “supposed” look, along with the methods described above, establishes causal relations that help the viewer engage emotionally with the image. When emotional engagement is achieved, images become more real because they feel more real.
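Conceptually, a mutation is a seeded, repeatable variation of a source texture: the same seed always reproduces the same variant, while different seeds give subtly different ones. ArtEngine's actual algorithm is far more sophisticated, but the idea of deterministic variation can be illustrated with a toy grayscale example:

```python
import random

def mutate_texture(pixels, seed, value_jitter=0.05, noise=0.03):
    """Toy stand-in for texture 'mutation': produce a subtly different
    variant of a grayscale texture (list of floats in 0..1) via a seeded
    global value shift plus per-pixel noise. Not ArtEngine's algorithm,
    just an illustration of seeded, repeatable variation."""
    rng = random.Random(seed)
    shift = rng.uniform(-value_jitter, value_jitter)  # whole-texture shift
    return [min(1.0, max(0.0, v + shift + rng.uniform(-noise, noise)))
            for v in pixels]
```

Because the variation is seeded, a crate texture "mutated" with seed 7 looks identical every time the scene is rebuilt, which is what makes this kind of variation practical for repetitive props.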
Materials Transport final render, another viewpoint:
The Steampunk Dragon was realized through kitbashing. I combined models from my previous projects and laid them over a dragon model I had created. I also took some inspiration from the amazing, similarly named Steampunk Dragon by Kerem Beyit.
I created this model of the dragon and modeled the Steampunk Dragon using it as a base. I laid out the kitbashed elements so that they would respect the form of the base dragon as closely as I could.
I wanted to see how far I could push a single material to texture a complex model like this. Using Unity ArtEngine’s Mutation feature, I mutated a metal surface several times. These mutated materials served as a base when I fed them into Quixel SUITE, where I proceeded to texture the dragon. I wanted to use the SUITE’s curvature-detecting features to get some dirt effects on the model for added realism. By building a semi-procedural shader with a dirty ground texture overlaid on the primary texture as a second material, I could create a lot of small detail on the dragon. Since the dirty ground texture was made seamless in ArtEngine, the scale of the detail layer could easily be adjusted.
Source material for mutation:
Detail layer material:
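The reason a seamless detail layer can be rescaled freely is that tiling it just changes how the UV coordinates wrap; there is no border to betray the repetition. A small sketch of wrap-around sampling, using a flat row-major grayscale list as a hypothetical stand-in for a texture:

```python
def sample_tiled(tex, w, h, u, v, scale=1.0):
    """Sample a seamless texture (row-major list of w*h values) at UV
    coordinates, tiling it `scale` times across the surface. The modulo
    wrap is what lets a seamless detail layer be rescaled at will."""
    x = int(((u * scale) % 1.0) * w)
    y = int(((v * scale) % 1.0) * h)
    return tex[y * w + x]

# Doubling the scale just makes the UVs wrap twice as often;
# with a seamless source, no edge ever becomes visible.
tex = list(range(16))  # 4x4 stand-in texture
value = sample_tiled(tex, 4, 4, 0.125, 0.25, scale=2.0)
```

In a shader this is the same operation as multiplying the UV set by a tiling factor, which is why adjusting the detail-layer scale is essentially free once the texture is seamless.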
The lighting of the scene was a single light from above plus an HDRI for ambient lighting. The idea was to create lighting that would bounce off and highlight the hard surfaces of the dragon, while letting part of the model stay darker since the light came straight from above. The stark contrast with the clear, hard geometric lines hopefully creates a formidable atmosphere. The background is black. Some soft forms could have been added there, but anything too complex would have made the dragon hard to read and the scene too noisy in terms of visual readability. Whenever I'm creating scenes, I think about believability, making them as if they could exist in the real world, even something like a mechanical dragon. My primary idea was to build the model from the parts I had, but I also tried to implement fitting features wherever I could. For example, there are gears in the eye that would support eye movement, and gears for the jaw. There is a radial component that could work as an auditory device for the dragon. There are components like shock absorbers and hydraulic tubes that could support the movement of the neck, and the tongue is segmented so that it could move more organically. Along the head runs an exhaust valve that would expel the heat produced by all the dragon's components.
Closeup render; left is without the semi-procedural shader, right is with the shader on: