
How to Develop Effective Software Tools

Technical Artist Paul Ambrosiussen talked about how he builds effective game development tools for different software solutions.

Technical Artist Paul Ambrosiussen develops useful tools for different software solutions, including Houdini, UE4, and Maya, to optimize workflows for game developers and artists. In this article, he kindly shared his tool development process using the example of the Detail Mesh Tool for Houdini.

Introduction

Hi there! My name is Paul Ambrosiussen and I’m a 3D Technical Artist focusing on tool development to support effective art pipelines and improve tool and workflow quality for the artists I work with around the world. I studied International Game Architecture and Design at NHTV University of Applied Sciences in Breda, Netherlands. While there, I found my passion for helping others create amazing things in a better, faster, and more flexible way, which is exactly what I am currently doing at SideFX with Houdini. My responsibilities include, among other things, creating tutorials, giving live lectures, attending events, and presenting custom workshops to customers.

In this article, I’d like to run you through the process of how I develop tools, designed to optimize workflows for game developers and artists of all types. I’ll be using a specific tool – the Detail Mesh Tool – as an example of the typical process. 

Note: the full write-up is soon to be published online, so keep an eye out for it!

I was recently invited by GDE to give a masterclass on something that would allow developers to create content much faster using Houdini. After some brainstorming, I decided it would be cool to build a tool that lets you quickly convert a low-poly blockout into a mesh that already has a lot of detail. Depending on your requirements, it could either be your final mesh or serve as a starting point to build from (while already getting you 80% of the way there). The event attendees included developers from some of the top Polish game development studios. Since I noticed a lot of interest in this type of workflow, I thought a little breakdown might be interesting.

Ideas / Workflows

So the tool started out as a way to quickly project geometry onto other geometry but turned out to be something much more. I initially got the idea while talking to an artist at Naughty Dog, who asked if it was possible to do something like that in Houdini. The first thing I had to do, of course, was build a cool prototype. After some brainstorming, it essentially evolved into the following:

Imagine you have a Low-Poly Wall object, and you want to make it look like it has been built out of bricks. Some typical approaches could be:

  1. Sculpt it, and then decimate.
  2. Kitbash pieces of geometry onto the low-resolution blockout.
  3. Apply a heightmap to the blockout and use adaptive tessellation.

So what if we could combine techniques 2 and 3 into a more procedural approach? That’s what the tool we are building is going to do for us. There are essentially two things we need to start with, and the tool will do the rest for us.

In the image above we can see roughly what the tool will do for us. It essentially combines the two pieces of geometry and outputs the result we see on the left. Since we’re working in Houdini, we can swap out either of the two at any given time, and the output will automatically update. This also means we can keep tweaking how the input geometry looks until the very last step, where we export it to our game engine. Unless we use Houdini Engine… but more about that later.

After I showed this to some people, the smart people at Quixel also came up with the idea of using this in combination with photogrammetry for world building. Imagine you’re scanning a landscape or a big building with a drone. You wouldn’t be able to capture all the micro detail with its camera, but you would definitely be able to capture the larger shapes. So after meshing that larger detail, you could project a scan of micro details onto that rougher mesh to get something that looks believable.

Creating A Detail Mesh Tool

So now that we roughly understand what the tool could do for us, let’s get into how it works. Before we do that, I would like to talk a bit about how I approach building bigger tools like this one. It’s important to have a clear design for what you are going to build; otherwise you might end up adding features that cost a lot of time to build but aren’t required for that specific tool.

Design Document

Before you start working on bigger tools, it helps to clearly define what it is you need. This will help you break the work down into smaller tasks. I wrote the following down before starting:

Overview

The overview should just be a small summary of what the tool does, why you need it, and who will be using it. It will not only remind you of what the tool does but also lets you reuse that text in any documentation you might write for it. This should be no longer than 100 words.

User Stories

The user stories help you understand what a user wants to do with the tool. It’s important to understand that the user stories do not directly define the features of the tool. You can write down any crazy ideas you might have in the following format:

As a user, I…

  • Would like to be able to feed the tool multiple “Mesh Textures”.
  • Would like to be able to convert blockouts into more detailed meshes.
  • Would like to be able to paint the intensity of projected Mesh Textures.
  • Would like to be able to use the tool inside Unreal Engine and Unity.

Note how I underlined some key elements in the user stories.

Features

The features describe what the tool actually needs to do. This is a list compiled from elements taken from the user stories, plus additional requirements. Not every user story will end up in the feature list! Be sure to prioritize the features so that you can get to your prototype as efficiently as possible.

Research

Since we now know what features we need, we can quite easily figure out which aspects of the tool we might need to learn more about. It could be that you don’t yet have the required knowledge to build certain features. That knowledge is what you will try to obtain in this step.

So for this tool, I decided to do most of it in VEX since it’s really powerful and will clearly show what the different elements of the algorithm do for us. Note that I could have also gone with a visual, node-based programming approach. It really depends on what you like using.

Looking at the core concept, we essentially need to project geometry onto each other regardless of its orientation. How do we do that? Let’s try and build that functionality in a simplified example.

 

On the left, we see before projection, and on the right, we see after projection. The right is what we want to achieve since it will allow us to project any geometry onto something else.

To do this, we need to calculate something called “Parametric Coordinates”. The xyzdist() function in VEX allows us to do that. Find more info here.


First, we are going to make sure both our leaf and grid mesh are aligned on top of each other. We want the leaf mesh to somewhat float above our target (the grid). Once we have that, we are going to cast a ray for every single point on our leaf (visualized with the red arrows). The ray will start at the point position and will keep going downwards until it hits our grid. Once it does that, it will calculate and store the “ParametricUV” attribute data seen in the spreadsheet.
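Here is a minimal Point Wrangle sketch of that step, assuming the leaf points come in on input 0 and the grid on input 1; the attribute names hitprim and parametricuv are just my own labels:

    // Point Wrangle, run over the leaf points (input 0).
    // The target grid is plugged into input 1.
    int    hitprim;   // primitive on the grid closest to this point
    vector hituv;     // parametric coordinate on that primitive

    // xyzdist() returns the distance to the closest surface position on
    // the grid and writes out the primitive number and parametric coords.
    float dist = xyzdist(1, @P, hitprim, hituv);

    // Store the results so a later node can look the position up again.
    i@hitprim      = hitprim;
    v@parametricuv = hituv;
    f@hitdist      = dist;

Note that xyzdist() finds the closest surface position rather than shooting a strict downward ray; with the leaf floating directly above the grid, the result is effectively the same for this illustration.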

Parametric Coordinates are similar to UV coordinates but cannot be modified, and always have a hardcoded value (see image above). This allows the mesh to be deformed without those “surface positions” changing. So as soon as we know the Parametric Coordinate at which we want to “glue” something to the original target mesh, we can later look up where that surface position is now located in worldspace. We are going to exploit that in our Detail Mesh logic.

Now that we have our Parametric Coordinate or “surface location”, we can look up where that is in Worldspace for our deformed mesh. With the primuv() VEX function, we can ask Houdini to give us the corresponding World Location for the given Parametric Coordinate. 
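A sketch of that lookup, assuming the deformed target mesh is plugged into input 1 and the attributes stored by the previous wrangle are still present:

    // Point Wrangle, run over the leaf points (input 0).
    // The deformed target mesh is plugged into input 1.
    // primuv() evaluates the "P" attribute of the stored primitive at the
    // stored parametric coordinate, i.e. where that surface location is now.
    @P = primuv(1, "P", i@hitprim, v@parametricuv);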

Here I am visualizing where a given Parametric Coordinate is located in World Space. Regardless of what we do with our mesh, it will always be on the exact same surface position of our target mesh. Very useful for our Detail Mesh, since we will be folding our mesh.

Development

So now we know how to project one piece of geometry onto another when both are flat and stacked above each other. But how does that work with a full 3D mesh? It’s pretty easy with a grid and a leaf, but it gets more difficult with something like a wall. There is actually a pretty clever thing we can do to “flatten” our wall object: “unfolding” the mesh into UV space.

This can be done fully procedurally by simply telling every point/vertex that its world position will now be equal to its UV value. Tadaa, easy unfolding.
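As a minimal sketch, assuming the uv attribute has already been promoted from vertices to points (e.g. with an Attribute Promote SOP), and with a rest attribute added here purely so the original position isn’t lost:

    // Point Wrangle, run over the wall points.
    v@rest = @P;                      // remember the original worldspace position
    @P     = set(@uv.x, @uv.y, 0.0);  // move every point to its UV location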

After we have unfolded our mesh, we can overlay it with our tiling brick mesh. In order to be able to properly “fold” it back together, we cut out our “paper” shape and discard the rest.

Once we have that, we can apply our previously explored leaf-and-grid logic and calculate our Parametric Coordinates using xyzdist() again. Once we have those, we figure out where that surface position is located on our original wall object (like the deformed grid in our example) and calculate the worldspace position for every vertex’s Parametric Coordinates using primuv(). All we need to do then is set our vertex positions to that, and we have our brick mesh in the shape of our wall.
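Put together, the fold-back step could look roughly like this wrangle, assuming the brick mesh is input 0, the unfolded wall is input 1, and the original wall is input 2 (this sketch glues the points straight onto the surface and leaves out re-applying the height offset along the surface normal):

    // Point Wrangle, run over the tiling brick points (input 0).
    int    prim;
    vector paramuv;

    // Where does this brick point sit on the unfolded (flattened) wall?
    float dist = xyzdist(1, @P, prim, paramuv);  // distance is not needed here

    // Look up where that same surface location lives on the original,
    // folded wall and move the brick point there.
    @P = primuv(2, "P", prim, paramuv);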

Test Case

To test the tool I decided to build a realistic use case. While on vacation in Saarburg, Germany, I took a nice picture of a castle and decided to use the tool on it.

On the left, you can see the original picture I took. Next to it is a really low-resolution blockout I threw together in Houdini using nothing but a cylinder and a couple of boxes. After that, I just gave it an automatic UV-Unwrap and fed it to the tool. The raw result can be seen in the third image. The fourth image just has some vertex color, multiplied with some basic calculated Ambient Occlusion.

Houdini Engine

Houdini Engine is really powerful. It brings the power of Houdini into other applications such as Maya, 3ds Max, Unity, Unreal, distributed farms – and even proprietary applications using our API. You can build a cool tool inside Houdini, and Houdini Engine will make it work the same way in all those different applications. For example, imagine you’re using Unity for your project, and have built a lot of custom Unity tools using C#. What if one day your studio decides to switch to Unreal? Then all the tools you had made specifically for Unity become somewhat useless since they don’t work in Unreal. If you had made those tools using Houdini and Houdini Engine, they would still work regardless of what application you are going to use in the future (assuming HEngine works there).

Secondly, HEngine allows you to keep your content “live” and procedural all the way up to shipping your product. An example would be generating a road on a game landscape. A typical non-procedural workflow looks like this: Level Design builds a blockout and sends it to the Art Department to make it look pretty. Once that’s done, it goes to the QA Department for testing. If it then turns out the road needs to be moved somewhere else, the Art Department might need to completely redo it, after which it goes back to QA again. And that cycle keeps repeating until it’s either perfect or time is up.

The procedural approach using HEngine would allow the Level-Designer to build this road using a spline in their game engine, and HEngine would generate the art for it using the rulesets the artists have defined. Once that spline moves, the art gets automatically adjusted to fit that change. This gets rid of the artist step after the initial art pass, reducing the amount of time required to iterate on a design. But if you want to know more, check out the HEngine page describing it in full detail here.

Or watch this great video by Robert Magee about using Houdini Engine in Unity:

SideFX Houdini GameDevToolset

The Game Development Toolset for Houdini is a series of tools designed to assist the workflows of, mainly but not exclusively, game developers. It contains Shelves, Digital Assets, Custom Desktops, and Scripts. It’s being developed by Luiz Kruel, Michael Lyndon, and myself. Be sure to check Luiz’s and Michael’s SideFX profiles, as they have some really awesome tutorials and presentations there.

Marmoset Exporter – Create MViews or renders using Toolbag straight from Houdini

Every game has unique challenges for creative content to overcome, but most of those revolve around the same old techniques: texture sheets for explosions, rigs for destruction, vertex colors and UV channels for simple shader motion, flowmaps, LODs, and more. So we’ve been making tools to help optimize those common workflows, so that game developers and artists can focus on the creative side and let most (if not all) of the technical challenges be handled by the toolset.

OSM Tool – Import map data such as streets and building profiles with ease

We’ve seen massive adoption of this toolset by game studios around the world. They can really feel the benefit of using Houdini for their day-to-day work thanks to some of the advancements SideFX has been building over the years. Our GameDevToolset is also considered a really important factor since it provides ready-to-use solutions – for free! Everything our Games team builds is also open source, so you can go in and learn from it or even modify it to suit your exact needs. We also always release tutorials for our tools, which can sometimes be found on 80.lv as well!

To get started I recommend watching this GDC talk where we cover some of the tools, including how to install them:

About the Artist

 

In addition to having many of his games showcased at important industry events such as the Nordic Game Jam, Game Developers Conference, and AMAZE Festival, Mr. Ambrosiussen has also been invited to lecture at these events, as he did at the 2018 Game Developers Conference (GDC), the world’s largest professional game industry event, where he spoke about incorporating high-quality creative experiences and visual effects into games.

Check out some of the awesome tools developed by Paul for Houdini:

Paul Ambrosiussen, 3D Technical Artist at SideFX Software
