Technically, the artist needs to (and does) credit the author of the artwork he referenced, and only has to mention what the character is and where it's from. Given that, this is a 3D/gaming/technical website that does not (and probably shouldn't) reflect on the circumstances of the character itself, but concentrates on creation and the techniques used in creation. The name of the character is referenced, but nowhere on the original art is the name Sam Riegel mentioned. As much as the critter community is nice and welcoming, this "CREDIT THIS OR CREDIT THAT" part irritates me. IMHO, credit is given where credit is due. This 3D model was made for learning purposes only, whereas the original art is being sold. Instead of commenting "GIVE CREDIT", comment "COOL ART OF SAM'S CHARACTER" or "GREAT CRITICAL ROLE ART". All that said, this is an amazing rendition of the original artwork of the Critical Role character. As a critter, I love both this piece and the idea of another critter being so talented! Peace, a member of the wonderful critter family.
You need to make it clear that this is an interpretation of someone else’s character and credit them (Sam Riegel, from Critical Role).
As great as this is, it’s not actually “your character”, so you should really credit Sam Riegel of Critical Role, who created this character, and make it clear this is your interpretation of it, because you make it sound like it was all your idea.
Digital artist Thomas Dotheij gave a beautiful talk about his workflow. With procedural technologies, he manages to create all sorts of objects: from landscapes and fur to bacon and smoke.
I’m a 3D generalist from Belgium. I studied there and worked for quite some time after I finished my studies. I mostly worked on architecture and design visualization, and as a motion designer for the European Parliament. After some time, I decided to try a new challenge in Canada and moved to Montreal, where I live now. I managed to get a job at Framestore as an Assistant Technical Director, and I got the opportunity to work on some great projects here; among the ones I’m allowed to talk about are ‘Beauty and the Beast’, ‘Fantastic Beasts’ and ‘Doctor Strange’.
I discovered Houdini during the last year of my studies, during which I was trained on 3D Studio Max. I remember quite well opening it and, scared, closing it right away. At the time (around 2009-2010) Houdini was at version 9 or 10, if I remember correctly. There weren’t as many resources to learn it (today there’s an abundance of tutorials), and, even today, there aren’t many packages it can be compared to. Anyway, I couldn’t help but come back to it. Thank god I did, and discovered a package that suits me well. It is logical and more open in many ways than other tools out there. Houdini often gives you much more freedom (even the freedom to break things badly sometimes).
Most of my works that seem to contain a huge amount of geometry are based either on instances or on displacement shaders. Let me guide you through the general process of creating a similar image.
This specific piece was created with the help of instances, and it’s pretty straightforward:
Lay down an instance geometry node, dive inside, lay down a grid node, and either scatter some points on it, or up the resolution of the grid itself if you want something more regular. These will be the points on which you will be instancing your geometry. On another geometry node, create a simple box. In the instance node, turn on point instancing and pick the box as the instance object.
At this point, you should be presented with a box on every point of the grid/scatter inside the instance node.
The real magic happens when you start playing with the attributes the instance node uses to transform the instances, mostly: scale and orient.
The management of attributes is one of the core concepts of Houdini: attributes are easily inspected through the geometry spreadsheet and can be manipulated through various nodes and, even more interestingly, through VEX.
The instance node uses the scale and pscale attributes to determine the shape of the instance. pscale is a single float attribute that can be used to uniformly scale the instances, while scale is a vector3 attribute, for non-uniform scale variation. In this picture, I also used the orient attribute, which is a float4 attribute describing a quaternion (three floats encoding the rotation axis, and a fourth one encoding the rotation angle).
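The axis-plus-angle encoding behind the orient attribute can be sketched outside Houdini in plain Python (the function name here is made up for illustration; inside Houdini you would typically build orient in a VEX wrangle instead):

```python
import math

def axis_angle_to_orient(axis, angle):
    """Encode a rotation axis and an angle (radians) as a float4
    quaternion in (x, y, z, w) order, as the orient attribute expects."""
    ax, ay, az = axis
    # Normalize the axis so the result stays a unit quaternion.
    length = math.sqrt(ax * ax + ay * ay + az * az)
    ax, ay, az = ax / length, ay / length, az / length
    half = angle / 2.0
    s = math.sin(half)
    return (ax * s, ay * s, az * s, math.cos(half))

# A 90-degree rotation around the Y axis:
orient = axis_angle_to_orient((0.0, 1.0, 0.0), math.pi / 2.0)
```

Per instance, you would vary the axis and angle (randomly or from noise) and store the resulting four floats on each point.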
What makes this picture look like a landscape is the variation in height (the Y component of the scale attribute of the instances), which is driven by fractal noise. Manipulating noise in an interesting way often gives a much more interesting picture. There’s no magical answer to how you’re going to achieve the result. Layering fractal noises of different frequencies/amplitudes/types helps a great deal. Also, if you feed position as an input to your noise (there are other possibilities, like distances for circular patterns), think about transforming it in interesting ways, with some other noise.
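Both ideas — layering octaves and warping the input position with a second noise — can be sketched in a few lines of Python. This is a toy 1D value noise, not Houdini's own noise functions; all the names and constants here are illustrative:

```python
import math

def value_noise(x, seed=0):
    """Smoothly interpolated pseudo-random values on an integer lattice."""
    def lattice(i):
        # Cheap deterministic integer hash mapped to [-1, 1].
        n = (i * 374761393 + seed * 668265263) & 0xFFFFFFFF
        n = ((n ^ (n >> 13)) * 1274126177) & 0xFFFFFFFF
        return (n / 0xFFFFFFFF) * 2.0 - 1.0
    i = math.floor(x)
    t = x - i
    t = t * t * (3.0 - 2.0 * t)  # smoothstep easing between lattice points
    return lattice(i) * (1.0 - t) + lattice(i + 1) * t

def fractal_noise(x, octaves=4, lacunarity=2.0, gain=0.5):
    """Layer several octaves of noise: each octave doubles the
    frequency (lacunarity) and halves the amplitude (gain)."""
    total, amplitude, frequency = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += value_noise(x * frequency) * amplitude
        frequency *= lacunarity
        amplitude *= gain
    return total

def warped_height(x):
    """Warp the input position with a second, lower-frequency noise
    before sampling - this breaks up the regularity of the pattern."""
    return fractal_noise(x + 0.8 * fractal_noise(x * 0.35))
```

In the landscape setup above, a value like `warped_height` would drive the Y component of each instance's scale attribute.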
How do procedural objects work in 3D? I mean, we all know it gives you more variety and saves a lot of time, but how does it all work if you’re trying to build a real asset, something you could actually use in production?
The way I see it, when you try to build a procedural object, you’re actually trying to find a recipe. Once you’ve found the underlying concept that describes what you are trying to do, you identify the inputs that make sense for the user to provide. Sometimes it’s geometry and/or numbers. Then you pass those inputs through the various steps that turn them into the output the user expects.
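As a toy illustration of that recipe idea (nothing Houdini-specific; the function, parameters, and ranges are all invented for the example), here is a "recipe" that takes one user input — a shelf width — and turns it into an endlessly repeatable row of varied books:

```python
import random

def book_row(shelf_width, seed=0):
    """Toy procedural recipe: the user provides one input (shelf width
    in cm) and a seed; the recipe fills the shelf with varied books."""
    rng = random.Random(seed)  # seeded, so the same inputs always
                               # reproduce the same row of books
    books, used = [], 0.0
    while True:
        width = rng.uniform(2.0, 5.0)
        if used + width > shelf_width:
            break  # no room left for this book
        books.append({
            "width": width,
            "height": rng.uniform(18.0, 28.0),
            "lean": rng.uniform(-3.0, 3.0),  # degrees of tilt
        })
        used += width
    return books

shelf = book_row(80.0, seed=42)
```

Change the seed and you get a new shelf; change the width and the recipe adapts — the same input-to-output pattern scales up to buildings, rocks, or explosions.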
This is, of course, a very general way to put it. In practical terms, the first question you have to ask yourself is: “Is it possible to make it procedurally, and does it make sense to do it in a procedural way?”
In a real production scenario, assets often need too much artistic input for it to make any sense to build them procedurally. Proceduralism makes more sense when you have lots of variations of a similar asset to produce (buildings, plants, rocks, …). It is no surprise that Houdini ended up being used so much in VFX: the recipe for an explosion is always roughly the same, and if you create an asset whose inputs you can simply modify, you get an infinite number of explosions available. I predict a great future for Houdini Engine in video games, as it will allow game makers to produce more asset variations more quickly (a great example of Houdini use in video games is Planet Alpha 31).
It is really easy to create shaders in Houdini, and it allows you to go pretty far in customizing the look of your render. It also allows you to create render passes very easily, which helps greatly in the post-production process. It is very straightforward by default to get basic passes out of Mantra, such as Z, P, N, and even every component of every light separated into AOVs, but when creating your own shader, you can also output any attribute you want into an AOV. It really helps build a solid bridge to the rest of the post-production pipeline. Even though this is not unique to Houdini, it’s pretty well implemented and leads to pretty easy comping in Nuke, which I use to comp my renders.
You can build materials from scratch using VEX, either directly in code or by graphically connecting nodes in a shader. There are pre-made building blocks over which you can build, or you can really start from scratch if you have very specific shader needs. In terms of textures, you can create 2D textures in the COP context, once again using VEX, to unleash various kinds of procedural patterns and noises, but more often than not, it won’t be necessary, since most of those patterns and noises are available directly at the shader level (the SHOP context in Houdini), to be piped directly into the diffuse colour, specularity, roughness, emission, etc. of your shader!
And of course, you can use textures the usual way, with actual picture files, in shaders that are pre-built or homemade to accept this kind of input.
As with many things, Houdini was probably not designed for that kind of use, but it is still open enough to generate texture patterns in COPs. So you won’t have something as efficient as Substance Designer from Allegorithmic right out of the box, but a dedicated Houdini artist might someday come along with similar results. And as Allegorithmic and SideFX recently teamed up, you can now have the best of both!
How do you think you could use some of the procedural things that Houdini can do in game production?
I foresee a great future for Houdini Engine in games, either for developers to create many assets quickly or even, maybe, at some point, for players to customize stuff directly, who knows? You can already see pretty extreme examples of what can be achieved with PROTRACK by IndiePro for Unity (here’s a nice video about it).
Still, procedural asset generation has to be done in a smart way, otherwise you might get the kind of mistake you notice with a tiling texture. As mentioned earlier, you always have to think about whether or not it makes sense to make an asset procedurally… Do you need to scatter rocks all over a surface? Do you need books to fill an entire library? Do you want them all to feel somewhat different without having to make them one by one? Then procedural assets are probably the way to go!
On a much bigger scale, you can already find entire level generators done in Houdini. Depending on the kind of game, this may or may not make sense; it is for the game designers to decide whether it makes more sense to spend time finding a recipe to create some kind of asset, or whether it may take less time and give better results to create exactly the assets they need in a more direct way.
We can already use assets generated with Houdini in UE4 and other engines, but when do you think we’ll be able to take something like smoke or water simulations into the realm of interactivity?
If by “interactive” you mean smoke simulating in real time, I don’t think it’s the main focus for SideFX at the moment. Nvidia recently showed very impressive real-time simulations, and I think this kind of technology is indeed coming our way, but there is a trade-off in terms of the precision and realism of those simulations. Many simulation nodes are already capable of harnessing graphics card power through OpenCL, so maybe, at some point in the future, with the combined effort of graphics card manufacturers and software developers, it might become a reality. But for the time being, you still have to bake out your simulations; they can play in real time in various engines, but without true interactions.