
Exploration of VR-Based Workflows

Zrinko Kozlica talked about his exploration of VR workflows through the test model Citroen 2CV: utilizing Gravity Sketch and Microsoft Maquette, cleaning up meshes, combining traditional and VR tools, and more.

Introduction

Hey, my name’s Zrinko Kozlica and I’m a self-taught Senior 3D/VR Artist currently working and living in Amsterdam.

I started out in the industry in 2015 as a Junior Character Artist at Guerrilla Games, working on Horizon Zero Dawn, which was an amazing opportunity, as challenging as it was rewarding. It was definitely a very special experience to work on such a big title, and I learned a lot of invaluable lessons there, both professionally and personally.

After Guerrilla, I moved on to embark on the still very early days of the Virtual Reality journey at Force Field VR in Amsterdam. It's an equally amazing opportunity to be part of the founding years of a very exciting technology that, whilst still in its early days, is in my opinion definitely going to be much bigger in the future than it is now. From a creator's point of view, it's quite different from traditional game development and also leaves room for redefining the disciplines involved in the process. As an artist working in VR, you really have to look at game development in a more holistic way, and working in smaller teams helps you stay agile and contribute in a more generalist manner. I get to do lots of different things, ranging from Characters, Props, and Environments to Lighting, Prototyping, and even a little bit of Game and Level Design.

Citroen 2CV


The Start

The project started to take shape shortly before New Year. I was just starting to investigate how to use VR at home for content creation, since I've been making content for it for some years now but had never really used it to create something myself. At that time, I also noticed that there are a lot of really nice old cars around my area, and I got inspired by a particular one, which is also on my reference board, to use it as modeling practice. Oldtimers really have something fascinating in their design, perhaps the lack of restrictions and the room for experimentation, which makes some of them look more like pieces of art than practical things.

My idea was to use the car to learn Gravity Sketch, but I didn't really plan to do anything else with it or even make it a whole project. However, as soon as I started, I really wanted to push it further and learn other VR tools along the way. I also saw all of it as a good opportunity to learn more about new rendering software, in this case, Arnold for Maya.

So, bit by bit, I turned a simple modeling practice into a whole project, with the goal of trying out a new workflow that frees me a bit from the constraints of traditional 3D.


Crafting Citroen with Gravity Sketch

The idea to use Gravity Sketch came after seeing some of the very inspiring and amazing work of Jama Jurabaev. He's really a pioneer in VR-based workflows, along with others like Gio Nakpil, Goro Fujita, and Martin Nebelong. Luckily, he shares a lot of his knowledge on Gumroad, Learn Squared, and YouTube, so that's where I started. I definitely highly recommend the tutorials and artwork of all of them if you are interested in VR workflows or simply want to get inspired.

His breakdown of how he uses Gravity Sketch really awed me with how intuitive it seemed for him to create rather difficult shapes. Gravity Sketch also seems to have been made with industrial design as the main focus, so modeling a car with it felt like a no-brainer.

After the first few hours of trying it out, I realized that, as Jama mentions in his videos, it's the future. Standing on your feet, immersed in a clean void with just your art in front of you, actually using your hands in a natural way, makes you realize how restrictive traditional mouse-and-keyboard work feels in comparison. It's a long way from replacing traditional tools, the same way digital art hasn't replaced traditional art, but it's a great enhancement. Especially the spatial awareness and the freedom of movement to quickly judge your model from different angles are something you can't get from a flat screen. For example, I would work on the model at roughly 1:10 scale right in front of my face, and then quickly scale it up to a size and distance where it felt like a car I was standing right next to. This enables you to judge the model not only with your artistic eye but also to compare it with all the knowledge you have about objects in real life.


Gravity Sketch has a toolset very focused on hard-surface modeling, and the way the touch controls are integrated makes it a lot of fun to use too. Once you learn the toolset, after a couple of hours the only limits will be imagination and practice time. What I find really cool about it is that you can go super precise with the shapes and make the result look close to a CAD model, but you can also keep it fairly loose and illustrative.

For the Citroen, I felt like making it something in between would be the most fun. It’s like with sketching, where the shapes and values can be defined quite fast, and everything else is basically polishing. The loose way also seemed to capture the character of the car better than super neatly lined up surfaces.  

Utilizing Microsoft Maquette

Microsoft Maquette appears to be still relatively unknown, even in a relatively small community like that of VR content creation. It's actually a really powerful, amazingly intuitive, and quick tool for rapid prototyping of VR environments.

The idea of it is to give you instant feedback on whether something works in VR or not. The traditional workflow would be to concept and model something like you would for a traditional game. Then you would iterate in, let's say, Maya, without really feeling the scale of anything, and export it to Unreal. Wait for the import to happen, then place the prop where needed. Then you press play and notice that things need to change a little bit or don't work at all. Rinse and repeat.

This process, especially when prototyping, can be very time-consuming, and Maquette makes this part of the workflow much more productive. For example, quickly testing out what a room requires to have a lot of parallax and a nice sense of scale and depth: with Maquette, you'd just scale up some cubes to room size and start placing primitives like cubes, cylinders, or spheres where you think something should be, and you can instantly tell whether it works or not.

The toolset also allows for importing custom meshes, which makes it perfect for quickly kitbashing environments or props. You can even have a spectator on your desktop, who's represented by a little flying drone in VR. They can point a laser at things they want to show you, so you can interact with, for example, the Level Designer you are working with.

For my project, I really enjoyed the possibility of assembling a world around the car, which was imported from Gravity Sketch, and trying out possible camera positions and changing the composition. It's really nice to be able to select multiple objects with a swipe of your hand, move and arrange them like playing with toys on a desk, and then teleport to a camera position that scales it all up to real-life size.

The important thing is to realize what Maquette is and isn’t. It’s definitely not a modeling tool like Gravity Sketch, nor does it enable you to test out gameplay features. But when used correctly it can speed up those processes immensely.

Cleaning Up the Meshes

Integration into a production pipeline is definitely still a big point of improvement for all the VR tools and surely the next big step. For this project, I spent almost more time on finding the best way to utilize the meshes for further work than on creating them.

Some of the problems I encountered were a lack of proper grouping, one-sided surfaces sitting exactly on top of each other, missing or broken UVs, and the sheer number of objects (Gravity Sketch makes each stroke a separate group in the FBX, which is useful but can be really heavy on Maya).

Therefore, cleaning up the mesh was a lot of manual labor: deleting unwanted surfaces, grouping meshes, and reducing polycounts, all done in Maya. Adding more details is not really different from how you would normally work on something in 3D. Besides some cleanup, the car was only edited in its width in Maya, alongside some smaller adjustments to close gaps.
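To give an idea of what part of that cleanup could look like if scripted, here is a minimal maya.cmds sketch (Python) for batch-combining the many per-stroke groups of a Gravity Sketch FBX and reducing the polycount. The group name "citroen_strokes" and the reduction percentage are placeholders and not from the project; a real cleanup still needs manual passes for the hidden and duplicated surfaces.

# Rough Maya cleanup sketch for a Gravity Sketch FBX import (Python, maya.cmds).
# The group name and values are assumptions for illustration only.
import maya.cmds as cmds

SOURCE_GROUP = "citroen_strokes"   # hypothetical name of the imported FBX top group
REDUCE_PERCENT = 50                # how aggressively to reduce the combined mesh

# Collect every mesh transform under the imported group.
shapes = cmds.listRelatives(SOURCE_GROUP, allDescendents=True, type="mesh", fullPath=True) or []
transforms = sorted(set(cmds.listRelatives(shapes, parent=True, fullPath=True) or []))

if len(transforms) > 1:
    # Combine the hundreds of per-stroke objects into a single mesh.
    combined = cmds.polyUnite(transforms, constructionHistory=False, name="car_combined")[0]
else:
    combined = transforms[0]

# Merge coincident vertices left by adjacent strokes, reduce the polycount,
# and clear construction history to keep the scene light.
cmds.polyMergeVertex(combined, distance=0.001)
cmds.polyReduce(combined, ver=1, percentage=REDUCE_PERCENT)
cmds.delete(combined, constructionHistory=True)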

The environment turned out to be surprisingly problematic to clean up and prepare for rendering, since I used a lot of separate meshes, which made the process in Maya a bit slow.

The next problem I encountered was that, contrary to my expectations, even automatic UVs would keep crashing Maya or would not end up in a useful layout. This really influenced and limited my choices for the rendering stage.

Regarding this stage, I can say that how easy or complicated this part of the workflow is depends highly on the type of project. In my case, I was OK with the final result being a static, non-interactive piece that would be procedurally textured and pre-rendered.

If you want to use a mesh in Unreal or some other real-time software, I highly recommend thinking about the best way to do that beforehand; otherwise, it might turn out to be a very frustrating experience to realize that you can't use your nice work the way you planned to.

A step in the right direction seems to be the integration of the glTF and GLB formats as export options in some of the VR tools. This is a format that, currently in Maquette for example, enables you to export the scene directly into Unity whilst retaining all the lights, materials, groups, and mesh names.
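As a small illustration of why retained names matter, here is a hedged Python sketch that loads an exported GLB and lists the mesh names it kept, which is an easy way to check that grouping and naming survived the round trip. It uses the third-party trimesh library and a placeholder file path; neither is prescribed by the workflow described here.

# Quick sanity check on a GLB export: list which named meshes survived.
# Uses the third-party "trimesh" library (pip install trimesh) -- an assumption
# for illustration; the file name is a placeholder for whatever the VR tool exported.
import trimesh

scene = trimesh.load("maquette_export.glb")

# A GLB usually loads as a Scene; its geometry dict maps node/mesh names
# to triangle meshes, so names set in Maquette should show up here.
if isinstance(scene, trimesh.Scene):
    for name, mesh in scene.geometry.items():
        print(f"{name}: {len(mesh.faces)} faces, {len(mesh.vertices)} vertices")
else:
    # A single-mesh file loads directly as a Trimesh.
    print(f"single mesh: {len(scene.faces)} faces")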

Somehow, all of these uncertainties and restrictions make finding solutions to them part of the fun.

There are some great and inspiring examples out there of how to utilize the VR tools well, like Goro Fujita's Oculus Quill to Octane Render workflow.

Texturing & Shading

The shading and texturing part was quite trial-and-error-heavy in the beginning. I was trying different approaches, but due to some shortcomings of the workflow, I decided to go with a non-UV-based rendering solution. After some research, I found out that Arnold has a really powerful Toon material that is quite easy to learn and use.

Luckily, most of it works procedurally, and in combination with some triplanar mapping, it was a lot of fun to explore different style directions. The lighting is a simple physical sky with an HDR image hooked up. The biggest challenge here was to find the right balance between surface shading that still reads as three-dimensional and the contours and little details that you'd expect from a cel-shaded look. All the little dots and scratches are actually a triplanar-mapped bump map that gets fed into the shader; based on a threshold setting, the contours pick up more or less of that detail. This is a really fast and flexible way to add more detail to a rather clean base pass.
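For readers who want to try a similar setup, here is a rough maya.cmds sketch of such a shading network. The node types (aiToon, aiImage, aiTriplanar, aiBump2d) are standard MtoA nodes, but the specific attribute names, values, texture path, and mesh name are assumptions for illustration, not the exact settings used for these renders.

# Sketch of an Arnold toon setup in Maya with a triplanar-mapped bump.
# Attribute names and values are assumptions for illustration only.
import maya.cmds as cmds

toon = cmds.shadingNode("aiToon", asShader=True, name="car_toon")
cmds.setAttr(toon + ".baseColor", 0.85, 0.45, 0.2, type="double3")   # body color
cmds.setAttr(toon + ".edgeColor", 0.05, 0.05, 0.05, type="double3")  # contour lines
cmds.setAttr(toon + ".angleThreshold", 60)  # how easily contours pick up surface detail

# Scratches/dots texture projected triplanarly so no UVs are needed.
scratches = cmds.shadingNode("aiImage", asTexture=True, name="scratch_tex")
cmds.setAttr(scratches + ".filename", "scratches.tif", type="string")  # placeholder path

tri = cmds.shadingNode("aiTriplanar", asTexture=True, name="scratch_triplanar")
cmds.connectAttr(scratches + ".outColor", tri + ".input", force=True)

# Feed the triplanar result into a bump node and plug that into the toon normal,
# so the contour detection picks up the small dots and scratches.
bump = cmds.shadingNode("aiBump2d", asUtility=True, name="scratch_bump")
cmds.connectAttr(tri + ".outColorR", bump + ".bumpMap", force=True)
cmds.setAttr(bump + ".bumpHeight", 0.2)
cmds.connectAttr(bump + ".outValue", toon + ".normal", force=True)

# Assign the shader to the car mesh (placeholder object name).
sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True, name="car_toonSG")
cmds.connectAttr(toon + ".outColor", sg + ".surfaceShader", force=True)
cmds.sets("car_combined", edit=True, forceElement=sg)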

The color palette of the environment started off rather traditional and muted, which felt quite boring, so I went for a quite vibrant range instead. This, like many other stages of the project, was also a way for me to break free a bit from my past artwork, thematically and stylistically.

The official Arnold documentation includes most of the necessary knowledge that I relied on, and there are a couple of really handy tutorials on YouTube too. There's endless potential to tweak and optimize the shader to one's specific needs, but the basic techniques can be learned in a day.

Material Settings

Combining Traditional & VR Tools

Currently, there's such a huge pool of amazing software solutions available that sometimes it's not easy to settle on a few of them and stick to that decision, at least for one project. Again, Jama Jurabaev is a great inspiration here, since he really advocates that as long as you get fast and good results, it shouldn't really matter what you use.

In my case, it meant that after some research I found out that Blender has a great built-in tree generator called Sapling Tree Gen. It comes with a lot of customizable settings and is really easy to use; besides, Blender itself has also been shaping up into an amazing tool recently. I think it took me an hour to figure out how to use the tree add-on, which was really great since I wanted to create a scene and not focus on how to make the best game-ready tree.
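As a small, hedged example of how little it takes, this Blender Python sketch enables the bundled Sapling Tree Gen add-on, adds a tree, and exports it for use elsewhere. The operator parameters shown (bevel, showLeaves, seed, levels) and the export path are assumptions for illustration; the defaults already give a usable tree.

# Minimal Blender sketch: enable Sapling Tree Gen and add a tree.
# Run from Blender's scripting workspace or "blender --background --python make_tree.py".
import bpy

# Sapling ships with Blender but is disabled by default.
bpy.ops.preferences.addon_enable(module="add_curve_sapling")

# Add a tree as a bevelled curve with leaves; tweak seed/levels for variation.
bpy.ops.curve.tree_add(bevel=True, showLeaves=True, seed=3, levels=2)

# Export the result so it can be assembled with the rest of the scene in Maya.
bpy.ops.export_scene.fbx(filepath="//tree.fbx", use_selection=True)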

For assembling the scene and rendering it, I stuck to Maya, since that is the tool I use daily, and learning Arnold, VR tools, and Blender was already enough new stuff alongside actually creating the artwork.

I think the main takeaway I have from this is that the more ways you find to get the results you want, the less you'll focus on the tool itself. In the end, the goal is to create art first and foremost, and especially for personal pieces, there are no limits on how to combine traditional and VR tools.

I've seen some great examples of Oculus Medium sculpts being decimated and then rigged and animated with Mixamo mocap data, or Quill animations rendered out in Octane Render (Goro Fujita), which makes the artwork look super tangible. To me, the combination and implementation of VR tools into existing workflows is a really exciting, unexplored area, which is almost as much fun as creating the work itself.

Real-Time Rendering

The initial idea of the project was actually also to create a small animation out of it. With some more work invested in cleaning up the meshes that come out of Gravity Sketch and Maquette, there’s definitely no reason why real-time rendering would not work.

The main reason I stuck to Arnold was the possibility of procedurally texturing everything with a Toon shader, thereby avoiding the UV problems I described earlier. Actually, one of the first test renders I did outside of Gravity Sketch was with Marmoset Toolbag, so the GPU could have handled the scene fairly well. For some future projects, I want to learn how to use Blender's Eevee, which is an amazing new real-time rendering solution.

The Potential of VR Tools

Currently, I see the various VR tools as a perfect addition to existing packages. For VR content creation, I think using VR tools for prototyping will become ubiquitous; it only helps to emphasize the unique features of the technology. For traditional games and movies, they're also a great addition, especially for creating concept art or quick mockups of assets and environments.

With traditional 3D workflows and packages, you also carry all the past experiences, good and bad, with you each time you start them up. Everything reminds you of optimization, lots of clicking, sitting way too much, complicated mechanics, all the possibilities. Compared to that, the experience in VR tools feels very focused and clean, which makes it easy to just jump in and create something without the urge to make it perfect. And besides that, it's just fun to move around a bit while working, which actually makes it feel closer to traditional painting or sculpting.

Compared to a traditional workflow, working in VR has, in my opinion, the big advantage of enabling people to tap into their creative potential in a very relaxed and curious way. The tools so far, no matter how sophisticated, are being built from scratch with VR in mind. This means that the initial feeling of being overwhelmed, at least for me, is much lower, which makes it easier to just jump in and enjoy creating art. It's also great to see people who don't know much about 3D tools picking up something like Medium and getting lost in sculpting. That shows how much potential there is in all of this.

Personally, I am really looking forward to the coming developments and can’t wait to see what the future brings.

Thanks for reading!

Zrinko Kozlica, Senior 3D/VR Artist

Interview conducted by Kirill Tokarev
