@Tristan: I studied computer graphics for 5 years. I've been making 3D art full-time for about half a year now, but I had some experience before that. It's hard to focus on one thing; it took me half a year to understand most of the vegetation creation pipelines. To speed up your workflow, maybe spend a bit of time with the Megascans library. Making 3D vegetation covers everything from going outside for photoscans to profiling your assets. Start with one thing and master it. @Maxime: The difference between my technique and Z-passing on distant objects is quite small (apart from the higher vertex count). I would start using this at about 10-15m and beyond. Inside this radius you are using (mostly high-resolution) cascaded shadows; the lower the shader complexity in these areas, the fewer the shader instructions. When I started this project, the polycount was a bit too high. Now I've found the best balance between a "lowpoly" mesh and the least possible overdraw. The idea behind this technique is simply to use a slightly higher vertex count on the mesh to reduce quad overdraw and shader complexity. In terms of visual quality, a "high poly" plant will always look better than a blade of grass on a plane.
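The distance-based switch described in the comment above can be sketched roughly like this. This is a hypothetical illustration, not code from the project: the function name, the midpoint threshold, and the mesh labels are assumptions; only the 10-15m range and the overdraw-vs-vertex-count trade-off come from the comment.

```python
# Inside roughly 10-15 m (where high-resolution cascaded shadows apply),
# render the denser modeled grass mesh: its tighter silhouette wastes less
# alpha-tested area, so there is less quad overdraw and fewer shader
# instructions per covered pixel. Beyond that range, fall back to a flat
# card, whose extra overdraw is cheap because it covers few pixels.

SWITCH_DISTANCE_M = 12.5  # illustrative midpoint of the 10-15 m range


def pick_grass_lod(distance_m: float) -> str:
    """Return which grass representation to render at a given camera distance."""
    if distance_m < SWITCH_DISTANCE_M:
        return "modeled_grass_mesh"  # higher vertex count, lower overdraw
    return "flat_card"  # few vertices, overdraw acceptable at a distance


print(pick_grass_lod(5.0))   # modeled_grass_mesh
print(pick_grass_lod(30.0))  # flat_card
```

Real engines would express this as LOD levels or a screen-size metric rather than a raw distance, but the trade-off being balanced is the same.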
Is this not like Gear VR or anything else?
Miki Bencz gave a short talk on his latest 3D character study, recreated from Julia Yang's black-and-white sketch.
The model is based on an illustration by Julia Yang.
I worked in 3ds Max and used the basic tools. I kept moving vertices around until it looked right, starting with a simple plane placed on the bridge of the nose – basic "box modeling" stuff! Drawing a lot of heads and practicing anatomy and digital painting can help tremendously at this point, since your eye gets trained to recognize what looks "right". Once I start painting the textures, I always have to jump back and fix the geometry and the model everywhere, since all the errors come to the surface and it's easier to spot the differences between the model and the illustration. Model, textures, UVs, topology – everything is constantly changing (a lot) up until the moment the piece is published. You have to be efficient with the technical side of things in order to fix these easily, so I'd suggest getting as comfy as you can with a non-linear workflow. Each time I model a head, I try to get it more accurate than the one I made before. It's an endless iteration of knowledge, stylization, and skill from project to project!
Below you can see the route I took when modeling the head. Keep in mind the last one is not the final wireframe; this gif only shows the direction of progression!
I usually approach painting textures by going from big forms to smaller ones, finding big patches of values in the original illustration which I can translate to my model. It also helps me a lot to notice the shape and gesture of the value patches, since those are very important in capturing the essence of a painting style! I stay zoomed out in the beginning and move closer and closer as time passes.
Also, towards the second half of texturing, it's important to switch the viewing distance and figure out which big and small things you've missed. Coming back from the kitchen and seeing my monitor from far away helps me notice the details that the illustration has and my remake doesn't. Switching between the model and the illustration frequently is really important, too.
One key challenge was that I used the wrong brush for the majority of the texture painting, so the texture was blurry for a long time. It would have been more effective to start with a messy brush that has roughly the same feel as the final illustration. Well, I learn something new with each piece, and the project I'm working on now has this problem sorted!
Monochrome 3D Image
I keep my 3D work pretty simple in terms of 3D techniques and maps, so I only use diffuse and alpha masks on all of my works. I wanted to make a black and white image for a while since I felt that it could be pushed to the illustration look more easily. I thought that the brushstroke effects were a cool fit with the monochrome look and figured that it would work nicely in 3D if done correctly!
I wanted to keep the 3D version as close to the illustration as possible, so there is no dynamic light used. Everything is painted into the texture by hand. I think this is my weakest point at the moment. Figuring out a believable 3D lighting scenario for an illustration can be challenging both technically and artistically, and it can look pretty horrible very easily. I always experiment with "adding" extra light sources behind the model where the illustration doesn't describe exactly how the backside should look, because I think it's more interesting to have multiple sources. Even though this is something I have not yet figured out a working formula for, the ideas and the "that worked once, let's try it again" mood are already enough for a start. I can only recommend experimenting and sticking to the references – in the process, you can decide whether the lighting works or not.
Would This Style Fit a Game?
A game with this style is most definitely possible. With some technical support, it can be done! The way I work takes too much time per asset, so tech would be needed to figure out a procedural way to achieve the effect and reduce production time drastically. Every technique used, the resolution of the geometry, and the size of the textures would work easily in any engine. Today's hardware shouldn't have trouble with a 2K texture and a 5K-triangle head!
Thanks for reading my interview!