$16 for a *very* non-performant material? If this is intended for high-detail scenes rather than gameplay, one would generally just use a flipbook animation or a looping HD video texture (both of which are higher quality and freely available all over). I love options, but c'mon, that's pretty steep. $5, maybe. And you can loop in materials using custom HLSL nodes. Also, there are better ways of doing this all around. Somewhere on the forums, Ryan Brucks (of Epic fame) himself touched on this. I've personally been working on a cool water material (not "material blueprint", thankyouverymuch) and utility functions, and I'm close to the quality achieved here, sitting at ~180 instructions with everything "turned on". The kicker? It's pure procedural. No textures needed. So this is cool, no doubt about that. In my humble opinion, though, it's not "good". It doesn't run fast, and it's more complicated than it needs to be.
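For what it's worth, the looping trick the custom-HLSL route relies on is just a cross-fade between two phase-shifted reads of the animated value. Here's a minimal Python sketch of that math (the `loop_blend` and `anim` names are mine; in a real material this would live inside an HLSL custom node, with `sample` being your noise or panner):

```python
import math

def loop_blend(sample, t, period):
    """Cross-fade two phase-shifted reads of a non-looping animated
    value so the result repeats seamlessly every `period` seconds.
    `sample(time)` is any scalar animation (noise, panner, etc.)."""
    phase = (t % period) / period           # 0..1 progress through the loop
    a = sample(t % period)                  # current cycle
    b = sample((t % period) - period)       # previous cycle, shifted back
    return a * (1.0 - phase) + b * phase    # linear cross-fade between them

# Stand-in "animation": a slow sine remapped to 0..1
anim = lambda t: math.sin(1.3 * t) * 0.5 + 0.5

period = 4.0
start = loop_blend(anim, 0.0, period)            # value at the loop start
end = loop_blend(anim, period - 1e-9, period)    # value just before it wraps
```

At `t = 0` the blend weight is fully on the current cycle, and just before `t = period` it is fully on the shifted copy, which lands on the same value — so the seam disappears. The cost is two samples instead of one, which is still far cheaper than a flipbook at this price point.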
Lee is right - you can use a gradient effect when you vertex paint in your chosen 3D modelling package (I've done it in Max), so the wind effect shifts from nothing to maximum along the length of the leaf/branch/whatever.
I'm fairly certain you can vertex paint the bottoms of the foliage and control the movement using vertex colors along with the wind node. I did this in an earlier project and was able to create a scene with grass that moved less and less toward the base until it was stationary. I created the grass in Maya and painted the vertices black to red (bottom to top).
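The gradient idea above boils down to: normalize vertex height into a 0..1 weight (the red channel of the painted vertex color), then scale the wind displacement by that weight. A rough Python sketch of the math, with hypothetical `wind_weights` and `displace` helpers standing in for what the material's vertex offset actually does:

```python
import math

def wind_weights(heights):
    """Map per-vertex heights to 0..1 wind weights, mimicking a
    black-to-red vertex-color gradient painted bottom to top.
    The weight (red channel) scales wind displacement, so the base
    stays planted while the tips sway fully."""
    lo, hi = min(heights), max(heights)
    span = (hi - lo) or 1.0                 # guard against flat meshes
    return [(h - lo) / span for h in heights]

def displace(x, weight, wind_strength, phase):
    """Offset a vertex along the wind direction, scaled by its weight.
    A weight of 0.0 (painted black) means the vertex never moves."""
    return x + wind_strength * weight * math.sin(phase)

# Four vertices from ground (0.0) to grass tip (2.0):
weights = wind_weights([0.0, 0.5, 1.0, 2.0])
root = displace(1.0, weights[0], 5.0, 1.2)   # bottom vertex: stationary
tip = displace(1.0, weights[-1], 5.0, 1.2)   # top vertex: full sway
```

In-engine, the wind node supplies the time-varying phase and direction; the vertex color just multiplies its output, which is exactly why the bottom-painted-black grass stays still.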
We had a chat with David Tracy, Communications Director of Chaos Group – a company that provides innovative rendering solutions for the media, entertainment, and design industries. He talked to us about the history of Chaos Group, what his company was showcasing at SIGGRAPH, and he gave some words of advice on rendering.
Chaos Group has been around for 18 years. We started out with our first plugin for 3ds Max, and where we really grew in the beginning was in the architectural and visualization industries. It produced very dependable and beautiful output using very natural lighting, which was perfect and fit nicely into an architect’s workflow. It also helped with interior shots.
It grew from there, and we started to adapt and grow into other programs. It was around 2007 that we got into researching how to render using just the graphics card. We thought it was a viable technology and something worth working on, and that became V-Ray RT. It’s implemented as part of many of our V-Ray products, so when customers purchase V-Ray, they actually have access to different versions of it depending on how they want to render and what they want to render: some scenes are going to be great for a progressive GPU renderer, and for others you’re going to want to use the CPU on your hardware.
So we started integrating that into the V-Ray product line, and then in 2010 we came out with V-Ray for Maya. That’s when we started to grow in leaps and bounds into VFX and broadcast (film, television, things of that sort). A lot of our games business comes from 3ds Max. Max has been such a staple within the games industry for as long as I can remember, and V-Ray has always plugged in nicely and natively into 3ds Max.
In Max 2016, the newest version, we actually worked on the physical camera in the Max program itself; that was something Chaos Group was involved with.
Due to our base and because we span so many industries, it’s very easy for studios to bring us into the pipeline and bring in artists with different backgrounds. It really doesn’t matter what program they choose to model and animate in; they know they’re going to get that consistent output with V-Ray, and everyone’s familiar with V-Ray already going in. It’s been a very natural fit, from the biggest studios to the smaller ones.
You have to break it down by industry. We’re still heavily embedded in architecture, but we also still have VFX, broadcast, and games. It also extends into jewelry design, furniture design, etc.
In terms of the user base, you have a pretty wide range of experience levels of rendering too. Your SketchUp and Rhino person is not going to need to get into the nitty-gritty of rendering as much as someone who’s working in VFX on a movie like The Avengers. It’s a wide range of experiences and backgrounds and it’s really important to us that we fit into the industries well. We want that familiarity no matter what industry you’re coming from.
Within games, the biggest thing we’re known for is cinematics and advertising. For example, we’ve done the cutscenes for the Batman: Arkham series and The Elder Scrolls Online. We also did all of the advertisements for Assassin’s Creed Unity. There were also a few studios that worked on various advertising campaigns to promote it, and they used V-Ray on those projects.
Also, in the two most recent versions of V-Ray for Max and V-Ray for Maya, we actually made a lot of improvements to texture baking.
We’re showcasing V-Ray for NUKE and virtual reality (VR). There are different levels of VR and different uses for VR. It’s something we’re invested in from a research standpoint and an adoption standpoint on multiple levels. So you’ve got the level where you have an animation and you want an easy path from V-Ray to texture bake it and then bring it into an immersive experience with animation and a moving camera. That’s something we support.
Then you get to the high-detail interior shots. This is where you can control the perspective by placing the camera in a specific spot in the scene. You could set it at the height of someone seated or someone standing, fire off a render, and from there it’s literally a drag-and-drop of PNG or JPG files into the right folder on your Samsung Galaxy 6 phone. After that, you put it into your Gear VR and boom, you’ve got this amazing VR image where you can completely turn around, look up, and look down. You’re not going to see seams; you’re going to see a really nicely done environment.
We have two different camera types for VR: a cubic map and a spherical map. Both have their advantages and disadvantages. The cubic map is higher quality, and you don’t get pinching of the image at the poles. That’s an example of a high-fidelity V-Ray render going into VR.
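For context on that pole pinching: a spherical (equirectangular) map collapses an entire row of texels to a single direction at each pole, so texels there are heavily stretched. A small Python sketch of the mapping (the function name is illustrative, not V-Ray’s API):

```python
import math

def dir_to_latlong_uv(x, y, z):
    """Project a unit direction onto an equirectangular (spherical) map.
    u wraps around the horizon; v runs from the top pole (0) to the
    bottom pole (1). Near a pole, every u value maps to nearly the same
    direction -- the 'pinching' that a cubic map avoids."""
    u = 0.5 + math.atan2(x, z) / (2.0 * math.pi)
    v = 0.5 - math.asin(max(-1.0, min(1.0, y))) / math.pi
    return u, v

# Straight up hits v = 0 regardless of u: the whole top row of the
# texture squeezes into one direction.
uv_pole = dir_to_latlong_uv(0.0, 1.0, 0.0)
# Looking out at the horizon lands in the well-behaved middle of the map.
uv_horizon = dir_to_latlong_uv(0.0, 0.0, 1.0)
```

A cubic map instead projects onto six faces, so no single texel row has to cover a full 360° of directions, which is why it holds up better at the poles.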
We’re partnered with NURULIZE. There is no game engine involved with what they’re doing, but in their render you can still crouch, tilt your head, and walk around in a 10×10 foot space. They have a sample scene of a car showroom where you can actually sit behind the wheel of the car and look underneath the car. It’s about as deep as it gets for VR.
The Uniqueness of All the Chaos
What separates us is how our integration works. For us, it’s a matter of pride that it’s not just an import/export option for supporting a 3D application. We want to fit seamlessly within the interface and follow the same workflow and logic of the host application, so if you’re a NUKE user you’re going to be looking at the exact same interface you look at every day - only now V-Ray is going to be node-based. Likewise, V-Ray for Modo fits right where you’d expect Modo’s native renderer to be.
That’s something we spend a lot of time on, to make sure it’s easy for people to adopt but at the same time consistent across all platforms. A V-Ray for Maya user can work perfectly well with a V-Ray for Modo user, and you’re not just getting a generic rendering output with limited choices. You still get all of the functionality – the biased and unbiased capabilities of our renderer.
We’re not just a CPU- or GPU-based renderer, and because of the wide range of industries we support, we don’t want to limit people’s choices and workflows. We’re constantly working to streamline both CPU- and GPU-based rendering. We have really great partnerships with Intel and NVIDIA to further that and push their development as well. We’re very invested in the research aspect.
We want to make sure we’re solving people’s problems instead of having the buzzword of the moment. A lot of what drives us is what artists and designers have asked us for.
So much of what we did in the past was through coverage, word of mouth, case studies, and projects. For example, if a movie used us, they’ll name-drop us. We’ve done advertising in the past as well, and we have a pretty active social media presence.
We also have a very vocal and amazing user base that is just so phenomenal at showcasing their work and being supportive as well.
At the office we’re always sharing with each other some new works that the community has provided us with, and we’re constantly amazed at what they can do.
Words of Advice
Something that helped me with rendering was being more aware of my surroundings, especially when it comes to light and how light interacts with things. For example, you’ve got your glass of whiskey and you see this really beautiful effect on the tablecloth, or maybe you see micro scratches on your MacBook – you then think about how you would texture and shade that. Having that eye for your real environment and then bringing it into a digital world is going to make things jump out at you when you’re working on a rendering.
I think it helps to pay attention to what is constantly going on around you with light, texture, and materials. I think it’s fun to take it upon yourself to see what is going on around you and replicate it. It’s very easy to draw something from your imagination. However, I think the best thing is drawing from real life; once you do that, you can break the rules and create a whole different level of art you wouldn’t have been able to make before, and I believe it’s the same for rendering in some ways.