We met with Paul Doyle, CEO and founder of Fabric Software, who spoke with us about Fabric Engine 2, which was announced and launched at SIGGRAPH, about his past work at Autodesk, and about VR.
I’m Paul Doyle, CEO of Fabric Software, and we have a product called Fabric Engine. I founded the company five years ago with ex-Softimage and ex-Autodesk engineers and product managers. Our starting point was looking at how production was changing: the traditional approach of single DCC applications like Maya, 3ds Max, and so on no longer reflected how companies were working. Companies had multiple tools in the pipeline, and one of the most painful parts of that for them was moving data and tools between those applications; it’s expensive to re-author a tool for every single environment you’re going to be authoring in. So that’s what we set out to solve: how to build something that’s portable between applications, has all of the performance of well-written code, but is also accessible to your more standard technical artists.
We’re announcing and launching Fabric Engine 2. Over the past five years our focus has been on performance and on working with high-end R&D teams and larger studios. Now we wanted to hit the accessible part of what I said before, and that means introducing a visual programming system we call Canvas on top of the main engine of Fabric. Even if an artist can’t write any code, most people are comfortable with node-based workflows, so it allows for more interesting experimentation. That’s been the main push for us.
Currently, what we do each year at SIGGRAPH is one main user group session, where we spend eight hours presenting different elements of Fabric Engine and customers come in and tell us how they used it. On the show floor it’s difficult to find a good time to talk about a framework like Fabric Engine, because people get only a very cursory understanding; when they come to the user group they get to dig in a bit more.
We’re more on the content creation end of things. It’s more about loading tools into different applications, particularly in larger studios where they might have 3ds Max in one studio, Maya in another, and Houdini in different places. You want to be able to write a tool once and have it run everywhere.
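The "write once, run everywhere" idea Doyle describes can be sketched as a thin adapter layer: the tool's logic is written once against a small abstract interface, and a per-DCC adapter maps that interface onto each host's API. This is a minimal illustrative sketch, not Fabric's actual architecture or API; all class and method names here are hypothetical.

```python
# Hypothetical sketch of a portable DCC tool. The tool logic depends
# only on HostAdapter; each DCC (Maya, 3ds Max, Houdini) would get its
# own adapter. FakeMayaAdapter stands in for one that would call
# maya.cmds in a real pipeline.
from abc import ABC, abstractmethod


class HostAdapter(ABC):
    """Minimal surface a tool needs from any DCC host."""

    @abstractmethod
    def selected_meshes(self) -> list[str]: ...

    @abstractmethod
    def vertex_count(self, mesh: str) -> int: ...


class FakeMayaAdapter(HostAdapter):
    """Stand-in adapter backed by a plain dict instead of maya.cmds."""

    def __init__(self, scene: dict[str, int]):
        self.scene = scene

    def selected_meshes(self) -> list[str]:
        return list(self.scene)

    def vertex_count(self, mesh: str) -> int:
        return self.scene[mesh]


def report_heavy_meshes(host: HostAdapter, limit: int) -> list[str]:
    """The portable tool: written once, runs against any adapter."""
    return [m for m in host.selected_meshes() if host.vertex_count(m) > limit]


host = FakeMayaAdapter({"hero": 120_000, "prop": 800})
print(report_heavy_meshes(host, 10_000))  # ['hero']
```

Porting the tool to another DCC then means writing one new adapter, not re-authoring the tool itself, which is the cost Doyle is pointing at.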
We focus on the VFX side, but we have a strong games background in the team. With Fabric Engine 2, we see games companies jump in and start building useful tools within the content creation pipeline. I think it’s because it’s free for individuals, and for studios we offer an entry-level model of 10 licenses for free, with no commercial limitations at all. In terms of building tools and sharing them with other people, there aren’t any limitations there.
The problem you have with a game engine is that you inherently destroy the data when you optimize it for the best performance at runtime, but what you want to do when you’re authoring is make changes and edits to that data. So ideally you want to pull in native FBX, Alembic, or a custom format, make changes to those, and push them back out.
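The round-trip Doyle describes can be illustrated with a toy example: the runtime bake is destructive (structure and names are flattened away), so edits have to happen on the editable FBX/Alembic-like source, which is then re-baked. Everything here is a hypothetical sketch, not any real engine or interchange API.

```python
# Illustrative sketch of destructive baking vs. editable round-trip.
# "source" plays the role of an editable FBX/Alembic-like asset;
# "bake_for_runtime" plays the role of the engine's destructive
# optimization pass.

def bake_for_runtime(asset: dict[str, list[float]]) -> list[float]:
    """Destructive: flattens the asset into one packed buffer,
    discarding object names and per-object structure."""
    return [v for values in asset.values() for v in values]


def edit_source(asset: dict[str, list[float]], obj: str, scale: float) -> dict[str, list[float]]:
    """Non-destructive edit applied to the editable source asset."""
    out = dict(asset)
    out[obj] = [v * scale for v in out[obj]]
    return out


source = {"wheel": [1.0, 2.0], "coin": [3.0]}
runtime = bake_for_runtime(source)          # structure is gone here
source = edit_source(source, "wheel", 2.0)  # so edit the source instead
runtime = bake_for_runtime(source)          # then re-bake for the engine
print(runtime)  # [2.0, 4.0, 3.0]
```

Once the data is in the baked form, the "wheel" edit would have no clean target, which is why the authoring pipeline wants to keep the native formats in the loop.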
I worked in the Autodesk Games Technology Group, and my experience there was that middleware is a tough place to be, because you’re talking to engine directors who don’t want you to touch their game budget.
We do have a few games customers using us, but all of that’s at the authoring end of the pipeline. Some of it’s about how to massage an asset so that you can hit a good runtime target. Currently we’re working with the Simplygon guys to hook some of their tools in.
One thing we introduced at the show is an integration with Unreal Engine, where we hook Fabric into Blueprint and push things in. So if you have something like Alembic as an asset representation, we can push that into Unreal Engine. That’s not for compiled execution on a console, though; it’s more for running on a PC for VR-type experiences.
My interest in runtimes is more in customizing Fabric to push out different data representations, something you can give to the engine team and tell them that’s what they can ingest.
However, on the art side, one of the biggest problems in game development is the gap between what the artist meant to do and what it looks like once it’s in the engine. Tools that can solve that are extremely valuable. For us, if we can help retain the artistic intent and get something out that isn’t an abstract representation, but is, let’s say, C# pushed out as a representation, that’s something an engine team can ingest and where you can get a reliable result.
That’s what I was trying to solve seven or eight years ago at Autodesk: how do I author something here and have it look how I meant it to look at runtime? That still hasn’t been fixed.
Future of Fabric
There are a few things. There’s what we’re doing within studios, with Fabric as a data transport layer through a complex pipeline. Then there’s the Canvas side of things. As people start authoring more tools with that and sharing them, we’re going to introduce a developer platform so that people can actually give away or sell their own Fabric plugins.
So think of Fabric as a universal plugin: that mechanism would allow people to build a tool once and deploy it to all the platforms that we support.
VR and AR
We’ll see what happens with VR and AR. The biggest problem around that is where the tools for producing high-end content for those experiences will come from. A VR experience is more linear content; it’s not a game, so it’s not all emergent, scripted behavior.
When we talk to people making those experiences, they’ll tell us that they have a high-end director who wants to make a VR experience, but he doesn’t have any of the camera tools he expects to have. Where do those come from?
If you can have someone with a head-mounted display and some human interface in front of them, then you can start doing set building and level editing within the experience. I think that’s the thing right now: people are authoring an experience on a screen in the traditional model, but then the playback is a first-person, very different experience. To build those well, I think you have to be in that rich environment.