NVIDIA Talks the Changes in Content Creation & AI Workflows

NVIDIA's Ashlee Martino-Tarr and Sabour Amirazodi shared how they joined the company and discussed how rendering and material creation workflows have changed in recent years.

How I Got Into NVIDIA

Ashlee: Hi, I'm Ashlee Martino-Tarr. I am a workflow specialist at NVIDIA, and I'm here today at Adobe MAX to show people all the cool things they can do with generative AI.

Sabour: My name is Sabour Amirazodi, and I've been with NVIDIA since 2020!

Ashlee: I joined NVIDIA through my school. I went to the Academy of Art University, and I studied 3D animation and visual effects. From there, my portfolio was noticed by someone at NVIDIA, and I landed an interview with them. They wanted to hire me, but at the time I was under a non-compete contract and couldn't go work for NVIDIA. When that contract was over, though, I contacted NVIDIA again, and they said yes.

I helped build materials for NVIDIA's Material Definition Language library, known as MDL. I also did a lot of rendering and surfacing tasks for them and worked on a ton of really interesting projects. Then 2020 came around, the pandemic came around, and generative AI came around. One day, I decided to test the newest thing coming out of Stability AI, Stable Diffusion. I was completely hooked. I started researching generative AI and everything happening in the AI space.

Since then, my role at NVIDIA has changed. I've become a workflow specialist, where I try to figure out how to take the most cutting-edge technology and tools and push them into existing pipelines.

Sabour: In 2017, I was contacted by Michael Steele, who worked in developer relations at NVIDIA. He asked if I wanted to present at Adobe MAX and show some of my work. I said yes immediately – I already used NVIDIA hardware and was excited. I gathered all my artwork and projects, loaded them onto a hard drive, and showed up with no specific demo.

Conventions like Adobe MAX made a huge impact on my career. This event has a special place in my heart. It's where I met creative professionals and industry experts, and where I learned so much. I learned Cinema 4D and Adobe software by attending sessions and talking to people.

AI in Art

Ashlee: The biggest shift is toward multi-modal models and MoE (mixture-of-experts) models. These models can understand context from different inputs and generate different types of outputs. A short time ago, if you wanted to create an image, you needed one model; for videos, a different model. Today, we have world models and mixture-of-experts models that let you do more than one thing in the same model.

One really exciting area we’re exploring is using agents and agentic AI to help create images, stories, videos, and more. These models don’t just produce a single output; they can produce useful data that lets creators control and iterate on their designs.

An example I love is differential diffusion: the ability to extract lighting passes – diffuse, specular, and shadow information – from any image. You can re-light a scene after the fact, warm some lights, remove others, and recompose the image with its integrity intact. This is incredibly powerful.
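To make the idea concrete, here is a minimal sketch of what re-compositing from extracted passes can look like. The function name, the additive diffuse-plus-specular model, and the per-channel tint are illustrative assumptions, not the actual differential-diffusion pipeline:

```python
import numpy as np

def recompose(diffuse, specular, shadow, tint=(1.0, 1.0, 1.0), spec_gain=1.0):
    """Re-composite an image from lighting passes (H, W, 3 float arrays).

    diffuse  -- extracted diffuse lighting pass
    specular -- extracted specular highlights pass
    shadow   -- shadow/occlusion pass in [0, 1] (1 = fully lit)
    tint     -- per-channel multiplier to warm or cool the diffuse light
    spec_gain -- scalar boost for the highlights

    A simple additive model, assumed here for illustration.
    """
    img = diffuse * np.asarray(tint) * shadow + specular * spec_gain
    return np.clip(img, 0.0, 1.0)

# Warm the lights slightly and boost highlights on a dummy 4x4 "scene":
diffuse = np.full((4, 4, 3), 0.5)
specular = np.full((4, 4, 3), 0.2)
shadow = np.ones((4, 4, 3))
out = recompose(diffuse, specular, shadow, tint=(1.1, 1.0, 0.9), spec_gain=1.5)
```

Because each pass stays separate until the final multiply-and-add, every adjustment (tint, gain, shadow strength) can be iterated on without re-generating the image.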

Sabour: Generative AI has also changed workflows. Tools like Generative Extend in Adobe Premiere solve problems like missing handles or transitions that aren’t long enough.

Advances in Rendering

Sabour: In my industry, I did a lot of visual effects and production work. Around that time, I had a project for Google where I had to create a full VR 180 experience. Rendering it was taking 12 or 13 minutes per frame, and I had no way of finishing it on time. Other companies had already turned down the job, and I said yes before realizing I had no way to render it fast enough.

Then Octane Render came out. It was fully GPU-accelerated. I discovered that with a high-end GPU, rendering was dramatically faster than CPU rendering. And I could scale it by adding multiple GPUs per machine. I ended up building a small render farm in my tiny studio. I brought render times down from 12 minutes per frame to one minute, and eventually to about 45 seconds. That was the difference between delivering the job and not delivering it.

I created visuals for stages and concerts. I did a big project for Pioneer DJ using LED walls and immersive content – roller coasters, spaceships, volcanoes, dinosaurs. It was fun, but it took a huge amount of time to render. GPU-accelerated engines like Octane made it possible. I used Cinema 4D and the entire Adobe suite to composite everything. As more tools became GPU-accelerated, I realized how much power GPUs had, and I hoped more software would take advantage of that.

Changes in the Material Production Process

Ashlee: Even before generative AI, Adobe used AI in Substance 3D Sampler for image-to-material generation. It could take a single photo and extract displacement, normal, and roughness maps using ray-tracing techniques. Those results are still extremely good today.
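The general idea behind extracting surface detail from a single image can be sketched with the classic height-to-normal conversion via finite differences. This is a generic technique, not Substance 3D Sampler's actual algorithm, and the `strength` parameter is an assumption for illustration:

```python
import numpy as np

def height_to_normal(height, strength=1.0):
    """Derive a tangent-space normal map from a 2D heightmap.

    Uses finite differences (np.gradient) to estimate slope, then
    normalizes (-dx, -dy, 1/strength) per pixel. Higher strength
    exaggerates surface detail.
    """
    dy, dx = np.gradient(height.astype(np.float64))
    nz = np.ones_like(dx) / max(strength, 1e-6)
    n = np.stack([-dx, -dy, nz], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    # Remap from [-1, 1] to [0, 1] for storage as an RGB image.
    return n * 0.5 + 0.5

# A flat heightmap yields the uniform "neutral" normal (0.5, 0.5, 1.0):
flat = np.zeros((8, 8))
nm = height_to_normal(flat)
```

A real material pipeline would pair this with albedo de-lighting and roughness estimation, which require learned models rather than pure gradients.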

But the tools aren’t perfect. Many tasks in surfacing remain manual, repetitive, and tedious. AI is beginning to help with things like: cleaning edges, improving seamless tiling, auto-tile shuffling for large scenes, patching seams intelligently, and material blending over big environments. And I think the representation of materials will evolve. Beyond PBR, we’re seeing Gaussian Splatting, NeRFs, and new ways to capture material properties directly from images or scanned 3D files.
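The seamless-tiling problem mentioned above has a well-known manual diagnostic that AI-assisted tools aim to automate: offset the texture by half its size so the borders (where seams live) land in the center, then patch them. A minimal sketch, with an edge-match check that is only a rough heuristic for tileability, not any specific tool's implementation:

```python
import numpy as np

def wrap_offset(texture):
    """Shift a texture by half its size on both axes, moving the
    original borders to the center where seams become visible and
    easy to inspect or patch."""
    h, w = texture.shape[:2]
    return np.roll(np.roll(texture, h // 2, axis=0), w // 2, axis=1)

def tiles_seamlessly(texture, tol=1e-6):
    """Rough heuristic: opposite edges should match closely for the
    texture to repeat without an obvious seam."""
    return (np.allclose(texture[0], texture[-1], atol=tol)
            and np.allclose(texture[:, 0], texture[:, -1], atol=tol))

# A gradient texture has mismatched opposite edges, so it will seam:
tex = np.arange(16, dtype=float).reshape(4, 4)
```

Applying `wrap_offset` twice returns the original texture (for even dimensions), so the inspection step is losslessly reversible.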

One area I really hope AI enters is the extremely technical, thankless parts of 3D: painting skin weights for rigs, retopology, UV unwrapping, projection cleanup. Imagine riggers focusing on IK systems instead of weight painting. Imagine never worrying about UVs again. There’s promising research happening already.

Ashlee Martino-Tarr, Workflow Specialist at NVIDIA

Sabour Amirazodi, Technical Marketing Engineer at NVIDIA

