NVIDIA is at SIGGRAPH, the annual ACM computer graphics conference, bringing the power of AI to computer graphics with a range of capabilities that will ease content creation, speed workflows and reduce costs for millions of content creators.
During its press event, NVIDIA announced:
Supercharged rendering: OptiX 5.0’s new AI-accelerated denoising, running on the NVIDIA DGX Station, delivers the rendering performance of 150 servers.
NVIDIA announced that it is bringing the power of artificial intelligence to rendering with the launch of NVIDIA OptiX™ 5.0 SDK with powerful new ray-tracing capabilities.
Running OptiX 5.0 on the NVIDIA DGX Station™ — the company’s recently introduced deskside AI workstation — will give designers, artists and other content-creation professionals the rendering capability of 150 standard CPU-based servers. This access to GPU-powered accelerated computing will provide extraordinary ability to iterate and innovate with speed and performance, at a fraction of the cost.
“Developers using our platform can enable millions of artists and designers to access the capabilities of a render farm right at their desk,” said Bob Pette, Vice President, Professional Visualization, NVIDIA. “By creating OptiX-based applications, they can bring the extraordinary power of AI to their customers, enhancing their creativity and dramatically improving productivity.”
OptiX 5.0’s new ray tracing capabilities will speed up the process required to visualize designs or characters, dramatically increasing a creative professional’s ability to interact with their content. It features a new AI denoising capability to accelerate the removal of graininess from images, and brings GPU-accelerated motion blur for realistic animation effects.
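The value of denoising comes from how slowly Monte Carlo rendering noise falls with sample count: error shrinks roughly with the square root of the number of samples, so halving the noise costs four times the samples. A minimal illustration of that scaling (plain Python, purely illustrative, not the OptiX API):

```python
import random, math

def mc_estimate(n_samples, seed=0):
    """Monte Carlo estimate of the integral of x^2 over [0, 1] (true value 1/3)."""
    rng = random.Random(seed)
    return sum(rng.random() ** 2 for _ in range(n_samples)) / n_samples

def rms_error(n_samples, trials=200):
    """Root-mean-square error of the estimator across independent trials."""
    errs = [(mc_estimate(n_samples, seed=t) - 1 / 3) ** 2 for t in range(trials)]
    return math.sqrt(sum(errs) / trials)

# Error decays ~ 1/sqrt(N): 16x more samples only quarters the noise, which is
# why a denoiser that cleans up low-sample frames saves so much render time.
err_64 = rms_error(64)
err_1024 = rms_error(1024)
print(err_64, err_1024, err_64 / err_1024)
```

An AI denoiser sidesteps this curve by producing a clean image from a cheap, low-sample render instead of paying for the extra samples.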
OptiX 5.0 will be available at no cost to registered developers in November.
Rendering Appliance Powers AI Workflows
By running NVIDIA OptiX 5.0 on a DGX Station, content creators can significantly accelerate training, inference and rendering. A whisper-quiet system that fits under a desk, NVIDIA DGX Station uses the latest NVIDIA Volta-generation GPUs, making it the most powerful AI rendering system available.
To match the rendering performance of a DGX Station, content creators would need a render farm of more than 150 servers requiring some 200 kilowatts of power, compared with 1.5 kilowatts for a DGX Station. Purchasing and operating that render farm would cost some $4 million over three years, compared with less than $75,000 for a DGX Station.
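The quoted savings work out to roughly two orders of magnitude; a quick back-of-the-envelope check using only the figures above:

```python
# Figures quoted for a 150-server render farm vs. a single DGX Station.
farm_power_kw, dgx_power_kw = 200, 1.5
farm_cost_3yr, dgx_cost_3yr = 4_000_000, 75_000

power_ratio = farm_power_kw / dgx_power_kw  # ~133x more power draw
cost_ratio = farm_cost_3yr / dgx_cost_3yr   # ~53x more expensive over 3 years
print(round(power_ratio), round(cost_ratio))
```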
Industry Support for AI-based Graphics
NVIDIA is working with many of the world’s most important technology companies and creative visionaries from Hollywood studios to set the course for the use of AI for rendering, design, character generation and the creation of virtual worlds. They voiced broad support for the company’s latest innovations:
- “AI is transforming industries everywhere. We’re excited to see how NVIDIA’s new AI technologies will improve the filmmaking process.” — Steve May, Vice President and CTO, Pixar
- “We’re big fans of NVIDIA OptiX. It greatly reduced our development cost while porting the ray tracing core of our Clarisse renderer to NVIDIA GPUs and offers extremely fast performance. With the potential to significantly decrease rendering times with AI-accelerated denoising, OptiX 5 is very promising as it can become a game changer in production workflows.” — Nicolas Guiard, Principal Engineer, Isotropix
- “AI has the potential to turbocharge the creative process. We see a future where our artists’ creativity is unleashed with AI — a future where paintbrushes can truly ‘think’ and empower artists to create images and experiences we could hardly imagine just a few years ago. At Technicolor, we share NVIDIA’s vision to chart a path that enhances the toolset for creatives to deepen audience experiences.” — Sutha Kamal, Vice President, Technology Strategy, Technicolor
New Quadro and TITAN Xp external GPU solutions bring new creative power to millions of artists and designers.
NVIDIA announced that 25 million artists and designers can now easily upgrade the capability of their notebooks to support new workflows such as video editing, interactive rendering, VR content creation, AI development and more.
Creative professionals with underpowered graphics can now harness the power of NVIDIA TITAN X or NVIDIA Quadro® graphics cards through an external GPU (eGPU) chassis and dramatically boost the performance of their applications.
“While more computing power than ever is needed for VR, photoreal rendering and AI workflows, mobile systems are getting thinner and lighter, with limited performance and memory,” said Bob Pette, Vice President, Professional Visualization, NVIDIA. “Our eGPUs can now solve this problem, enabling creatives to plug into our most capable GPUs so they can do their best work on the most graphically demanding applications.”
Quadro graphics will be available through qualified eGPU partners for those who use high-end content-creation applications for animation, color grading and rendering as well as CAD and simulation apps. The qualification process assures users of the compatibility, reliability and performance of their Quadro eGPU solution.
To ensure prosumers enjoy great performance with applications such as Autodesk Maya and Adobe® Premiere Pro, NVIDIA is also releasing a new performance driver for TITAN X to make it faster than ever before.
Available starting in September, Quadro eGPU solutions will be available through qualified partners such as Bizon, One Stop Systems/Magma and Sonnet with more to come.
New simulation for AI in VR: an Isaac-trained robot in Holodeck.
NVIDIA Isaac is an AI-enabled robot that has been trained using a powerful simulation environment called the Isaac Lab.
Project Holodeck — a collaborative and physically accurate virtual reality environment — enables humans to enter a simulation and interact with robots in VR the same way they will in real life.
You’ll be able to see how these two technologies work together by interacting with Isaac in two ways.
You’ll be able to go head-to-head with Isaac in the physical world on the show floor. And you’ll be able to strap on a VR headset, and enter a simulation via Project Holodeck.
Deep learning and computer vision have been combined to teach a robot to sense and respond to human presence, identify the state of play of the game, understand the game’s legal moves, and determine which tile to select and how to place it.
The key: a pair of neural networks that help Isaac not only understand the game, but understand how to put that understanding to work when interacting with humans.
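The article doesn’t detail the networks themselves, but the described split — one network to perceive the game state from camera input, another to pick a legal move from that state — can be sketched generically. A minimal sketch with hypothetical names, shapes and random weights (plain NumPy, not NVIDIA’s implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def perception_net(image, w):
    """Hypothetical perception network: maps a camera image to a game-state vector."""
    return np.tanh(image.flatten() @ w)           # (H*W,) @ (H*W, state_dim)

def policy_net(state, w, legal_mask):
    """Hypothetical policy network: scores candidate moves, masking illegal ones."""
    scores = state @ w                            # (state_dim,) @ (state_dim, n_moves)
    scores[~legal_mask] = -np.inf                 # never pick an illegal move
    return int(np.argmax(scores))

# Toy example: an 8x8 "image", 16-dim state, 10 candidate moves, moves 3 and 7 legal.
image = rng.random((8, 8))
w_percep = rng.standard_normal((64, 16))
w_policy = rng.standard_normal((16, 10))
legal = np.zeros(10, dtype=bool)
legal[[3, 7]] = True

state = perception_net(image, w_percep)
move = policy_net(state, w_policy, legal)
print(move)  # always one of the legal moves, 3 or 7
```

The masking step is the point: the perception network can be trained independently of the policy network, and the rules of the game are enforced at selection time rather than learned from scratch.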
New research in AI in the areas of facial animation, denoising, anti-aliasing and light transport.
The same GPUs that put games on your screen could soon be used to harness the power of AI to help game developers and filmmakers move faster, spend less and create richer experiences.
At SIGGRAPH 2017 this week, NVIDIA is showcasing research that makes it far easier to animate realistic human faces, simulate how light interacts with surfaces in a scene and render realistic images more quickly.
NVIDIA is combining its expertise in AI with its long history in computer graphics to advance 3D graphics for games, virtual reality, movies and product design.
Find more details here.
Also at SIGGRAPH, NVIDIA is at the center of the VR ecosystem. Find NVIDIA Research in the Emerging Tech area demonstrating its work in optics and haptics for AR and VR. It’s also showing the future of storytelling with ZCAM integration of the VRWorks 360 Video SDK.