NVIDIA Unveils a New Method for Synthesizing Character-Scene Interactions

The framework enables physically simulated characters to perform scene interaction tasks in a natural and lifelike manner.

During the recently concluded SIGGRAPH 2023 conference, a team of researchers from NVIDIA and the Max Planck Institute introduced a novel method for synthesizing physical 3D character-scene interactions.

As outlined in the research paper shared by the team, the proposed framework leverages adversarial imitation learning and reinforcement learning to train physically simulated characters that perform scene interaction tasks in a natural and lifelike manner.
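To give a sense of how these two ingredients combine, here is a minimal sketch in the spirit of adversarial motion priors: a discriminator scores motion transitions, its output is converted into a "style" reward, and the policy is trained with reinforcement learning on a blend of that reward and the task reward. All names and network sizes below are illustrative assumptions, not code from the paper.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Scores character state transitions: high means 'looks like reference motion'."""
    def __init__(self, obs_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s: torch.Tensor, s_next: torch.Tensor) -> torch.Tensor:
        # The discriminator judges a (state, next state) transition pair.
        return self.net(torch.cat([s, s_next], dim=-1))

def style_reward(disc: Discriminator, s: torch.Tensor, s_next: torch.Tensor) -> torch.Tensor:
    # Map discriminator logits to a bounded, always-positive "realism" reward.
    p = torch.sigmoid(disc(s, s_next))
    return -torch.log(torch.clamp(1.0 - p, min=1e-4))

def total_reward(task_r: torch.Tensor, style_r: torch.Tensor,
                 w_task: float = 0.5, w_style: float = 0.5) -> torch.Tensor:
    # The RL policy maximizes a weighted sum of task progress and motion realism.
    return w_task * task_r + w_style * style_r
```

The discriminator itself is trained to separate transitions sampled from the reference motion dataset from transitions produced by the policy, which is what removes the need for hand-labeled motion data.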

The methodology involves learning scene interaction patterns from large, unstructured motion datasets, without manual annotation of the motion data. These interactions are learned using an adversarial discriminator that evaluates the realism of a motion within the context of a scene. "The key novelty involves conditioning both the discriminator and the policy networks on scene context," comments the team.
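A minimal sketch of what such conditioning could look like, assuming the scene context (e.g. the target object's pose relative to the character) is already encoded as a flat feature vector; the layout below is an assumption for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SceneConditionedDiscriminator(nn.Module):
    """Judges the realism of a motion transition *given* scene context features."""
    def __init__(self, obs_dim: int, scene_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * obs_dim + scene_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s: torch.Tensor, s_next: torch.Tensor, scene: torch.Tensor) -> torch.Tensor:
        # Concatenating scene features means a motion like sitting is only
        # judged realistic when it happens relative to an actual chair.
        return self.net(torch.cat([s, s_next, scene], dim=-1))
```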

"We demonstrate the effectiveness of our approach through three challenging scene interaction tasks: carrying, sitting, and lying down, which require coordination of a character's movements in relation to objects in the environment," reads the paper. "Our policies learn to seamlessly transition between different behaviors like idling, walking, and sitting."

YouTube channel Two Minute Papers recently shared a six-minute video exploring the framework in more detail. We highly recommend checking it out if you would like to learn more about NVIDIA's new method:

You can learn more and access the full paper here, and watch the supplementary video here.

Earlier this week, NVIDIA also released the source code for Neuralangelo, an AI model that turns 2D videos into 3D structures, "generating lifelike virtual replicas of buildings, sculptures, and other real-world objects."

Neuralangelo is based on instant neural graphics primitives, the technology behind NVIDIA's Instant NeRF, which turns 2D images into 3D models. The AI model selects several frames from a 2D video of an object or scene filmed from various angles. Once it has determined the camera position of each frame, it creates a rough 3D representation of the scene and then optimizes the render to sharpen the details. The result is a 3D object or large-scale scene that can be used in other software.
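That "rough first, then sharpen" progression can be illustrated with a simple coarse-to-fine schedule over the resolution levels of a multi-resolution hash grid, the data structure behind instant neural graphics primitives. The function below is a hypothetical sketch of that idea, not NVIDIA's actual code; the level counts and step values are made-up examples.

```python
# Hypothetical coarse-to-fine schedule: optimization starts with only the
# coarse hash-grid levels active and gradually enables finer ones, which is
# what lets the rough 3D representation sharpen into fine detail over time.
def active_levels(step: int, total_steps: int,
                  num_levels: int = 16, start_levels: int = 4) -> int:
    """Return how many hash-grid resolution levels are active at this step."""
    frac = min(step / total_steps, 1.0)
    return min(num_levels, start_levels + int(frac * (num_levels - start_levels)))

# Example: only 4 coarse levels at the start; all 16 by the end of training.
for step in (0, 25_000, 50_000, 100_000):
    print(step, active_levels(step, total_steps=100_000))
```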

Don't forget to join our 80 Level Talent platform and our Telegram channel, follow us on Threads, Instagram, Twitter, and LinkedIn, where we share breakdowns, the latest news, awesome artworks, and more.
