The actors train for 10 in-game years.
NVIDIA presented an intriguing approach called Conditional Adversarial Latent Models (CALM) for generating diverse and directable behaviors for interactive virtual characters.
CALM uses imitation learning to teach itself movements that "capture the complexity and diversity of human motion" and to enable direct control over character movements. The crazy part is that it is trained on a single A100 GPU for 5 billion steps, covering 10 in-simulation years (about 10 real days, but still). Once trained, the character can be controlled in-game.
The actors are first trained on motion capture data, which is then encoded. Next comes precision training, where the model learns to respond to controls. Finally, the character can follow intuitive commands, and it does so naturally: instead of snapping straight from sprinting to crouching, it slows down, bends its legs, and gradually comes to a stop.
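To make that pipeline a bit more concrete, here is a minimal toy sketch of the idea described above: an encoder that turns motion clips into latent codes, a low-level policy conditioned on those latents, a high-level step that maps a command to a latent, and a blend between latents so behaviors transition smoothly. This is not NVIDIA's implementation; every function name, dimension, and the blending trick are illustrative assumptions.

```python
# Toy sketch of a CALM-style control flow (illustrative only, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, OBS_DIM, ACT_DIM = 8, 16, 6  # assumed toy sizes

def encode_motion_clip(clip: np.ndarray) -> np.ndarray:
    """Stage 1 (stand-in): compress a mocap clip (frames x features) into a latent code."""
    z = clip.mean(axis=0)[:LATENT_DIM]      # toy pooling instead of a learned encoder
    return z / (np.linalg.norm(z) + 1e-8)   # keep latents normalized

def low_level_policy(obs: np.ndarray, z: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Stage 2 (stand-in): produce joint actions from the character state and a latent."""
    return np.tanh(w @ np.concatenate([obs, z]))

def high_level_controller(command: str, skill_latents: dict) -> np.ndarray:
    """Stage 3 (stand-in): map an intuitive command ('sprint', 'crouch', ...) to a skill latent."""
    return skill_latents[command]

def blend(z_from: np.ndarray, z_to: np.ndarray, steps: int):
    """Interpolate latents so the character eases from one behavior into the next
    instead of snapping, e.g. sprinting -> slowing down -> crouching."""
    for t in np.linspace(0.0, 1.0, steps):
        z = (1 - t) * z_from + t * z_to
        yield z / (np.linalg.norm(z) + 1e-8)

# Toy usage: transition from a "sprint" latent to a "crouch" latent.
skill_latents = {
    "sprint": encode_motion_clip(rng.normal(size=(120, OBS_DIM))),
    "crouch": encode_motion_clip(rng.normal(size=(120, OBS_DIM))),
}
policy_weights = rng.normal(size=(ACT_DIM, OBS_DIM + LATENT_DIM)) * 0.1
obs = rng.normal(size=OBS_DIM)

start = high_level_controller("sprint", skill_latents)
target = high_level_controller("crouch", skill_latents)
for z in blend(start, target, steps=5):
    action = low_level_policy(obs, z, policy_weights)
    print(np.round(action, 2))
```

The point of the sketch is simply the division of labor: the low-level policy only ever sees latents, while commands live at the high level, which is what lets the character shift between behaviors without hand-scripted animation logic.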
This research is by no means perfect yet, but it is a great step toward self-learning AI characters that can be controlled in games without writing long lines of code.
Learn more about it here.