This AI Can Capture Your Motion Without Cameras

Check out this AI that can bring your movements into a virtual world using just 6 inertial sensors.

Xinyu Yi, Yuxiao Zhou, and Feng Xu from Tsinghua University have unveiled their new AI called TransPose, a DNN-based approach that performs full motion capture at 90 FPS. According to the demonstration video, the system works in real-time and can translate your movements into a virtual world accurately and precisely, albeit with a small delay of less than half a second. The demonstration also walks through everyday movements, showing the variety of motions TransPose can handle.

The coolest part about this AI is that it doesn't require any cameras to capture motion. Instead, TransPose uses 6 Inertial Measurement Units (IMUs) mounted on the left and right forearms, the left and right lower legs, the head, and the pelvis. Each IMU reports its orientation and acceleration, and the network uses these readings to reconstruct the wearer's pose and translate it into the virtual world.
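To make the sensor setup concrete, here is a minimal sketch of how readings from the six IMUs might be packed into a per-frame feature vector for a pose network. The sensor names, the root-relative normalization, and the 72-value layout are illustrative assumptions, not the authors' exact preprocessing.

```python
import numpy as np

# Hypothetical sensor names matching the mounting points described above.
SENSORS = ["l_forearm", "r_forearm", "l_leg", "r_leg", "head", "pelvis"]

def frame_to_features(rotations, accelerations, root="pelvis"):
    """rotations: dict name -> 3x3 rotation matrix (sensor orientation)
    accelerations: dict name -> 3-vector (linear acceleration)
    Returns a flat feature vector: 6 sensors x (9 rotation + 3 acceleration) = 72 values."""
    root_rot = rotations[root]
    feats = []
    for name in SENSORS:
        # Express each sensor relative to the root sensor so the features do not
        # depend on the wearer's global facing direction (a common normalization).
        rel_rot = root_rot.T @ rotations[name]
        rel_acc = root_rot.T @ accelerations[name]
        feats.append(rel_rot.reshape(-1))
        feats.append(rel_acc)
    return np.concatenate(feats)  # shape (72,)

# Example with dummy identity orientations and zero accelerations.
rots = {n: np.eye(3) for n in SENSORS}
accs = {n: np.zeros(3) for n in SENSORS}
print(frame_to_features(rots, accs).shape)  # (72,)
```

Normalizing against the pelvis sensor keeps the input independent of which way the wearer happens to face, which is a common trick in inertial pose estimation.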

As a purely inertial sensor-based approach, the system does not suffer from occlusion, challenging environments (which constrain fixed camera setups), or multi-person ambiguities; it achieves long-range capture with real-time performance and even works in the dark.

"For body pose estimation, we propose a multi-stage network that estimates leaf-to-full joint positions as intermediate results. This design makes the pose estimation much easier, and thus achieves both better accuracy and lower computation cost," commented the team on the methods they used. "For global translation estimation, we propose a supporting-foot-based method and an RNN-based method to robustly solve for the global translations with a confidence-based fusion technique."

Learn more about TransPose here. Also, don't forget to join our new Reddit page and our new Telegram channel, and follow us on Instagram and Twitter, where we are sharing breakdowns, the latest news, awesome artworks, and more.
