Tracking Everything Everywhere All at Once
In their recent paper called Tracking Everything Everywhere All at Once, researchers from Google, Cornell University, and UC Berkeley presented a "new test-time optimization method for estimating dense and long-range motion from a video sequence."
Dense, long-range video tracking remains difficult to achieve reliably, so the researchers propose OmniMotion, a motion representation that enables accurate, full-length motion estimation for every pixel in a video.
"OmniMotion represents a video using a quasi-3D canonical volume and performs pixel-wise tracking via bijections between local and canonical space. This representation allows us to ensure global consistency, track through occlusions, and model any combination of camera and object motion."
The authors also show that pseudo-depth visualizations can be extracted from the learned quasi-3D representation.
While the method outperforms prior approaches, according to the researchers, it still has room for improvement: it currently struggles with rapid, highly non-rigid motion and with thin structures.
You can check how well OmniMotion tracks motion in the demos on the paper's project page.