Google's Researchers Present a New Method For Interacting With Objects in Images

This generative model can turn a still photo into a seamless looping video or an interactive picture.

Researchers Zhengqi Li, Richard Tucker, Noah Snavely, and Aleksander Holynski from the Google Research team have unveiled Generative Image Dynamics, a new method that models an image-space prior on scene dynamics, enabling it to transform a single still photo into a seamless looping video or an interactive dynamic scene.

According to the team, the prior was trained on a dataset of motion trajectories extracted from real-life video sequences featuring natural, oscillating motions such as those of trees, flowers, candles, and wind-blown clothing. These trajectories can then be used to convert static images into seamlessly looping videos, slow-motion clips, or interactive experiences that let users manipulate elements within the picture.

"Given a single image, our trained model uses a frequency-coordinated diffusion sampling process to predict a per-pixel long-term motion representation in the Fourier domain, which we call a neural stochastic motion texture," commented the team. "This representation can be converted into dense motion trajectories that span an entire video. Along with an image-based rendering module, these trajectories can be used for a number of downstream applications, such as turning still images into seamlessly looping dynamic videos, or allowing users to realistically interact with objects in real pictures."

Click here to learn more about Generative Image Dynamics and try out its interactive image capabilities. Also, don't forget to join our 80 Level Talent platform and our Telegram channel, and follow us on Instagram, Twitter, and LinkedIn, where we share breakdowns, the latest news, awesome artworks, and more.
