Using AI to Transform Standard Video Into Slow Motion
19 June, 2018
News

Researchers from NVIDIA have presented a deep learning-based system that generates high-quality slow-motion video from standard 30-frame-per-second footage, and the method is reported to outperform several state-of-the-art techniques. The team will present the new approach at the annual Computer Vision and Pattern Recognition (CVPR) conference in Salt Lake City, Utah, this week.

Abstract

Given two consecutive frames, video interpolation aims at generating intermediate frame(s) to form both spatially and temporally coherent video sequences. While most existing methods focus on single-frame interpolation, we propose an end-to-end convolutional neural network for variable-length multi-frame video interpolation, where the motion interpretation and occlusion reasoning are jointly modeled. We start by computing bi-directional optical flow between the input images using a U-Net architecture. These flows are then linearly combined at each time step to approximate the intermediate bi-directional optical flows. These approximate flows, however, only work well in locally smooth regions and produce artifacts around motion boundaries. To address this shortcoming, we employ another U-Net to refine the approximated flow and also predict soft visibility maps. Finally, the two input images are warped and linearly fused to form each intermediate frame. By applying the visibility maps to the warped images before fusion, we exclude the contribution of occluded pixels to the interpolated intermediate frame to avoid artifacts. Since none of our learned network parameters are time-dependent, our approach is able to produce as many intermediate frames as needed. We use 1,132 video clips with 240-fps, containing 300K individual video frames, to train our network. Experimental results on several datasets, predicting different numbers of interpolated frames, demonstrate that our approach performs consistently better than existing methods.

You can find the full paper here.
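
For intuition, here is a minimal sketch in PyTorch of the interpolation step the abstract describes: the bidirectional flows are linearly combined at time t to approximate the intermediate flows, a second network refines them and predicts soft visibility maps, and the two warped inputs are fused with occluded pixels masked out. The U-Net architectures and trained weights are not reproduced here; `refine_net` (and `flow_net`, referenced later) are hypothetical stand-ins for the paper's two U-Nets.

import torch
import torch.nn.functional as F

def backward_warp(img, flow):
    """Warp `img` (N,C,H,W) by backward sampling along `flow` (N,2,H,W)."""
    n, _, h, w = img.shape
    # Base sampling grid of pixel coordinates.
    gy, gx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((gx, gy), dim=0).float().to(img.device)   # (2,H,W)
    coords = grid.unsqueeze(0) + flow                            # (N,2,H,W)
    # Normalize coordinates to [-1, 1] as grid_sample expects.
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    sample_grid = torch.stack((coords_x, coords_y), dim=-1)      # (N,H,W,2)
    return F.grid_sample(img, sample_grid, align_corners=True)

def interpolate_frame(i0, i1, flow_01, flow_10, refine_net, t):
    """Synthesize the frame at time t in (0,1) between frames i0 and i1."""
    # Linearly combine the bidirectional flows to approximate the
    # intermediate flows; per the abstract, this only holds in locally
    # smooth regions, hence the refinement step below.
    flow_t0 = -(1 - t) * t * flow_01 + t * t * flow_10
    flow_t1 = (1 - t) ** 2 * flow_01 - t * (1 - t) * flow_10
    # Second U-Net refines the approximate flows and predicts soft
    # visibility maps v_t0, v_t1 in [0,1] (occlusion reasoning).
    flow_t0, flow_t1, v_t0, v_t1 = refine_net(i0, i1, flow_t0, flow_t1)
    # Warp both inputs to time t and fuse, excluding occluded pixels.
    w0 = backward_warp(i0, flow_t0)
    w1 = backward_warp(i1, flow_t1)
    num = (1 - t) * v_t0 * w0 + t * v_t1 * w1
    den = (1 - t) * v_t0 + t * v_t1
    return num / den.clamp(min=1e-6)

Because t enters only as a scalar weight and no learned parameter depends on it, the same trained networks can produce as many intermediate frames as needed, exactly as the abstract notes.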

Using NVIDIA Tesla V100 GPUs and the cuDNN-accelerated PyTorch deep learning framework, the team trained their system on over 11,000 videos of everyday and sports activities shot at 240 frames per second. Once trained, the convolutional neural network predicted the extra frames.
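
The 240-fps footage is what makes this supervised: frames that actually exist in the high-speed clips serve as ground truth for the interpolated output. A minimal training sketch under that assumption follows; only a plain L1 reconstruction term is shown (the paper combines several loss terms), and `loader`, `flow_net`, and `refine_net` are hypothetical stand-ins, with t a scalar per batch.

import torch

# flow_net / refine_net: the two U-Nets sketched above (stand-ins).
# loader yields (I0, It, I1, t) tuples cut from 240-fps training clips.
optimizer = torch.optim.Adam(
    list(flow_net.parameters()) + list(refine_net.parameters()), lr=1e-4)

for i0, i_t, i1, t in loader:
    flow_01, flow_10 = flow_net(i0, i1)   # bidirectional optical flow
    pred = interpolate_frame(i0, i1, flow_01, flow_10, refine_net, t)
    loss = (pred - i_t).abs().mean()      # L1 reconstruction term only
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()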

The team used a separate dataset to validate the accuracy of their system.

The result can make videos shot at a lower frame rate look more fluid and less blurry.
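A hypothetical driver, reusing the sketch above, shows the arithmetic: inserting seven evenly spaced intermediate frames between each consecutive pair of a 30-fps clip yields 240-fps material, i.e., 8x slow motion when played back at the original rate. Here `frames` is an assumed list of input frame tensors.

# 8x slow motion from 30-fps input, reusing the hypothetical sketch above.
slow = []
for i0, i1 in zip(frames[:-1], frames[1:]):
    flow_01, flow_10 = flow_net(i0, i1)   # bidirectional flow, once per pair
    slow.append(i0)
    for i in range(1, 8):                 # seven intermediates per pair
        t = i / 8                         # t = 1/8, 2/8, ..., 7/8
        slow.append(interpolate_frame(i0, i1, flow_01, flow_10, refine_net, t))
slow.append(frames[-1])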

To help demonstrate the research, the team took a series of clips from The Slow Mo Guys, a popular slow-motion science and technology YouTube series created by Gavin Free and starring Free and his friend Daniel Gruchy, and made the videos even slower.

The method can take everyday videos of life’s most precious moments and slow them down to look like your favorite cinematic slow-motion scenes, adding suspense, emphasis, and anticipation.

Source: NVIDIA

You can get more details on the system here.
