
Using AI for Real-Time 3D Face Reconstruction

Take a look at a work that proposes using a convolutional neural network for real-time 3D face reconstruction and dense facial alignment.

What is your take on facial alignment and real-time 3D face reconstruction? There are plenty of possibilities here, but are such things even practical today? Take a look at a work that proposes using a convolutional neural network to handle both tasks jointly. The method is said to produce high-quality results while processing each image in under 10 milliseconds. What’s the magic here?

First, let’s check out a quick introduction from Two Minute Papers:

Here is the abstract:

Joint 3D Face Reconstruction and Dense Alignment with Position Map Regression Network 

We propose a straightforward method that simultaneously reconstructs the 3D facial structure and provides dense alignment. To achieve this, we design a 2D representation called UV position map which records the 3D shape of a complete face in UV space, then train a simple Convolutional Neural Network to regress it from a single 2D image. We also integrate a weight mask into the loss function during training to improve the performance of the network. Our method does not rely on any prior face model and can reconstruct full facial geometry along with semantic meaning. Meanwhile, our network is very lightweight and takes only 9.8 ms to process an image, which is far faster than previous works. Experiments on multiple challenging datasets show that our method surpasses other state-of-the-art methods on both reconstruction and alignment tasks by a large margin.

Yao Feng, Fan Wu, Xiaohu Shao, Yanfeng Wang, Xi Zhou

You can find the full paper here.
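To make the core idea more concrete, here is a minimal sketch (in PyTorch; this is not the authors’ code) of the two ingredients the abstract describes: an encoder-decoder CNN that regresses a UV position map from a single RGB face crop, and a weighted MSE loss that emphasizes important facial regions. The layer sizes, the 256×256 map resolution, and the placeholder weight mask below are illustrative assumptions, not the paper’s exact architecture.

```python
import torch
import torch.nn as nn

class PositionMapRegressor(nn.Module):
    """Toy encoder-decoder: maps a 256x256 RGB face crop to a
    256x256x3 UV position map (an x, y, z coordinate per UV texel)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 256 -> 128
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 64
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 128
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 128 -> 256
        )

    def forward(self, img):
        return self.decoder(self.encoder(img))

def weighted_position_loss(pred, target, weight_mask):
    """Weighted MSE over the position map. The mask up-weights
    landmark texels and can zero out irrelevant regions, mirroring
    the weight-mask idea from the abstract."""
    return ((pred - target) ** 2 * weight_mask).mean()

# Toy usage with random tensors standing in for real training data.
net = PositionMapRegressor()
img = torch.rand(1, 3, 256, 256)     # cropped input face
gt_map = torch.rand(1, 3, 256, 256)  # ground-truth UV position map
mask = torch.ones(1, 1, 256, 256)    # placeholder per-texel weights
loss = weighted_position_loss(net(img), gt_map, mask)
loss.backward()
```

Because every texel of the UV map corresponds to a fixed semantic point on the face, dense alignment comes essentially for free: once the map is predicted, sparse landmarks can be read off at fixed UV indices, and the full mesh is the map itself. Regressing geometry as an image is also what keeps inference fast, since the whole pipeline is a single feed-forward CNN pass.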


What are your thoughts on the proposed method? What do you think about its limitations? 
