Rendering High-Fidelity Animatable 3D Faces in Real-Time

A team of researchers from the University of California, San Diego and Facebook Reality Labs has presented a novel way to render animatable 3D faces under new lighting environments in real-time.

Sai Bi, Stephen Lombardi, Shunsuke Saito, Tomas Simon, Shih-En Wei, Kevyn McPhail, Ravi Ramamoorthi, Yaser Sheikh, and Jason Saragih from UCSD and Facebook Reality Labs presented a method for building high-fidelity animatable 3D face models that can be posed and rendered under novel lighting environments in real-time. The team combined the strengths of two approaches: relightable models, which generalize to natural illumination conditions but are computationally expensive to render, and efficient high-fidelity face models, which do not generalize to novel lighting conditions. The result, in the authors' words, is a "...novel approach for generating dynamic relightable faces that exceeds state-of-the-art performance."

"Our method is capable of capturing subtle lighting effects and can even generate compelling near-field relighting despite being trained exclusively with far-field lighting data," comments the team.

Learn more by visiting the project's webpage.
