A team of scientists from the University of California, San Diego and Facebook Reality Labs presented a novel way to render faces under new lighting environments in real time.
Sai Bi, Stephen Lombardi, Shunsuke Saito, Tomas Simon, Shih-En Wei, Kevyn McPhail, Ravi Ramamoorthi, Yaser Sheikh, and Jason Saragih of UCSD and Facebook Reality Labs presented a method for building high-fidelity, animatable 3D face models that can be posed and rendered under novel lighting environments in real time. The team combined the strengths of two lines of work: relightable models, which generalize to natural illumination conditions but are computationally expensive to render, and efficient, high-fidelity face models, which do not generalize to novel lighting conditions. The result, in their words, is a "novel approach for generating dynamic relightable faces that exceeds state-of-the-art performance."
"Our method is capable of capturing subtle lighting effects and can even generate compelling near-field relighting despite being trained exclusively with far-field lighting data," comments the team.