Relightable Gaussian Codec Avatars produce high-quality 3D heads.
Researchers from Meta have presented Relightable Gaussian Codec Avatars, a method for creating high-quality relightable head avatars that can be animated to generate new expressions.
This approach uses a geometry model based on 3D Gaussians, which captures 3D-consistent, extremely fine details such as hair strands and skin pores even during facial movement. The researchers also showcased a new relightable appearance model based on learnable radiance transfer, which supports the diverse materials of human heads, such as skin, hair, and eyes, in a unified manner.
"Together with global illumination-aware spherical harmonics for the diffuse components, we achieve real-time relighting with all-frequency reflections using spherical Gaussians. This appearance model can be efficiently relit in real-time under both point light and continuous illumination. We further improve the fidelity of eye reflections and enable explicit gaze control by introducing relightable explicit eye models."
Relightable Gaussian Codec Avatars support control over expression, gaze, viewpoint, and lighting. The geometry is parameterized by 3D Gaussians and rendered with the Gaussian Splatting technique, according to the authors.
If you want to know more about 3D Gaussians, I suggest reading about 3D Gaussian Splatting, a rendering technique that represents a scene as a collection of 3D Gaussians, allowing novel views of a 3D scene to be synthesized from 2D footage. Simply put, it takes a set of images and turns them into a 3D scene without creating meshes, by converting a point cloud into Gaussians using machine learning. You can learn more about it here.
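At the heart of the splatting step is simple front-to-back alpha compositing of Gaussians at each pixel. Real implementations project anisotropic 3D covariances into 2D and rasterize them on the GPU; the toy sketch below uses isotropic 2D splats and an invented dictionary layout purely to show the blending logic:

```python
import numpy as np

def splat_pixel(pixel_xy, gaussians):
    """Blend a list of Gaussian splats at one pixel, front to back.

    Each Gaussian is a dict (hypothetical layout) with a 2D "center",
    a "radius" (standard deviation), an "opacity", an RGB "color",
    and a "depth" used for sorting.
    """
    color = np.zeros(3)
    transmittance = 1.0  # fraction of light not yet absorbed
    for g in sorted(gaussians, key=lambda g: g["depth"]):  # nearest first
        d2 = np.sum((pixel_xy - g["center"]) ** 2)
        alpha = g["opacity"] * np.exp(-0.5 * d2 / g["radius"] ** 2)
        color += transmittance * alpha * g["color"]
        transmittance *= 1.0 - alpha  # occlusion by this splat
    return color
```

Because every splat is a smooth, differentiable blob, the whole render is differentiable with respect to the Gaussian parameters, which is what lets the parameters be optimized from 2D images in the first place.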
The creators claim that their method outperforms existing approaches without compromising real-time performance, and that the avatars can be rendered in real time from the viewpoint of a VR headset.