Here is a look at a powerful rendering approach proposed by Facebook's research division.
Researchers at Facebook Reality Labs have recently shared more details on DeepFocus, a novel rendering system originally revealed in December 2018. The system uses AI to generate ultra-realistic visuals in varifocal headsets. The team is now preparing to present the next version of the system, which could help enable future high-fidelity displays for VR.
The system is described in a SIGGRAPH technical paper, “Neural Supersampling for Real-time Rendering.” It is based on a machine learning approach that turns low-resolution input images into high-resolution outputs for real-time rendering. "This upsampling process uses neural networks, training on the scene statistics, to restore sharp details while saving the computational overhead of rendering these details directly in real-time applications," wrote the team.
The team states that this new approach achieves "16x supersampling of rendered content with high spatial and temporal fidelity."
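To make the idea of learned upsampling concrete, below is a minimal sketch in PyTorch. It is not the network described in the SIGGRAPH paper; the `ToyUpsampler` class, its layer sizes, and the example resolutions are all illustrative assumptions. It only shows the general principle the article describes: a neural network takes a low-resolution rendered frame and produces an output with 4x the resolution in each dimension, i.e. 16x the pixel count.

```python
# Illustrative sketch of neural upsampling for rendered frames (assumes PyTorch).
# This is NOT Facebook's actual architecture; it only demonstrates the concept of
# learning to turn a low-resolution rendered image into a high-resolution one.
import torch
import torch.nn as nn


class ToyUpsampler(nn.Module):
    """Upsamples an RGB frame by `scale` in each dimension (scale=4 -> 16x pixels)."""

    def __init__(self, scale: int = 4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # Predict scale*scale*3 channels, then rearrange them into a larger image.
            nn.Conv2d(64, 3 * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, low_res: torch.Tensor) -> torch.Tensor:
        return self.body(low_res)


if __name__ == "__main__":
    model = ToyUpsampler(scale=4)
    low_res = torch.rand(1, 3, 270, 480)   # a low-resolution rendered frame
    high_res = model(low_res)              # -> shape (1, 3, 1080, 1920)
    print(high_res.shape)
```

In a real system such a model would be trained on pairs of low- and high-resolution renders of the same scenes, so that the network learns the scene statistics needed to restore sharp detail instead of rendering it directly.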
You can learn more here.