Figure | Working principle of the single-view neural illumination framework for light field display.
Caption
Schematic configuration of the proposed end-to-end neural illumination estimation and editing framework, designed to bridge real-world perception with high-fidelity display on a wearable device. The process begins at the Environmental Lighting stage, where the Computational Optical Perception (COP) Module captures sparse optical cues from a single observation view to infer a compact, parametric model of the scene's intrinsic illumination. Guided by these inferred parameters, the Generative Light Transport Synthesis (GLTS) Module uses a hybrid-guided generative network to create a photorealistic Synthesized Light Field. This virtual content is then processed and transmitted to the display terminal via the Edge Computing Module. Finally, as illustrated in the XR Device Diagram, the light is optically modulated and coupled for projection to the user, ensuring that the resulting virtual image maintains high photometric consistency with the real-world environment.
Credit
Cheng Wu et al.
Usage Restrictions
Credit must be given to the creator.
License
CC BY
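
For readers who want to map the caption's four stages onto code, the following minimal Python sketch mirrors that data flow under stated assumptions. Every name in it (IlluminationParams, cop_infer, glts_synthesize, edge_transmit, xr_project) is a hypothetical placeholder with toy stand-in logic, not the authors' implementation or API.

    # Hypothetical sketch of the caption's data flow; all names and logic
    # are illustrative placeholders, not the published system.
    from dataclasses import dataclass

    import numpy as np


    @dataclass
    class IlluminationParams:
        """Compact parametric model of the scene's intrinsic illumination."""
        direction: np.ndarray   # dominant light direction (unit vector)
        intensity: float        # scalar radiance estimate
        ambient: np.ndarray     # RGB ambient term


    def cop_infer(view: np.ndarray) -> IlluminationParams:
        """COP stage: infer illumination parameters from a single view.

        A real system would use a learned perception model; here we
        derive toy statistics from the input image instead.
        """
        gray = view.mean(axis=-1)
        return IlluminationParams(
            direction=np.array([0.0, -1.0, 0.0]),      # assumed overhead light
            intensity=float(gray.mean()),
            ambient=view.reshape(-1, 3).mean(axis=0),
        )


    def glts_synthesize(params: IlluminationParams,
                        shape=(64, 64, 3)) -> np.ndarray:
        """GLTS stage: stand-in for the hybrid-guided generative network
        that renders a light field under the inferred lighting."""
        color = np.clip(params.ambient * params.intensity, 0.0, 1.0)
        return color * np.ones(shape)


    def edge_transmit(light_field: np.ndarray) -> np.ndarray:
        """Edge computing stage: placeholder for encoding and transport
        of the synthesized content to the display terminal."""
        return light_field.astype(np.float32)


    def xr_project(frame: np.ndarray) -> None:
        """Display stage: optical modulation and coupling are hardware
        steps; here we just report the frame that would be projected."""
        print(f"projecting {frame.shape} frame, "
              f"mean luminance {frame.mean():.3f}")


    if __name__ == "__main__":
        observation = np.random.rand(64, 64, 3)   # single-view capture
        xr_project(edge_transmit(glts_synthesize(cop_infer(observation))))

Each stage consumes only the previous stage's output, mirroring the figure's left-to-right flow from environmental capture through synthesis and edge transmission to optical projection, which is why the caption can describe the framework as end-to-end.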