SFU researchers develop a new tool that brings Blender-like lighting control to any photograph
Simon Fraser University
Peer-Reviewed Publication
Lighting plays a crucial role in visual storytelling. Whether in film or photography, creators spend countless hours, and often significant budgets, crafting the perfect illumination for a shot. But once a photograph or video is captured, the illumination is essentially fixed. Adjusting it afterward, a task called “relighting,” typically demands time-consuming manual work by skilled artists.
Some generative AI tools attempt to tackle this task, relying on large-scale neural networks and billions of training images to guess how light might interact with a scene. But the process is largely a black box: users can’t control the lighting directly or understand how a result was generated, and the outputs can be unpredictable, straying from the original content of the scene. Getting the result one envisions often requires prompt engineering and trial and error, which hinders the user’s creative vision.
In a new paper to be presented at this year’s SIGGRAPH conference in Vancouver, researchers in the Computational Photography Lab at SFU offer a different approach to relighting. Their work, “Physically Controllable Relighting of Photographs,” brings the explicit control over lights that is standard in computer graphics software such as Blender or Unreal Engine to photo editing.
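To make concrete what “explicit control over lights” means in the computer graphics world, the short sketch below uses Blender’s Python API (bpy) to create, position, and tune a point light in a 3D scene. It is only an illustration of that style of control, not the researchers’ implementation, and the light’s placement and settings are arbitrary example values.

```python
import bpy

# Create a point-light data-block; the type could also be 'SUN', 'SPOT', or 'AREA'.
light_data = bpy.data.lights.new(name="KeyLight", type='POINT')
light_data.energy = 1000.0          # intensity in watts (example value)
light_data.color = (1.0, 0.9, 0.8)  # slightly warm white (example value)

# Wrap the light data in an object so it can be placed in the scene.
light_obj = bpy.data.objects.new(name="KeyLight", object_data=light_data)
light_obj.location = (2.0, -3.0, 4.0)  # explicit 3D position (example value)

# Link the object into the active scene collection so it affects renders.
bpy.context.collection.objects.link(light_obj)
```

Because each light is an explicit scene object, an artist can move it, recolor it, or dim it and simply re-render; this is the kind of direct, interpretable workflow the SFU approach aims to extend to ordinary photographs.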