image: FiLM-Scope simultaneously captures 48 multiperspective images of a surgical scene. Using a custom reconstruction algorithm, these images can be converted into a dense 3D model.
Credit: Clare B. Cook (Duke University).
For over a century, surgeons performing delicate procedures have relied on stereoscopic microscopes to gain a sense of depth. These tools mimic human vision by presenting slightly different images to each eye, allowing the brain to perceive three-dimensional structures—a crucial aid when working with fragile blood vessels or intricate brain tissue. Despite modern upgrades like digital displays and video capture, today’s operating microscopes still depend on the same core principle: two views, interpreted by the human brain.
But this approach has its limits. Although it provides good intuitive depth perception, it doesn’t allow surgeons—or surgical robots—to extract precise measurements from what they see. Estimating exact distances or shapes using just two images is difficult, especially in complex surgical environments where lighting is uneven, surfaces reflect harshly, and tools may block the view. These challenges have slowed progress in surgical automation and real-time feedback tools.
Pre-operative 3D scans, like MRIs or CTs, can help, but they don’t update during surgery as tissues shift or change. Optical coherence tomography (OCT) offers detailed real-time data, but it covers a small area and produces black-and-white images that can be hard to interpret. To meet the need for better 3D imaging that works during live surgery, researchers recently developed a new kind of surgical microscope called the Fourier lightfield multiview stereoscope, known as “FiLM-Scope.” Their report is published in Advanced Photonics Nexus.
The FiLM-Scope uses an array of 48 tiny cameras arranged in a grid, all imaging through a single high-throughput objective lens. Each camera views the surgical field from a slightly different angle, so every snapshot yields 48 simultaneous high-resolution images (12.5 megapixels each) of the same scene. The field of view is large, about 28 by 37 millimeters, with fine detail down to 22 microns, and the system can stream video at up to 120 frames per second.
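As a quick back-of-envelope check of those figures (purely illustrative arithmetic from the numbers quoted above, not a bandwidth or performance claim from the paper), each snapshot contains 48 × 12.5 megapixels of raw image data:

```python
# Back-of-envelope arithmetic from the figures quoted above.
cameras = 48
pixels_each = 12.5e6              # 12.5 MP per camera view

pixels_per_snapshot = cameras * pixels_each
print(f"{pixels_per_snapshot / 1e6:.0f} MP per snapshot")   # 600 MP

# Lateral sampling: ~28 mm field width at 22-micron detail.
fov_width_mm, detail_um = 28.0, 22.0
print(f"~{fov_width_mm * 1000 / detail_um:.0f} resolvable points across the field")
```

That works out to roughly 600 megapixels of scene data per snapshot, which gives a sense of why a dedicated reconstruction algorithm is needed to turn this stream into usable 3D maps.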
These multiple perspectives are processed by a specially designed algorithm that builds a detailed 3D map of the scene in real time. The algorithm is self-supervised, meaning it requires no labeled training data or pre-built models of the scene. It can reconstruct surface shapes with a precision of 11 microns over a depth range of one centimeter. Because each frame captures the full scene from many angles, users can digitally zoom or shift the viewpoint without moving the microscope, making surgery smoother and more efficient.
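The paper's actual reconstruction algorithm is not reproduced here, but the core idea behind self-supervised multiview depth estimation can be sketched in a few lines: at each pixel, pick the depth (disparity) hypothesis that makes the different camera views photometrically agree, using no ground-truth depth at all. Below is a deliberately simplified 1-D NumPy sketch of that plane-sweep idea; the function name, the wrap-around shift geometry, and the brute-force search are illustrative assumptions, not the FiLM-Scope method.

```python
import numpy as np

def plane_sweep_disparity(ref, views, baselines, candidates):
    """Toy 1-D plane sweep: for each candidate disparity, shift every
    side view back toward the reference and score photometric agreement.
    Each pixel keeps the candidate with the lowest total squared error.
    The only supervision signal is agreement between the views
    themselves, which is what makes the approach self-supervised."""
    costs = []
    for d in candidates:
        err = np.zeros_like(ref, dtype=float)
        for view, b in zip(views, baselines):
            # A scene point at disparity d appears shifted by -b*d in a
            # view at baseline b, so rolling by +b*d re-aligns it with
            # the reference. (np.roll wraps around; a real method would
            # mask image edges instead.)
            err += (np.roll(view, b * d) - ref) ** 2
        costs.append(err)
    cost = np.stack(costs)                      # (n_candidates, n_pixels)
    return np.asarray(candidates)[np.argmin(cost, axis=0)]
```

With a synthetic scene at constant disparity 2 and two side views at baselines 1 and 2, this sweep recovers disparity 2 at every pixel. A real system like the one described above must additionally handle occlusions, reflections, and varying depth, which is where the custom algorithm and the redundancy of 48 views come in.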
By turning standard images into precise 3D measurements, the FiLM-Scope could expand what’s possible in both manual and robotic microsurgery. Its flexible, data-rich imaging could also be valuable in other fields that depend on high-accuracy 3D visualization, from materials science to microfabrication.
For details see the original Gold Open Access article by C.B. Cook et al., “Fourier lightfield multiview stereoscope for large field-of-view 3D imaging in microsurgical settings,” Adv. Photon. Nexus 4(4), 046008 (2025), doi: 10.1117/1.APN.4.4.046008.
Journal: Advanced Photonics Nexus
Article Title: Fourier lightfield multiview stereoscope for large field-of-view 3D imaging in microsurgical settings
Article Publication Date: 30-Jun-2025