News Release

New AI image-enhancement method could help transportation systems see more clearly in tunnels

Peer-Reviewed Publication

Beijing Institute of Technology Press Co., Ltd

Dynamic range compression dual-domain attention network for tunnel extreme exposure image enhancement in transportation visual systems


Researchers have developed a dynamic range compression dual-domain attention network for enhancing tunnel images under extreme exposure conditions, a problem that continues to challenge transportation visual systems used in autonomous driving, traffic monitoring, and other machine-vision tasks. The new method is designed to address the so-called "black hole" and "white hole" effects that occur when dramatic brightness differences between tunnel entrances and interiors overwhelm image acquisition systems and degrade the quality of visual information.

As road transportation infrastructure continues to expand, tunnel scenarios have become more common and more important to transportation safety. Yet tunnels remain among the hardest visual environments for cameras and computer vision systems. At the entrance of a tunnel, exterior brightness can be far higher than the illumination inside, creating the black-hole effect, in which the interior appears excessively dark. At the exit, or in other strongly contrasted conditions, the opposite white-hole effect can occur, in which the scene is washed out by overexposure. For human drivers, these transitions can already be difficult. For transportation visual systems, including machine perception pipelines, they can seriously compromise the capture of lane markings, vehicle contours, traffic signs, and other information that is essential for reliable interpretation of the road scene.

The new study proposes a model called DRC-DFANet to tackle this issue in real time and with high precision. Instead of relying on generic enhancement alone, the architecture is built around the specific challenge of tunnel extreme exposure. According to the article, the network integrates two core components: a dynamic frequency-domain attention module, or DFAM, and a spatial self-calibrated convolution module, or SCConv. Together, these modules are intended to optimize both global illumination coordination and local detail restoration, which is important because tunnel image enhancement is not just about brightening a dark frame. It requires recovering useful visual structure without destroying contrast or introducing new artifacts.

A key idea in the method is the separation of image information into frequency components. The DFAM uses wavelet transform to decouple features into low-frequency and high-frequency parts, allowing the system to treat global illumination and local texture differently. In practical terms, that means the network can adjust brightness more intelligently while preserving fine details that are important for transportation perception tasks. The paper highlights that the module helps mitigate exposure artifacts while maintaining image information, rather than simply flattening the scene into a visually brighter but less informative output.
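To make the frequency-separation idea concrete, the sketch below performs a single-level 2D Haar wavelet decomposition in NumPy. This is an illustrative stand-in, not the paper's implementation: the exact wavelet family, depth, and learned attention weights in DFAM are not specified in this release. The sketch shows the underlying principle of splitting a frame into a low-frequency illumination band and high-frequency detail bands, then adjusting brightness only through the low-frequency band.

```python
import numpy as np

def haar_decompose(img):
    """Single-level 2D Haar transform: split an image into a low-frequency
    approximation (global illumination) and three high-frequency detail
    bands (local texture and edges)."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # low frequency: overall illumination
    lh = (a + b - c - d) / 4.0   # high frequency: horizontal detail
    hl = (a - b + c - d) / 4.0   # high frequency: vertical detail
    hh = (a - b - c + d) / 4.0   # high frequency: diagonal detail
    return ll, lh, hl, hh

def haar_reconstruct(ll, lh, hl, hh):
    """Exact inverse of haar_decompose."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

# Treat the bands differently: brighten a dark "black hole" frame by
# scaling only the low-frequency band, leaving fine detail untouched.
img = np.random.rand(8, 8) * 0.2        # dark synthetic frame in [0, 0.2]
ll, lh, hl, hh = haar_decompose(img)
enhanced = haar_reconstruct(ll * 2.0, lh, hl, hh)
```

Because the detail bands pass through unchanged, edges and texture survive the brightness adjustment, which is the property the DFAM is designed to preserve.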

The SCConv module complements this by focusing on local contrast and feature calibration. According to the abstract, it establishes interdependencies between channel and spatial dimensions so that local image regions can be adaptively corrected. This matters in tunnel scenes because useful traffic information often appears in small but safety-critical structures, such as lane boundaries, sign edges, and vehicle outlines. If a model improves overall brightness but blurs or suppresses these details, its practical value to transportation systems is limited. The dual-domain design in DRC-DFANet therefore reflects a broader engineering principle: transportation vision enhancement must support downstream perception, not just produce visually pleasing images.
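The abstract does not give SCConv's exact formulation, but the general idea of coupling channel and spatial dimensions can be illustrated with a generic gating scheme, shown below. This is a minimal hypothetical sketch of channel-then-spatial reweighting, not the paper's learned self-calibrated convolution.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_attention(feat):
    """Illustrative channel-then-spatial reweighting of a (C, H, W)
    feature map. Channels are gated by their global mean response, then
    each pixel is gated by the pooled cross-channel response, so the two
    dimensions jointly decide which regions get emphasized."""
    c_gate = sigmoid(feat.mean(axis=(1, 2)))   # one gate per channel
    feat = feat * c_gate[:, None, None]        # channel reweighting
    s_gate = sigmoid(feat.mean(axis=0))        # one gate per pixel
    return feat * s_gate[None, :, :]           # spatial reweighting
```

In a real network these gates would come from learned convolutions rather than plain means, but the structure is the same: per-region correction driven by both channel and spatial statistics.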

Experimental results reported in the paper suggest that the method performs strongly against state-of-the-art alternatives. On benchmark tunnel datasets, DRC-DFANet achieved peak signal-to-noise ratio improvements of up to 8.8% while also improving high-frequency energy ratio, subband correlation, and exposure error metrics. These gains are important because they indicate improvement across both fidelity and exposure-related criteria. The qualitative results are equally relevant. The authors report that the model more effectively preserves vehicle contours, lane markings, and traffic signs while mitigating black-hole and white-hole effects. For transportation applications, preserving those scene elements can be more meaningful than a single headline image-quality metric.
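For readers unfamiliar with the headline metric, peak signal-to-noise ratio (PSNR) measures fidelity between an enhanced image and a reference, in decibels, with higher values meaning less distortion. The standard formula is easy to sketch:

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE).
    Higher is better; identical images give infinity."""
    mse = np.mean((np.asarray(ref) - np.asarray(test)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# A uniform error of 0.1 on a [0, 1] image gives MSE = 0.01,
# so PSNR = 10 * log10(1 / 0.01) = 20 dB.
print(psnr(np.zeros((4, 4)), np.full((4, 4), 0.1)))  # → 20.0
```

The other reported metrics (high-frequency energy ratio, subband correlation, exposure error) complement PSNR by checking that detail and exposure balance are preserved, not just pixel-wise fidelity.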

The study also goes beyond narrow benchmark optimization by testing transferability and target detection performance in related transportation scenarios. This is a valuable step because image enhancement models often perform well only on the specific datasets they were tuned for. In real deployment, however, a method may need to operate across different tunnels, cameras, weather conditions, and road contexts. The reported transferability suggests that DRC-DFANet may have broader practical value in transportation visual systems, especially where image enhancement is only one stage in a larger perception pipeline.

Further work will still be needed to understand how the model performs under wider environmental variation, different sensor platforms, and direct deployment in production systems. Even so, the study offers a strong indication that tunnel image enhancement can be improved by combining dynamic range compression with coordinated frequency-domain and spatial-domain attention. As transportation systems become increasingly automated and data-driven, methods that help cameras see more clearly under extreme tunnel lighting may play an important role in improving both perception reliability and downstream traffic safety.

Reference

Author:

Bu Xu a, Jingyi Tang b, Jue Li b, Shuai Zhou a, Chen Liu c

Title of original paper:

Dynamic range compression dual-domain attention network for tunnel extreme exposure image enhancement in transportation visual systems

Article link:

https://www.sciencedirect.com/science/article/pii/S2773153725000878

Journal:

Green Energy and Intelligent Transportation

DOI:

10.1016/j.geits.2025.100337

Affiliations:

a School of Mechatronics and Vehicle Engineering, Chongqing Jiaotong University, Chongqing 400074, China

b College of Traffic & Transportation, Chongqing Jiaotong University, Chongqing 400074, China

c School of Civil Engineering, Chongqing Jiaotong University, Chongqing 400074, China


Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.