DehazeGS: Seeing Through Fog with 3D Gaussian Splatting

Jinze Yu1, Yiqun Wang1, Aiheng Jiang1, Zhengda Lu2, Jianwei Guo3,4, Yong Li1, Hongxing Qin1, Xiaopeng Zhang4
1College of Computer Science, Chongqing University; 2School of Artificial Intelligence, University of Chinese Academy of Sciences; 3School of Artificial Intelligence, Beijing Normal University; 4MAIS, Institute of Automation, Chinese Academy of Sciences
Overview.

DehazeGS is a method that decomposes and renders a fog-free background from participating media using only multi-view foggy images as input.

Abstract

Current novel view synthesis methods are typically designed for high-quality and clean input images. However, in foggy scenes, scattering and attenuation can significantly degrade the quality of rendering. Although NeRF-based dehazing approaches have been developed, their reliance on deep fully connected neural networks and per-ray sampling strategies leads to high computational costs. Furthermore, NeRF's implicit representation limits its ability to recover fine-grained details from hazy scenes. To overcome these limitations, we propose learning an explicit Gaussian representation to explain the formation mechanism of foggy images through a physically forward rendering process. Our method, DehazeGS, reconstructs and renders fog-free scenes using only multi-view foggy images as input. Specifically, based on the atmospheric scattering model, we simulate the formation of fog by establishing the transmission function directly onto Gaussian primitives via depth-to-transmission mapping. During training, we jointly learn the atmospheric light and scattering coefficients while optimizing the Gaussian representation of foggy scenes. At inference time, we remove the effects of scattering and attenuation in Gaussian distributions and directly render the scene to obtain dehazed views. Experiments on both real-world and synthetic foggy datasets demonstrate that DehazeGS achieves state-of-the-art performance.
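For reference, the forward model described in the abstract follows the standard atmospheric scattering (Koschmieder) model, I = J·t + A·(1 − t), with the depth-to-transmission mapping t = exp(−β·d), where J is the fog-free color, d the depth, A the atmospheric light, and β the scattering coefficient. The sketch below is a minimal, illustrative PyTorch-style rendering of this forward process; the function names, tensor shapes, and initial values are assumptions for clarity, not the authors' released implementation.

      import torch

      def transmission_from_depth(depth: torch.Tensor, beta: torch.Tensor) -> torch.Tensor:
          """Beer-Lambert transmission t = exp(-beta * d) for each depth sample."""
          return torch.exp(-beta * depth)

      def composite_foggy(clear_rgb: torch.Tensor,
                          depth: torch.Tensor,
                          beta: torch.Tensor,
                          airlight: torch.Tensor) -> torch.Tensor:
          """Forward-render a foggy observation: I = J * t + A * (1 - t)."""
          t = transmission_from_depth(depth, beta)   # broadcastable to clear_rgb
          return clear_rgb * t + airlight * (1.0 - t)

      # Example usage: beta and airlight are treated as learnable parameters,
      # optimized jointly with the Gaussian representation (illustrative values).
      beta = torch.nn.Parameter(torch.tensor(0.1))
      airlight = torch.nn.Parameter(torch.full((3,), 0.8))
      clear_rgb = torch.rand(4, 3)      # per-Gaussian (or per-pixel) colors
      depth = torch.rand(4, 1) * 10.0   # corresponding depths
      foggy_rgb = composite_foggy(clear_rgb, depth, beta, airlight)

At inference, setting t = 1 (equivalently removing the scattering and attenuation terms) recovers the dehazed rendering J directly from the optimized Gaussians.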

Figure: Qualitative comparison of novel view synthesis results on real foggy datasets (Ours vs. 3D-GS).

Figure: Qualitative comparison of novel view synthesis results on synthetic foggy datasets (Ours vs. 3D-GS).

BibTeX


      @article{yu2025dehazegs,
        title={DehazeGS: Seeing Through Fog with 3D Gaussian Splatting},
        author={Yu, Jinze and Wang, Yiqun and Lu, Zhengda and Guo, Jianwei and Li, Yong and Qin, Hongxing and Zhang, Xiaopeng},
        journal={arXiv preprint arXiv:2501.03659},
        year={2025}
      }