DehazeGS: Seeing Through Fog with 3D Gaussian Splatting

Jinze Yu1, Yiqun Wang1, Zhengda Lu2, Jianwei Guo3, Yong Li1, Hongxing Qin1, Xiaopeng Zhang4
1College of Computer Science, Chongqing University 2School of Artificial Intelligence, University of Chinese Academy of Sciences 3School of Artificial Intelligence, Beijing Normal University 4MAIS, Institute of Automation, Chinese Academy of Sciences
Overview.

DehazeGS is a method capable of decomposing and rendering a fog-free background from participating media, using only multi-view foggy images as input.

Abstract

Current novel view synthesis tasks primarily rely on high-quality and clear images. However, in foggy scenes, scattering and attenuation can significantly degrade the reconstruction and rendering quality. Although NeRF-based dehazing reconstruction algorithms have been developed, their use of deep fully connected neural networks and per-ray sampling strategies leads to high computational costs. Moreover, NeRF's implicit representation struggles to recover fine details from hazy scenes. In contrast, recent advancements in 3D Gaussian Splatting achieve high-quality 3D scene reconstruction by explicitly modeling point clouds into 3D Gaussians. In this paper, we propose leveraging the explicit Gaussian representation to explain the foggy image formation process through a physically accurate forward rendering process. We introduce DehazeGS, a method capable of decomposing and rendering a fog-free background from participating media using only multi-view foggy images as input. We model the transmission within each Gaussian distribution to simulate the formation of fog. During this process, we jointly learn the atmospheric light and scattering coefficient while optimizing the Gaussian representation of the hazy scene. In the inference stage, we eliminate the effects of scattering and attenuation on the Gaussians and directly project them onto a 2D plane to obtain a clear view. Experiments on both synthetic and real-world foggy datasets demonstrate that DehazeGS achieves state-of-the-art performance in terms of both rendering quality and computational efficiency.
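To make the forward model concrete, the sketch below illustrates the standard atmospheric scattering (Koschmieder) formulation, I = J·t + A·(1 − t) with t = exp(−β·d), applied per Gaussian. This is a minimal illustration under our own assumptions: the function names, tensor shapes, and the specific learnable parameters (`beta`, `airlight`) are hypothetical and are not taken from the DehazeGS implementation.

```python
import torch

# Minimal sketch (assumed formulation, not the authors' code):
# simulate foggy Gaussian colors from clear colors, per-Gaussian depth,
# a learned scattering coefficient, and a learned atmospheric light.

def transmission(depth: torch.Tensor, beta: torch.Tensor) -> torch.Tensor:
    """Transmission t = exp(-beta * d) for per-Gaussian depths d."""
    return torch.exp(-beta * depth)

def compose_foggy(clear_rgb: torch.Tensor,
                  depth: torch.Tensor,
                  beta: torch.Tensor,
                  airlight: torch.Tensor) -> torch.Tensor:
    """Foggy appearance I = J * t + A * (1 - t): attenuate the clear
    color J by the transmission and blend in the atmospheric light A."""
    t = transmission(depth, beta).unsqueeze(-1)   # (N, 1)
    return clear_rgb * t + airlight * (1.0 - t)   # (N, 3)

# Toy usage with N Gaussians (hypothetical values).
N = 4
clear_rgb = torch.rand(N, 3)                          # fog-free colors J
depth = torch.tensor([1.0, 2.5, 5.0, 10.0])           # per-Gaussian depths d
beta = torch.tensor(0.3, requires_grad=True)          # scattering coefficient
airlight = torch.full((3,), 0.8, requires_grad=True)  # atmospheric light A

foggy_rgb = compose_foggy(clear_rgb, depth, beta, airlight)
# During training, the foggy colors would be splatted and compared against
# the observed hazy views; at inference, clear_rgb is rendered directly.
print(foggy_rgb.shape)  # torch.Size([4, 3])
```

In this view, training optimizes the Gaussians together with `beta` and `airlight` against the hazy observations, while inference simply skips the fog composition and projects the clear Gaussian colors, which matches the two-stage behavior described in the abstract.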

Qualitative comparison of novel view synthesis results on real foggy datasets (Ours vs. 3D-GS).

Qualitative comparison of novel view synthesis results on synthetic foggy datasets (Ours vs. 3D-GS).

BibTeX


@article{yu2025dehazegs,
  title={DehazeGS: Seeing Through Fog with 3D Gaussian Splatting},
  author={Yu, Jinze and Wang, Yiqun and Lu, Zhengda and Guo, Jianwei and Li, Yong and Qin, Hongxing and Zhang, Xiaopeng},
  journal={arXiv preprint arXiv:2501.03659},
  year={2025}
}