TY - GEN
T1 - Reinforced depth-aware deep learning for single image dehazing
AU - Guo, Tiantong
AU - Monga, Vishal
N1 - Funding Information:
This work is supported by an NSF CAREER Award to V. Monga.
Publisher Copyright:
© 2020 IEEE
PY - 2020/5
Y1 - 2020/5
AB - Image dehazing remains one of the most challenging inverse problems. Deep learning methods have emerged to complement traditional model-based methods and have helped define a new state of the art in achievable dehazed image quality. However, most deep learning-based methods design a regression network as a black-box tool to estimate the dehazed image and/or the physical parameters of the haze model, i.e., the ambient light (A) and the transmission map (t); the inverse haze model may then be used to recover the dehazed image. In this work, we propose a Depth-aware Dehazing system using Reinforcement Learning, denoted DDRL. DDRL generates the dehazed image in a near-to-far progressive manner by utilizing depth information from the scene, in contrast with most recent learning-based methods, which estimate these parameters in a single pass. In particular, DDRL exploits the fact that haze is less dense near the camera and becomes increasingly dense as the scene recedes from the camera. DDRL consists of a policy network and a dehazing (regression) network: the policy network estimates the current depth level for the dehazing network to use. A novel policy regularization term is introduced so that the policy network generates its policy sequence in near-to-far order. Extensive tests on three benchmark test sets show that DDRL delivers substantially improved dehazing results, particularly when training data is limited.
UR - http://www.scopus.com/inward/record.url?scp=85091180035&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85091180035&partnerID=8YFLogxK
U2 - 10.1109/ICASSP40776.2020.9054504
DO - 10.1109/ICASSP40776.2020.9054504
M3 - Conference contribution
AN - SCOPUS:85091180035
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
SP - 8891
EP - 8895
BT - 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020
Y2 - 4 May 2020 through 8 May 2020
ER -
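
The abstract above refers to the haze model with ambient light A and transmission map t and to a near-to-far, depth-ordered dehazing pass. Below is a minimal illustrative sketch of that idea, not the authors' DDRL implementation: it assumes the standard atmospheric scattering model I = J*t + A*(1 - t) with transmission t = exp(-beta*depth), and applies its inversion over depth slices in near-to-far order. The names invert_haze_model, near_to_far_dehaze, beta, and n_slices are hypothetical placeholders; the paper's policy and dehazing networks are not reproduced here.

    # Sketch only (assumed model, not the authors' code): invert the
    # atmospheric scattering model slice by slice, nearest depths first.
    import numpy as np

    def invert_haze_model(I, A, t, t_min=0.1):
        """Recover scene radiance J from hazy pixels I, ambient light A,
        and transmission t via J = (I - A) / max(t, t_min) + A."""
        t = np.clip(t, t_min, 1.0)
        return (I - A) / t[..., None] + A

    def near_to_far_dehaze(I, A, depth, n_slices=4):
        """Progressively dehaze in near-to-far order over depth slices,
        assuming t = exp(-beta * depth) with a fixed scattering coefficient."""
        beta = 1.0                                   # assumed value
        t = np.exp(-beta * depth)
        J = I.copy()
        edges = np.quantile(depth, np.linspace(0.0, 1.0, n_slices + 1))
        for k in range(n_slices):                    # nearest slice first
            mask = (depth >= edges[k]) & (depth <= edges[k + 1])
            J[mask] = invert_haze_model(I[mask], A, t[mask])
        return np.clip(J, 0.0, 1.0)

    # Toy usage with synthetic inputs in [0, 1].
    I = np.random.rand(64, 64, 3)        # hazy image
    depth = np.random.rand(64, 64)       # normalized depth map
    A = np.array([0.9, 0.9, 0.9])        # estimated ambient light
    J = near_to_far_dehaze(I, A, depth)

In DDRL as described in the abstract, the choice of which depth level to process next is made by a learned policy network rather than the fixed quantile schedule used in this sketch.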