Semantically Contrastive Learning for Low-light Image Enhancement
Low-light image enhancement (LLE) remains challenging
due to the prevailing low-contrast and weak-visibility problems of single RGB images. In this paper, we
respond to an intriguing learning-related question: can leveraging both accessible unpaired over/under-exposed images
and high-level semantic guidance improve the performance of cutting-edge LLE models? Here, we propose an effective semantically contrastive learning paradigm for LLE
(namely SCL-LLE). Beyond the existing LLE wisdom, it
casts the image enhancement task as multi-task joint learning,
where LLE is converted into three constraints, namely contrastive
learning, semantic brightness consistency, and feature preservation, which jointly ensure exposure, texture, and
color consistency. SCL-LLE allows the LLE model to learn
from unpaired positives (normal-light) and negatives (over/under-exposed), and enables it to interact with scene semantics to regularize the image enhancement network; such interaction between high-level semantic knowledge and the low-level signal prior has seldom been investigated in previous methods. Extensive experiments demonstrate that, trained on readily available open data, our method surpasses state-of-the-art LLE models on six independent cross-scene datasets.
Moreover, SCL-LLE’s potential to benefit downstream semantic segmentation under extremely dark conditions is discussed. Source Code: https://github.com/LingLIx/SCL-LLE.
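For intuition, the three constraints above can be sketched as a single multi-task objective. The NumPy toy below is our illustrative assumption, not the paper's exact losses: the function names, the ratio-form contrastive term, and the uniform weights are all hypothetical.

```python
import numpy as np

def contrastive_term(enhanced, positive, negatives, eps=1e-8):
    """Pull the enhanced image toward a normal-light positive and push it
    away from over/under-exposed negatives (hypothetical ratio form)."""
    d_pos = np.mean(np.abs(enhanced - positive))
    d_neg = np.mean([np.mean(np.abs(enhanced - n)) for n in negatives])
    return d_pos / (d_neg + eps)

def brightness_consistency_term(enhanced, seg_labels):
    """Encourage uniform brightness inside each semantic region:
    sum over segments of the variance of per-pixel mean intensity."""
    gray = enhanced.mean(axis=-1)  # per-pixel brightness
    return float(sum(gray[seg_labels == lbl].var()
                     for lbl in np.unique(seg_labels)))

def feature_preservation_term(feat_low, feat_enh):
    """Keep features of the enhanced image close to those of the low-light
    input, preserving texture and color structure (MSE in feature space)."""
    return float(np.mean((feat_low - feat_enh) ** 2))

def scl_lle_loss(enhanced, positive, negatives, seg_labels,
                 feat_low, feat_enh, w=(1.0, 1.0, 1.0)):
    """Weighted sum of the three constraints; the weights w are illustrative."""
    return (w[0] * contrastive_term(enhanced, positive, negatives)
            + w[1] * brightness_consistency_term(enhanced, seg_labels)
            + w[2] * feature_preservation_term(feat_low, feat_enh))
```

In this sketch, the contrastive term vanishes when the output matches the normal-light positive, the brightness term vanishes when every semantic region is uniformly lit, and the feature term vanishes when input features are preserved, mirroring the exposure/texture/color consistency goals stated above.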
Author affiliation: School of Computing and Mathematical Sciences, University of Leicester
Source: Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI-22), February 22 - March 1, 2022.
- AM (Accepted Manuscript)