Real-World Adaptation of Retinexformer for Low-Light Image Enhancement Using Unpaired Data

Authors

  • Subhan Uddin, School of Information and Software Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu, China
  • Babar Hussain, School of Information and Software Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu, China
  • Sidra Fareed, School of Information and Software Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu, China
  • Aqsa Arif, School of Information and Software Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu, China
  • Babar Ali, School of Information and Software Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu, China

DOI:

https://doi.org/10.64229/6rrx1t06

Keywords:

Low-light Enhancement, Retinexformer, Unpaired Data, Image Enhancement, Computer Vision

Abstract

Low-light image enhancement remains a significant challenge in real-world computer vision applications, especially where lighting conditions vary drastically and paired training data is unavailable. While transformer-based models have shown promise in controlled environments, their effectiveness often diminishes when applied to naturally degraded images. This paper presents a novel approach for adapting a transformer-based enhancement model to real-world low-light scenarios using unpaired datasets. We utilize real low-light images captured under uncontrolled conditions and propose a domain adaptation framework that enables effective transfer learning from synthetic to real domains. Our method integrates unsupervised reconstruction loss, perceptual optimization, and domain-invariant feature alignment to refine the model’s performance without requiring paired supervision. Experimental evaluations reveal notable improvements in both visual quality and quantitative metrics on real-world benchmarks. Compared to existing enhancement methods, our approach offers superior generalization, robustness to noise, and high-fidelity output. This demonstrates the potential of our domain-adapted transformer model in practical low-light imaging applications, including night photography, surveillance, and mobile vision systems.
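The abstract's training objective combines an unsupervised reconstruction term with domain-invariant feature alignment. The paper's exact formulation is not given here, so the following is only a minimal NumPy sketch under stated assumptions: reconstruction is approximated by re-darkening the enhanced output with a gamma curve and comparing it to the low-light input, and alignment is approximated by matching the first and second moments of synthetic- and real-domain feature batches. All function names and weights (`w_rec`, `w_align`) are hypothetical.

```python
import numpy as np

def reconstruction_loss(enhanced, low_light, gamma=2.2):
    """Unpaired reconstruction proxy: re-darken the enhanced image with a
    fixed gamma curve and take the L1 distance to the low-light input."""
    redarkened = np.clip(enhanced, 0.0, 1.0) ** gamma
    return float(np.mean(np.abs(redarkened - low_light)))

def domain_alignment_loss(feat_syn, feat_real):
    """Feature alignment proxy: distance between per-dimension means and
    standard deviations of synthetic vs. real feature batches."""
    mean_gap = np.linalg.norm(feat_syn.mean(axis=0) - feat_real.mean(axis=0))
    std_gap = np.linalg.norm(feat_syn.std(axis=0) - feat_real.std(axis=0))
    return float(mean_gap + std_gap)

def total_loss(enhanced, low_light, feat_syn, feat_real,
               w_rec=1.0, w_align=0.1):
    """Weighted sum of the two terms; a perceptual term would be added
    analogously when a pretrained feature extractor is available."""
    return (w_rec * reconstruction_loss(enhanced, low_light)
            + w_align * domain_alignment_loss(feat_syn, feat_real))
```

In a real pipeline these terms would operate on network activations (e.g. VGG features for the perceptual component) rather than raw arrays; the sketch only illustrates how the loss terms compose without paired ground truth.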

Author Biographies

  • Babar Hussain, School of Information and Software Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu, China

    Software Engineering

  • Sidra Fareed, School of Information and Software Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu, China

    Software Engineering

  • Aqsa Arif, School of Information and Software Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu, China

    Software Engineering

References

[1] Yuen Peng Loh and Chee Seng Chan. ExDark: A benchmark dataset for recognition under extreme low-light conditions. In IEEE International Conference on Image Processing (ICIP), 2019.

[2] Yuanhao Cai, Hao Bian, Jing Lin, Haoqian Wang, Radu Timofte, and Yulun Zhang. Retinexformer: One-stage Retinex-based transformer for low-light image enhancement. In ICCV, 2023.

[3] Chen Chen, Qifeng Chen, Jia Xu, and Vladlen Koltun. Learning to see in the dark. In CVPR, 2018.

[4] Edwin H. Land. The retinex theory of color vision. Scientific American, 237(6):108–128, 1977.

[5] Xiaojie Guo, Yu Li, and Haibin Ling. LIME: Low-light image enhancement via illumination map estimation. IEEE Transactions on Image Processing, 26(2):982–993, 2017.

[6] Kin Gwn Lore, Adedotun Akintayo, and Soumik Sarkar. LLNet: A deep autoencoder approach to natural low-light image enhancement. Pattern Recognition, 61:650–662, 2017.

[7] Chen Wei, Wenjing Wang, Wenhan Yang, and Jiaying Liu. Deep Retinex decomposition for low-light enhancement. In BMVC, 2018.

[8] Wenhui Wu, Jian Weng, Pingping Zhang, Xu Wang, Wenhan Yang, and Jianmin Jiang. URetinex-Net: Retinex-based deep unfolding network for low-light image enhancement. In CVPR, 2022.

[9] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021.

[10] Babar Hussain, Jiandong Guo, Sidra Fareed, and Subhan Uddin. Robotics for space exploration: From Mars rovers to lunar missions. International Journal of Ethical AI Application, 1(1):1–10, 2025.

Published

2025-07-18

Section

Articles