DDFNet-A: Attention-Based Dual-Branch Feature Decomposition Fusion Network for Infrared and Visible Image Fusion (2024)




This is an early access version; the complete PDF, HTML, and XML versions will be available soon.

Article

by Qiancheng Wei 1, Ying Liu 1,*, Xiaoping Jiang 1, Ben Zhang 1, Qiya Su 2 and Muyao Yu 2

1 University of Chinese Academy of Sciences, Beijing 101408, China
2 Beijing Institute of Remote Sensing Equipment, Beijing 100005, China
* Author to whom correspondence should be addressed.

Remote Sens. 2024, 16(10), 1795; https://doi.org/10.3390/rs16101795

Submission received: 21 March 2024 / Revised: 9 May 2024 / Accepted: 15 May 2024 / Published: 18 May 2024

(This article belongs to the Special Issue Remote Sensing: 15th Anniversary)


Abstract

The fusion of infrared and visible images aims to leverage the strengths of both modalities, thereby generating fused images with enhanced visual perception and discrimination capabilities. However, current image fusion methods frequently treat common features between modalities (modality-commonality) and unique features from each modality (modality-distinctiveness) equally during processing, neglecting their distinct characteristics. Therefore, we propose DDFNet-A for infrared and visible image fusion. DDFNet-A addresses this limitation by decomposing the infrared and visible input images into low-frequency features depicting modality-commonality and high-frequency features representing modality-distinctiveness. The extracted low- and high-frequency features are then fused using distinct methods. In particular, we propose a hybrid attention block (HAB) to improve high-frequency feature extraction and a base feature fusion (BFF) module to enhance low-frequency feature fusion. Experiments were conducted on the public infrared and visible image fusion datasets MSRS, TNO, and VIFB to validate the performance of the proposed network. DDFNet-A achieved competitive results on all three datasets, with the EN, MI, VIFF, QAB/F, FMI, and Qs metrics reaching the best performance on the TNO dataset at 7.1217, 2.1620, 0.7739, 0.5426, 0.8129, and 0.9079, respectively. These values are 2.06%, 11.95%, 21.04%, 21.52%, 1.04%, and 0.09% higher than those of the second-best methods, respectively. The experimental results confirm that our DDFNet-A achieves better fusion performance than state-of-the-art (SOTA) methods.

Keywords: infrared image; visible image; image fusion; multi-modality; attention
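The decompose-then-fuse idea in the abstract can be illustrated with a classical-filter analogue: split each modality into a low-frequency base (shared scene content) and a high-frequency residual (modality-distinctive detail), then fuse each band with a different rule. Note this is only a minimal sketch under assumed choices (a box-blur low-pass, averaging for bases, per-pixel max-absolute selection for details); DDFNet-A itself uses learned decomposition with its HAB and BFF modules, and the names `box_blur` and `fuse` below are illustrative, not from the paper.

```python
import numpy as np

def box_blur(img, k=7):
    # Simple moving-average low-pass filter, standing in for the
    # network's learned low-frequency (modality-commonality) branch.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def fuse(ir, vis, k=7):
    # Decompose each modality into a low-frequency base and a
    # high-frequency detail, then fuse the two bands differently:
    # average the bases (shared content), and at each pixel keep the
    # stronger detail response (modality-distinctive content).
    ir_low, vis_low = box_blur(ir, k), box_blur(vis, k)
    ir_high, vis_high = ir - ir_low, vis - vis_low
    fused_low = 0.5 * (ir_low + vis_low)
    fused_high = np.where(np.abs(ir_high) >= np.abs(vis_high),
                          ir_high, vis_high)
    return fused_low + fused_high
```

A useful sanity check on this scheme: fusing an image with itself reconstructs the image exactly, since the averaged base plus the selected detail sum back to the original.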

Share and Cite

MDPI and ACS Style

Wei, Q.; Liu, Y.; Jiang, X.; Zhang, B.; Su, Q.; Yu, M. DDFNet-A: Attention-Based Dual-Branch Feature Decomposition Fusion Network for Infrared and Visible Image Fusion. Remote Sens. 2024, 16, 1795. https://doi.org/10.3390/rs16101795

AMA Style

Wei Q, Liu Y, Jiang X, Zhang B, Su Q, Yu M. DDFNet-A: Attention-Based Dual-Branch Feature Decomposition Fusion Network for Infrared and Visible Image Fusion. Remote Sensing. 2024; 16(10):1795. https://doi.org/10.3390/rs16101795

Chicago/Turabian Style

Wei, Qiancheng, Ying Liu, Xiaoping Jiang, Ben Zhang, Qiya Su, and Muyao Yu. 2024. "DDFNet-A: Attention-Based Dual-Branch Feature Decomposition Fusion Network for Infrared and Visible Image Fusion" Remote Sensing 16, no. 10: 1795. https://doi.org/10.3390/rs16101795

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.


Remote Sens., EISSN 2072-4292, Published by MDPI


© 1996-2024 MDPI (Basel, Switzerland) unless otherwise stated

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

