Task-customized mixture of adapters for general image fusion

P. Zhu, Y. Sun, B. Cao, Q. Hu. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024. openaccess.thecvf.com
Abstract
General image fusion aims at integrating important information from multi-source images. However, due to the significant cross-task gap, the respective fusion mechanisms vary considerably in practice, resulting in limited performance across subtasks. To handle this problem, we propose a novel task-customized mixture of adapters (TC-MoA) for general image fusion, adaptively prompting various fusion tasks in a unified model. We borrow the insight from the mixture of experts (MoE), taking the experts as efficient tuning adapters to prompt a pre-trained foundation model. These adapters are shared across different tasks and constrained by mutual information regularization, ensuring compatibility across tasks while preserving complementarity for multi-source images. The task-specific routing networks customize these adapters to extract task-specific information from different sources with dynamic dominant intensity, performing adaptive visual feature prompt fusion. Notably, our TC-MoA controls the dominant intensity bias for different fusion tasks, successfully unifying multiple fusion tasks in a single model. Extensive experiments show that TC-MoA outperforms the competing approaches in learning commonalities while retaining compatibility for general image fusion (multi-modal, multi-exposure, and multi-focus), and also demonstrates striking controllability in further generalization experiments. The code is available at https://github.com/YangSun22/TC-MoA.
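To make the mixture-of-adapters idea concrete, below is a minimal PyTorch sketch of a block that shares a pool of lightweight bottleneck adapters across tasks and gates them with a task-specific routing network. All names here (Adapter, TaskCustomizedMoA, bottleneck_dim, the choice of 4 adapters and 3 tasks) are illustrative assumptions, not the authors' released code; the mutual information regularization and the exact prompt-fusion scheme from the paper are omitted.

```python
# Illustrative sketch of a task-customized mixture-of-adapters block (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F


class Adapter(nn.Module):
    """Lightweight bottleneck adapter acting as one 'expert'."""

    def __init__(self, dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(F.gelu(self.down(x)))


class TaskCustomizedMoA(nn.Module):
    """Mixture of shared adapters gated by task-specific routing networks.

    The adapter pool is shared across fusion tasks (e.g. multi-modal,
    multi-exposure, multi-focus); each task owns its own router that produces
    mixture weights over the adapters.
    """

    def __init__(self, dim: int, num_adapters: int = 4, num_tasks: int = 3):
        super().__init__()
        self.adapters = nn.ModuleList(Adapter(dim) for _ in range(num_adapters))
        # One routing head per task, mapping pooled features to adapter weights.
        self.routers = nn.ModuleList(
            nn.Linear(dim, num_adapters) for _ in range(num_tasks)
        )

    def forward(self, tokens: torch.Tensor, task_id: int) -> torch.Tensor:
        # tokens: (batch, seq_len, dim) features from a frozen foundation model.
        gate_logits = self.routers[task_id](tokens.mean(dim=1))  # (B, num_adapters)
        gates = F.softmax(gate_logits, dim=-1)                   # per-sample mixing weights
        expert_out = torch.stack([a(tokens) for a in self.adapters], dim=1)  # (B, E, L, D)
        mixed = (gates[:, :, None, None] * expert_out).sum(dim=1)            # (B, L, D)
        return tokens + mixed  # residual visual feature prompt


if __name__ == "__main__":
    moa = TaskCustomizedMoA(dim=256)
    feats = torch.randn(2, 196, 256)      # e.g. ViT patch tokens from one source image
    prompt = moa(feats, task_id=0)        # task_id=0 as a hypothetical multi-modal task
    print(prompt.shape)                   # torch.Size([2, 196, 256])
```

In this reading, per-sample softmax gates stand in for the "dynamic dominant intensity": each task's router decides how strongly each shared adapter contributes, while the frozen backbone and shared adapter pool carry the commonalities across fusion tasks.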