Peer-reviewed Article

DMDN: Degradation model-based deep network for multi-focus image fusion

2021; Elsevier BV; Volume: 101; Language: English

10.1016/j.image.2021.116554

ISSN

1879-2677

Authors

Yifan Xiao, Zhixin Guo, Peter Veelaert, Wilfried Philips

Topic(s)

Image Enhancement Techniques

Abstract

Multi-focus image fusion (MFIF) is an efficient technique that merges differently focused images into a single all-in-focus image, overcoming the focus limitation of camera imaging. Most existing deep learning-based MFIF methods follow a fusion-after-focus-measure strategy that divides the MFIF problem into focus map generation and image fusion. However, the generated focus map is usually coarse, and the quality of the fused image depends heavily on post-processing. In contrast to this strategy, we propose a mathematical degradation model in which the defocused images are assumed to be degraded versions of a latent all-in-focus image, so that an approximation of the all-in-focus image can be estimated directly from the degraded images. We therefore construct an end-to-end deep neural network that learns the mapping from defocused images to all-in-focus images. During training, the network receives a pair consisting of a clear image and a globally blurred image, concatenated in random order, and the fused output is supervised by the clear image with a Mix-loss that combines a pixel-wise loss and a perceptual loss. Our method is compared with state-of-the-art algorithms on two MFIF datasets and on our newly published dataset. Experimental results demonstrate the effectiveness of our method both subjectively and objectively.
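The abstract describes supervising the fused output with a "Mix-loss" that combines a pixel-wise loss and a perceptual loss. The sketch below illustrates such a composite loss in NumPy: mean squared error serves as the pixel-wise term, and a fixed bank of random convolution kernels acts as a stand-in feature extractor for the perceptual term. The stand-in extractor, the weights `alpha` and `beta`, and all function names here are illustrative assumptions; the paper's actual perceptual features (typically taken from a pretrained network) and loss weighting are not specified in the abstract.

```python
import numpy as np

def pixel_loss(fused, reference):
    # Pixel-wise term: mean squared error between the fused output
    # and the all-in-focus reference image.
    return np.mean((fused - reference) ** 2)

def feature_map(img, kernels):
    # Stand-in "perceptual" feature extractor: valid-mode correlation of
    # the image with a fixed bank of kernels. (The paper's perceptual loss
    # would instead compare features from a pretrained network.)
    h, w = img.shape
    k = kernels.shape[-1]
    feats = []
    for kern in kernels:
        out = np.zeros((h - k + 1, w - k + 1))
        for i in range(h - k + 1):
            for j in range(w - k + 1):
                out[i, j] = np.sum(img[i:i + k, j:j + k] * kern)
        feats.append(out)
    return np.stack(feats)

def perceptual_loss(fused, reference, kernels):
    # Perceptual term: MSE in the feature space of the extractor.
    return np.mean(
        (feature_map(fused, kernels) - feature_map(reference, kernels)) ** 2
    )

def mix_loss(fused, reference, kernels, alpha=1.0, beta=0.1):
    # Mix-loss = weighted sum of the pixel-wise and perceptual terms.
    # alpha and beta are assumed weights, not values from the paper.
    return alpha * pixel_loss(fused, reference) \
        + beta * perceptual_loss(fused, reference, kernels)
```

During training, the fused output of the network would play the role of `fused`, and the clear source image the role of `reference`, matching the supervision scheme described above.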
