Channel Attention Is All You Need for Video Frame Interpolation
2020; Association for the Advancement of Artificial Intelligence; Volume 34, Issue 07; Language: English
DOI: 10.1609/aaai.v34i07.6693
ISSN: 2374-3468
Authors: Myungsub Choi, Heewon Kim, Bohyung Han, Ning Xu, Kyoung Mu Lee
Topic(s): Image Enhancement Techniques
Abstract: Prevailing video frame interpolation techniques rely heavily on optical flow estimation, which incurs additional model complexity and computational cost; such techniques are also susceptible to error propagation in challenging scenarios with large motion and heavy occlusion. To alleviate these limitations, we propose a simple but effective deep neural network for video frame interpolation, which is end-to-end trainable and free from a motion estimation network component. Our algorithm employs a special feature reshaping operation, referred to as PixelShuffle, together with channel attention, which replaces the optical flow computation module. The main idea behind the design is to distribute the information in a feature map into multiple channels and extract motion information by attending to the channels for pixel-level frame synthesis. The model given by this principle turns out to be effective in the presence of challenging motion and occlusion. We construct a comprehensive evaluation benchmark and demonstrate that the proposed approach achieves outstanding performance compared to existing models that include a component for optical flow computation.
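To make the two building blocks named in the abstract concrete, here is a minimal numpy sketch of a PixelShuffle-style feature reshaping (space-to-depth and its inverse, matching PyTorch's channel ordering) and a squeeze-and-excitation-style channel gate. This is an illustrative assumption of how such components typically look, not the authors' actual CAIN architecture; in particular, the learned layers inside the attention module are omitted.

```python
import numpy as np

def pixel_unshuffle(x, r):
    """Space-to-depth: (C, H, W) -> (C*r*r, H//r, W//r).

    Each r x r spatial block of a channel is spread across r*r channels,
    so spatial displacements become differences between channels.
    """
    C, H, W = x.shape
    x = x.reshape(C, H // r, r, W // r, r)
    x = x.transpose(0, 2, 4, 1, 3)               # (C, r, r, H//r, W//r)
    return x.reshape(C * r * r, H // r, W // r)

def pixel_shuffle(x, r):
    """Depth-to-space: (C, H, W) -> (C//(r*r), H*r, W*r); inverse of above."""
    C, H, W = x.shape
    x = x.reshape(C // (r * r), r, r, H, W)
    x = x.transpose(0, 3, 1, 4, 2)               # (C//r^2, H, r, W, r)
    return x.reshape(C // (r * r), H * r, W * r)

def channel_attention(x):
    """Squeeze-and-excitation-style gating (learned MLP omitted for brevity):
    global-average-pool each channel, squash through a sigmoid, and rescale
    the channel by the resulting weight."""
    squeeze = x.mean(axis=(1, 2))                # (C,) per-channel statistic
    gate = 1.0 / (1.0 + np.exp(-squeeze))        # sigmoid gate in (0, 1)
    return x * gate[:, None, None]

# Round trip: unshuffle, attend over channels, shuffle back to full resolution.
feat = np.random.randn(2, 4, 4)
out = pixel_shuffle(channel_attention(pixel_unshuffle(feat, 2)), 2)
```

The key design point the abstract describes is visible here: after `pixel_unshuffle`, neighboring pixels live in separate channels, so a purely channel-wise attention can weigh spatially displaced content without any explicit optical flow.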