No Reference Video Quality Assessment with authentic distortions using 3-D Deep Convolutional Neural Network
2020; Volume: 32; Issue: 9; Language: English
DOI: 10.2352/issn.2470-1173.2020.9.iqsp-168
ISSN: 2470-1173
Authors: Roger Gomez Nieto, Hernán Darío Benítez Restrepo, Roger Figueroa Quintero, Alan C. Bovik
Topic(s): Visual Attention and Saliency Detection
Abstract: Video Quality Assessment (VQA) is an essential topic in several industries, ranging from video streaming to camera manufacturing. In this paper, we present a novel method for No-Reference VQA. The framework is fast and does not require the extraction of hand-crafted features. We extract convolutional features from a 3-D C3D Convolutional Neural Network and feed them to a trained Support Vector Regressor to obtain a VQA score. We apply transformations to different color spaces to generate more discriminative deep features, and we extract features from several layers, with and without overlap, to find the configuration that best improves the VQA score. We tested the proposed approach on the LIVE-Qualcomm dataset. We extensively evaluated the perceptual quality prediction model, obtaining a final Pearson correlation of 0.7749 ± 0.0884 with Mean Opinion Scores, and showed that it achieves good video quality prediction, outperforming other leading state-of-the-art VQA models.
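The following is a minimal sketch of the pipeline the abstract describes: clip-level deep features are pooled into a video-level descriptor, and a Support Vector Regressor maps that descriptor to a quality score trained against Mean Opinion Scores. This is not the authors' code; the extract_clip_features helper is a hypothetical stand-in for C3D layer activations, the data are synthetic, and the scikit-learn SVR settings are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

def extract_clip_features(video_id: int, n_clips: int = 8, dim: int = 4096) -> np.ndarray:
    """Hypothetical stand-in for per-clip C3D activations.

    In the described method these would come from intermediate layers of a
    pretrained C3D network applied to video clips; here we return random
    vectors so the sketch runs end to end.
    """
    return rng.normal(size=(n_clips, dim))

def video_descriptor(clip_feats: np.ndarray) -> np.ndarray:
    """Average-pool clip-level features into one video-level descriptor."""
    return clip_feats.mean(axis=0)

# Synthetic "dataset": one descriptor per video and placeholder MOS labels.
X = np.stack([video_descriptor(extract_clip_features(v)) for v in range(40)])
y = rng.uniform(0.0, 100.0, size=40)  # placeholder Mean Opinion Scores

# Support Vector Regressor mapping pooled deep features to a VQA score
# (RBF kernel and hyperparameters are assumed, not taken from the paper).
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, epsilon=0.1))
model.fit(X, y)

predicted_mos = model.predict(X[:5])
print(predicted_mos)
```

In practice the predicted scores would be compared against held-out MOS values via Pearson correlation, which is the evaluation metric reported in the abstract.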