Article Open access Peer-reviewed

Label noise correction and application in crowdsourcing

2016; Elsevier BV; Volume: 66; Language: English

10.1016/j.eswa.2016.09.003

ISSN

1873-6793

Authors

Bryce Nicholson, Victor S. Sheng, Jing Zhang

Topic(s)

Anomaly Detection Techniques and Applications

Abstract

The important task of correcting label noise is addressed infrequently in the literature, largely because of the difficulty of developing a robust label correction algorithm. To break this silence, we propose two algorithms to correct label noise. The first, Self-Training Correction (STC), uses self-training to re-label noisy instances. The second, Cluster-based Correction (CC), groups instances into clusters to infer their ground-truth labels. We also adapt an algorithm from previous work, a consensus-based method called Polishing that consults an ensemble of classifiers to change the values of both attributes and labels. We simplify Polishing so that it alters only the labels of instances, and call the result Polishing Labels (PL). We experimentally compare our novel methods with PL by examining their improvements in label quality, model quality, and AUC on binary and multi-class data sets under different noise levels. Our experimental results demonstrate that CC consistently and significantly improves label quality, model quality, and AUC. We further investigate how these three noise correction algorithms improve data quality, in terms of label accuracy, in the context of image labeling in crowdsourcing. First, we examine three consensus methods for inferring a ground-truth label from the multiple noisy labels obtained through crowdsourcing: Majority Voting (MV), Dawid-Skene (DS), and KOS. We then apply the three noise correction methods to correct the labels inferred by these consensus methods. Our experimental results show that the noise correction methods improve labeling quality significantly, and that CC performs best overall. Our research illustrates the viability of noise correction as another line of defense against labeling error, especially in a crowdsourcing setting. It also demonstrates the feasibility of automating the otherwise manual, expensive, and time-consuming process of analyzing a data set and correcting and cleaning its instances.
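For illustration, the following is a minimal Python sketch of the self-training idea behind STC as described in the abstract, assuming a scikit-learn-style classifier. The confidence threshold, loop structure, and function names are assumptions for exposition, not the authors' exact procedure.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def self_training_correction(X, y, confidence=0.9, max_rounds=5):
        """Illustrative self-training relabeling: iteratively retrain a
        classifier and overwrite labels that the model contradicts with
        high confidence. Threshold and stopping rule are assumptions,
        not the authors' exact STC algorithm."""
        y = np.asarray(y).copy()
        for _ in range(max_rounds):
            clf = LogisticRegression(max_iter=1000).fit(X, y)
            proba = clf.predict_proba(X)
            pred = clf.classes_[proba.argmax(axis=1)]
            conf = proba.max(axis=1)
            # Candidate noise: instances whose current label disagrees
            # with a high-confidence model prediction.
            flip = (conf >= confidence) & (pred != y)
            if not flip.any():
                break
            y[flip] = pred[flip]
        return y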
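Of the three consensus methods mentioned, Majority Voting is the simplest; a minimal sketch follows. DS and KOS additionally estimate per-worker reliability iteratively and are not shown here.

    from collections import Counter

    def majority_vote(labels_per_item):
        """Majority Voting (MV) consensus: each item's integrated label
        is the most frequent of its crowdsourced labels; ties are broken
        by first occurrence, per Counter ordering."""
        return [Counter(labels).most_common(1)[0][0]
                for labels in labels_per_item]

    # Example: three items, each labeled by three workers.
    print(majority_vote([["cat", "cat", "dog"],
                         ["dog", "dog", "dog"],
                         ["cat", "dog", "cat"]]))  # -> ['cat', 'dog', 'cat']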
