Article — Open access — Peer reviewed

Speeding up multiple instance learning classification rules on GPUs

2014; Springer Science+Business Media; Volume: 44; Issue: 1; Language: English

10.1007/s10115-014-0752-0

ISSN

0219-1377

Authors

Alberto Cano, Amelia Zafra, Sebastián Ventura

Topic(s)

Video Analysis and Summarization

Abstract

Multiple instance learning is a challenging task in supervised learning and data mining. However, algorithms become slow when learning from large-scale and high-dimensional data sets. Graphics processing units (GPUs) are increasingly used to reduce the computation time of such algorithms. This paper presents an implementation of the G3P-MI algorithm on GPUs for solving multiple instance problems using classification rules. The proposed GPU model is distributable across multiple GPUs, seeking scalability on large-scale and high-dimensional data sets. The proposal is compared with a multi-threaded CPU algorithm using streaming SIMD extensions parallelism over a series of data sets. Experimental results show that computation time can be significantly reduced and scalability improved. Specifically, a speedup of up to 149× is achieved over the multi-threaded CPU algorithm when using four GPUs, and the rules interpreter achieves high efficiency, running over 108 billion genetic programming operations per second.
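To make the classification-rule setting concrete, the sketch below illustrates the standard multiple instance assumption that rule-based MIL classifiers such as G3P-MI typically build on: a bag of instances is labeled positive if at least one instance satisfies the rule. The specific rule and attribute thresholds here are hypothetical placeholders, not the paper's evolved rules, and the example is sequential Python rather than the paper's GPU interpreter.

```python
# Minimal sketch of rule-based multiple instance classification under
# the standard MI assumption: a bag is positive iff at least one of
# its instances satisfies the rule.

def rule(instance):
    """Hypothetical rule (not from the paper): attr0 > 0.5 AND attr1 <= 0.3."""
    return instance[0] > 0.5 and instance[1] <= 0.3

def classify_bag(bag):
    """A bag (list of instance tuples) is positive if any instance fires the rule."""
    return any(rule(inst) for inst in bag)

bags = [
    [(0.9, 0.1), (0.2, 0.8)],   # one matching instance -> bag is positive
    [(0.1, 0.9), (0.4, 0.4)],   # no matching instance -> bag is negative
]
print([classify_bag(b) for b in bags])  # [True, False]
```

In the paper's setting, the inner instance-level rule evaluation is the part mapped onto GPU threads, since each instance can be evaluated independently before the per-bag reduction.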
