Peer-reviewed article

Efficient computations for large least square support vector machine classifiers

2002; Elsevier BV; Volume: 24; Issue: 1–3; Language: English

DOI

10.1016/s0167-8655(02)00190-3

ISSN

1872-7344

Authors

Kok Seng Chua

Topic(s)

Blind Source Separation Techniques

Abstract

We observed that the linear system in the training of the least square support vector machine (LSSVM) proposed by Suykens and Vandewalle (Neural Process. Lett. 9 (1999a) 293–300; IEEE Trans. Neural Networks 10 (4) (1999b) 907–912) can be placed in a more symmetric form, so that for a data set with N data points and m features the linear system can be solved by inverting an m×m instead of an N×N matrix, storing and working with matrices of size at most m×N. This allows us to apply LSSVM to very large data sets with a small number of features. Our computations show that a data set with a million data points and 10 features can be trained in only 45 s. We also compared the effectiveness and efficiency of our method to standard LSSVM and standard SVM. An example using a quadratic kernel is also given.
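The m×m reduction described in the abstract can be illustrated with the Sherman–Morrison–Woodbury identity: for a linear kernel the N×N matrix in the LSSVM system is a low-rank update of a scaled identity, so solves against it only require factoring an m×m matrix. The sketch below (NumPy, with illustrative names like `lssvm_linear_train`; it is not the paper's exact formulation) trains a linear LSSVM classifier in O(Nm²) time and O(Nm) memory:

```python
import numpy as np

def lssvm_linear_train(X, y, gamma=1.0):
    """Train a linear LSSVM classifier while factoring only an m x m matrix.

    X: (N, m) data matrix; y: (N,) labels in {-1, +1}.
    The dual system involves H = I/gamma + Z Z^T with Z = diag(y) X,
    an N x N matrix; the Woodbury identity lets us apply H^{-1}
    using only an m x m factorization.  (Illustrative sketch of the
    idea in the abstract, not the paper's exact symmetric form.)
    """
    N, m = X.shape
    Z = X * y[:, None]                     # Z = diag(y) X, so Omega = Z Z^T

    # Woodbury: (I/g + Z Z^T)^{-1} v = g*v - g^2 * Z (I_m + g Z^T Z)^{-1} Z^T v
    S = np.eye(m) + gamma * (Z.T @ Z)      # the only m x m matrix we factor
    L = np.linalg.cholesky(S)

    def H_inv(v):
        t = np.linalg.solve(L.T, np.linalg.solve(L, Z.T @ v))
        return gamma * v - gamma**2 * (Z @ t)

    eta = H_inv(y)                         # H^{-1} y
    nu = H_inv(np.ones(N))                 # H^{-1} 1
    b = (y @ nu) / (y @ eta)               # enforces the constraint y^T alpha = 0
    alpha = nu - b * eta
    w = Z.T @ alpha                        # primal weights: w = sum_i alpha_i y_i x_i
    return w, b

def lssvm_predict(X, w, b):
    """Classify rows of X with the trained linear LSSVM."""
    return np.sign(X @ w + b)
```

Because only `S` (m×m) is factored and `Z` (N×m) is streamed through matrix-vector products, the memory footprint matches the abstract's claim of working with matrices of size at most m×N, which is what makes a million-point, 10-feature data set tractable.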
