Parallel implementation of nonlinear dimensionality reduction methods applied in object segmentation using CUDA in GPU

2011; SPIE; Volume: 8048; Language: English

DOI

10.1117/12.884767

ISSN

1996-756X

Authors

Romel Campana-Olivo, Vidya Manian

Topic(s)

Advanced Image and Video Retrieval Techniques

Abstract

Manifold learning, also called nonlinear dimensionality reduction, affords a way to understand and visualize the structure of nonlinear hyperspectral datasets. These methods use graphs to represent the manifold topology and metrics such as geodesic distance, allowing objects of higher dimension to be embedded into a lower dimension. However, some manifold learning algorithms have O(N³) complexity, which makes them computationally very slow. In this paper we present a CUDA-based parallel implementation of three of the most popular manifold learning algorithms, Isomap, locally linear embedding (LLE), and Laplacian eigenmaps, using the CUDA multi-thread model. The result of this dimensionality reduction was employed in segmentation using active contours, as an application of the reduced hyperspectral images. The manifold learning algorithms were implemented on a 64-bit workstation equipped with a quad-core Intel® Xeon processor, 12 GB of RAM, and two NVIDIA Tesla C1060 GPU cards. The parallel implementations significantly outperform their serial counterparts, achieving speedups of up to 26x. They also scale well as the size of the dataset and the number of k nearest neighbors vary.
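To illustrate the kind of computation such a GPU port parallelizes, the sketch below shows a minimal CUDA kernel that computes the pairwise squared Euclidean distance matrix, the step shared by Isomap, LLE, and Laplacian eigenmaps when building the k-nearest-neighbor graph. This is an illustrative sketch under assumed sizes, not the authors' implementation; the kernel name pairwiseDist, the block size, and the data dimensions are placeholders.

    // Minimal sketch (not the paper's code): pairwise squared Euclidean
    // distances between N points in D dimensions, one thread per (i, j) pair.
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <cstdlib>

    #define BLOCK 16  // assumed thread-block edge; tune per GPU

    __global__ void pairwiseDist(const float *X, float *D2, int n, int d) {
        int i = blockIdx.y * blockDim.y + threadIdx.y;  // row point
        int j = blockIdx.x * blockDim.x + threadIdx.x;  // column point
        if (i < n && j < n) {
            float acc = 0.0f;
            for (int k = 0; k < d; ++k) {
                float diff = X[i * d + k] - X[j * d + k];
                acc += diff * diff;
            }
            D2[i * n + j] = acc;  // squared distance; apply sqrtf if needed
        }
    }

    int main() {
        const int n = 1024, d = 200;  // e.g. hyperspectral pixels x bands (assumed)
        size_t xBytes = (size_t)n * d * sizeof(float);
        size_t dBytes = (size_t)n * n * sizeof(float);

        float *hX = (float *)malloc(xBytes);
        for (int i = 0; i < n * d; ++i) hX[i] = (float)(i % 97) / 97.0f;  // dummy data

        float *dX, *dD2;
        cudaMalloc(&dX, xBytes);
        cudaMalloc(&dD2, dBytes);
        cudaMemcpy(dX, hX, xBytes, cudaMemcpyHostToDevice);

        dim3 threads(BLOCK, BLOCK);
        dim3 blocks((n + BLOCK - 1) / BLOCK, (n + BLOCK - 1) / BLOCK);
        pairwiseDist<<<blocks, threads>>>(dX, dD2, n, d);
        cudaDeviceSynchronize();

        float sample;
        cudaMemcpy(&sample, &dD2[1], sizeof(float), cudaMemcpyDeviceToHost);
        printf("d^2(point0, point1) = %f\n", sample);

        cudaFree(dX); cudaFree(dD2); free(hX);
        return 0;
    }

Launching one thread per (i, j) pair replaces the serial O(N²·D) double loop with N² concurrent threads, which is typically where much of the reported speedup in GPU manifold learning comes from; the remaining stages (k-NN selection, shortest paths for Isomap, and the eigendecomposition) each need their own parallel treatment.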
