Optimizing matrix-matrix multiplication on Intel's Advanced Vector Extensions multicore processor
2020; Elsevier BV; Volume: 11; Issue: 4; Language: English
10.1016/j.asej.2020.01.003
ISSN: 2090-4495
Authors: Ashraf Mohamed Hemeida, S.A. Hassan, Salem Alkhalaf, Mountasser M.M. Mahmoud, Mahmoud A. Saber, Ayman M. Bahaa-Eldin, Tomonobu Senjyu, Abdullah H. Alayed
Topic(s): Distributed and Parallel Computing Systems
Abstract: This paper focuses on Intel Advanced Vector Extensions (AVX), a SIMD instruction set that has grown out of recent developments in Intel and AMD processors. These instructions operate on a block of data elements simultaneously rather than one at a time, and AVX supports a variety of applications such as image processing. Our goal is to accelerate and optimize square single-precision matrix multiplication for large matrices, with dimensions ranging from 2080 to 4512. The optimization combines AVX instruction sets, OpenMP parallelization, and memory access optimization to overcome bandwidth limitations. Unlike related work, this paper concentrates on several main techniques and the results they yield, and it presents parallel implementation guidelines for these algorithms in which the characteristics of the target architecture are taken into consideration. The work includes a comparative study of two popular compilers, the Intel C++ Compiler 17.0 versus the Microsoft Visual Studio C++ 2015 compiler, as well as a comparison between single-core and multicore platforms. The proposed optimized algorithms achieve performance improvements of 71%, 59%, and 56% for C = A·B, C = A·Bᵀ, and C = Aᵀ·B, respectively, compared with results obtained using the latest Intel Math Kernel Library 2017 SGEMV subroutines.
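The following is a minimal, illustrative sketch (not the authors' code) of the kind of kernel the abstract describes: single-precision C = A·B computed with 256-bit AVX intrinsics for the innermost SIMD work and OpenMP to distribute row blocks across cores. The function name sgemm_avx_omp, the matrix dimension N, and the row-major layout are assumptions made here for illustration; the paper's actual implementation additionally applies memory access (blocking) optimizations and the transposed variants A·Bᵀ and Aᵀ·B, which are not shown.

// Minimal AVX + OpenMP single-precision matrix multiply sketch.
// Compile with e.g. -mavx -fopenmp (GCC/Clang) or /arch:AVX /openmp (MSVC).
#include <immintrin.h>
#include <omp.h>
#include <vector>
#include <cstdio>

// C = A * B for N x N row-major matrices of floats.
// N is assumed to be a multiple of 8 so each row of C is covered
// by whole 8-float AVX vectors.
void sgemm_avx_omp(const float* A, const float* B, float* C, int N) {
    #pragma omp parallel for schedule(static)          // rows of C split across cores
    for (int i = 0; i < N; ++i) {
        for (int j = 0; j < N; j += 8) {
            __m256 acc = _mm256_setzero_ps();          // 8 partial sums of C[i][j..j+7]
            for (int k = 0; k < N; ++k) {
                __m256 a = _mm256_set1_ps(A[i * N + k]);        // broadcast A[i][k]
                __m256 b = _mm256_loadu_ps(&B[k * N + j]);      // B[k][j..j+7]
                acc = _mm256_add_ps(acc, _mm256_mul_ps(a, b));  // acc += a * b
            }
            _mm256_storeu_ps(&C[i * N + j], acc);
        }
    }
}

int main() {
    const int N = 2080;   // smallest matrix size quoted in the abstract
    std::vector<float> A(N * N, 1.0f), B(N * N, 1.0f), C(N * N, 0.0f);
    double t0 = omp_get_wtime();
    sgemm_avx_omp(A.data(), B.data(), C.data(), N);
    double t1 = omp_get_wtime();
    std::printf("C[0] = %.1f, time = %.3f s\n", C[0], t1 - t0);
    return 0;
}

In this naive form the B accesses stride through memory column-block by column-block, which is why the abstract's memory access optimizations (blocking/tiling, or working on Bᵀ) matter for large sizes such as 2080 to 4512.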