This paper describes the scalability of linear algebra kernels based on a remote memory access (RMA) approach. This approach differs from other parallel linear algebra implementations in its explicit use of shared memory and RMA communication rather than message passing, and it is suitable for both clusters and scalable shared-memory systems. Experimental results on large-scale systems (a Linux-InfiniBand cluster and a Cray XT) demonstrate consistent performance advantages over the ScaLAPACK suite, the leading implementation of parallel linear algebra algorithms in use today. For example, on a Cray XT4 with a matrix size of 102400, our RMA-based matrix multiplication achieved over 55 teraflops on 10000 processes, while ScaLAPACK's pdgemm measured close to 42 teraflops.
Revised: December 1, 2010 | Published: September 13, 2010
Citation
Krishnan M., R.R. Lewis, and A. Vishnu. 2010. Scaling Linear Algebra Kernels using Remote Memory Access. In Proceedings of the 39th International Conference on Parallel Processing Workshops (ICPPW), 369-376. Los Alamitos, California: IEEE Computer Society. PNNL-SA-71737. doi:10.1109/ICPPW.2010.57