This paper presents Gswitch, a pattern-based algorithmic autotuning system that dynamically switches to suitable optimization variants with negligible overhead. Its novelty is a small set of algorithmic patterns that enables configurable assembly of algorithm variants. Gswitch's fast transitions are driven by a machine learning model trained on 644 real-world graphs from the Network Repository. In addition, Gswitch provides a succinct programming interface that hides all low-level tuning details. We evaluate Gswitch on typical graph algorithms (BFS, CC, PR, SSSP, and BC) on Nvidia Kepler and Pascal GPUs. The results show that Gswitch runs up to 10× faster than the best configuration of state-of-the-art programmable GPU-based graph processing libraries on ten representative graphs. Gswitch wins on 92.4% of the 644 graphs, the largest dataset evaluation reported to date.
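To illustrate the general idea of switching algorithm variants at runtime (this is only a minimal C++ sketch of per-iteration variant selection, not Gswitch's actual API or model; the function and variant names are hypothetical), consider a BFS whose each step picks between a push-style and a pull-style traversal based on cheap runtime features such as frontier size:

```cpp
#include <cstddef>
#include <vector>

// Two interchangeable variants of one traversal step (illustrative names only).
enum class Variant { Push, Pull };

// Stand-in for a learned predictor: choose a variant from cheap runtime
// features. A real autotuner would use a model trained offline (the paper
// trains one on 644 graphs); the threshold here is just a placeholder.
Variant choose_variant(std::size_t frontier_size, std::size_t num_vertices) {
    return (frontier_size * 20 < num_vertices) ? Variant::Push : Variant::Pull;
}

void bfs(const std::vector<std::vector<int>>& adj, int source) {
    std::vector<bool> visited(adj.size(), false);
    std::vector<int> frontier{source};
    visited[source] = true;

    while (!frontier.empty()) {
        std::vector<int> next;
        // Per-iteration switch: each step may run a different variant,
        // chosen with negligible overhead.
        if (choose_variant(frontier.size(), adj.size()) == Variant::Push) {
            // Push: expand edges of the current frontier.
            for (int u : frontier)
                for (int v : adj[u])
                    if (!visited[v]) { visited[v] = true; next.push_back(v); }
        } else {
            // Pull: each unvisited vertex looks for an already-visited neighbor.
            for (int v = 0; v < static_cast<int>(adj.size()); ++v)
                if (!visited[v])
                    for (int u : adj[v])
                        if (visited[u]) { visited[v] = true; next.push_back(v); break; }
        }
        frontier.swap(next);
    }
}
```

The push/pull choice above mirrors the well-known direction-optimizing BFS trade-off; Gswitch's contribution, per the abstract, is generalizing such per-step variant selection across a small set of algorithmic patterns and driving it with a trained model rather than a fixed heuristic.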
Revised: February 12, 2021
Published: February 16, 2019
Citation
Meng, K., J. Li, G. Tan, and N. Sun. 2019. A Pattern Based Algorithmic Autotuner for Graph Processing on GPUs. In Proceedings of the 24th Symposium on Principles and Practice of Parallel Programming (PPoPP 2019), February 16-20, 2019, Washington, DC, 201-213. New York, New York: Association for Computing Machinery. PNNL-SA-140392. doi:10.1145/3293883.3295716