The increase in the number of processors needed to build exascale systems implies that the mean time between failures will further decrease, making it increasingly important to develop scalable fault-tolerance techniques. In this paper we develop an efficient checksum-based approach to fault tolerance for data in volatile memory, i.e., an approach that does not need to save any data to stable persistent storage. The scheme is applicable in multiple scenarios, including: 1) online recovery of large read-only data structures from the memory of failed nodes, with very low storage overhead; 2) online recovery from soft errors in blocked data; and 3) online recovery of read/write data via in-memory checkpointing. The approach uses a logical multi-dimensional view of the data to be protected. Changing the dimensionality of the data view enables a trade-off among several factors: the storage overhead, the checksum generation time, the failure recovery time, and the number of faults that can be tolerated. Experimental results demonstrating the effectiveness of the approach on a Cray XE6 system are presented.
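To make the dimensionality trade-off concrete, the sketch below (ours, not code from the paper) views data blocks as a logical 2D grid and keeps one XOR-parity checksum block per row, so any single lost block in a row can be rebuilt from the row's survivors. The grid shape, block size, and function names are illustrative assumptions; the paper's scheme generalizes beyond this minimal setting.

```c
/*
 * Minimal sketch, assuming XOR parity as the checksum code: data blocks
 * form a logical 2D grid with one checksum block per row. Storage
 * overhead is one block per row; one lost block per row is recoverable.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define ROWS 3        /* logical grid dimensions (illustrative values) */
#define COLS 4
#define BLOCK_BYTES 8 /* size of each data block */

typedef unsigned char block_t[BLOCK_BYTES];

/* XOR-accumulate src into dst. */
static void xor_into(block_t dst, const block_t src) {
    for (int i = 0; i < BLOCK_BYTES; i++)
        dst[i] ^= src[i];
}

/* Build one XOR checksum block per row of the logical 2D view. */
static void build_row_checksums(block_t data[ROWS][COLS], block_t csum[ROWS]) {
    for (int r = 0; r < ROWS; r++) {
        memset(csum[r], 0, BLOCK_BYTES);
        for (int c = 0; c < COLS; c++)
            xor_into(csum[r], data[r][c]);
    }
}

/* Recover the single lost block (r, lost_c) from its row checksum. */
static void recover_block(block_t data[ROWS][COLS], block_t csum[ROWS],
                          int r, int lost_c) {
    memcpy(data[r][lost_c], csum[r], BLOCK_BYTES);
    for (int c = 0; c < COLS; c++)
        if (c != lost_c)
            xor_into(data[r][lost_c], data[r][c]);
}

int main(void) {
    block_t data[ROWS][COLS], csum[ROWS], saved;

    /* Fill blocks with arbitrary deterministic contents. */
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            for (int i = 0; i < BLOCK_BYTES; i++)
                data[r][c][i] = (unsigned char)(r * 31 + c * 7 + i);

    build_row_checksums(data, csum);

    /* Simulate loss of block (1, 2), e.g. a failed node's memory. */
    memcpy(saved, data[1][2], BLOCK_BYTES);
    memset(data[1][2], 0, BLOCK_BYTES);

    recover_block(data, csum, 1, 2);

    printf("recovery %s\n",
           memcmp(saved, data[1][2], BLOCK_BYTES) == 0 ? "succeeded" : "failed");
    return 0;
}
```

Raising the dimensionality of the view, e.g., adding column checksums to this grid, would increase checksum storage and generation time but allow more simultaneous block losses to be tolerated, which is the trade-off the abstract describes.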
Revised: February 1, 2016 | Published: September 9, 2014
Citation
Arafat M.H., S. Krishnamoorthy, and P. Sadayappan. 2014. Checksumming strategies for data in volatile memories. In 43rd International Conference on Parallel Processing Workshops (ICPPW 2014), September 9-12, 2014, Minneapolis, Minnesota, 245-254. Piscataway, New Jersey: IEEE. PNNL-SA-103603. doi:10.1109/ICPPW.2014.41