PaKman: A Scalable Algorithm for Generating Genomic Contigs on Distributed Memory Machines
De novo genome assembly is a fundamental problem in bioinformatics that aims to assemble the DNA sequence of an unknown genome from numerous short DNA fragments (known as reads) obtained from it. With the advent of high-throughput sequencing technologies, billions of reads can be generated in a matter of hours, necessitating efficient parallelization of the assembly process. While multiple parallel solutions have been proposed in the past, conducting assembly at scale remains challenging because of the inherent complexities associated with data movement and the irregular access footprints of memory and I/O operations. In this paper, we present a novel algorithm, called PaKman, to address the problem of performing large-scale genome assemblies on a distributed memory parallel computer. Our approach focuses on improving performance through a combination of novel data structures and algorithmic strategies for reducing the communication and I/O footprint during the assembly process. PaKman presents a solution for the two most time-consuming phases in the full genome assembly pipeline, namely, k-mer counting and contig generation. A key aspect of our algorithm is its graph data structure (PaK-Graph), which comprises fat nodes (or what we call "macro-nodes") that reduce the communication burden during contig generation. We present an extensive performance and qualitative evaluation of our algorithm across a wide range of genomes (varying in both size and species group), including comparisons to other state-of-the-art parallel assemblers. Our results demonstrate the ability to achieve near-linear speedups on up to 16K cores (tested) on the NERSC Cori supercomputer; perform better than or comparably to other state-of-the-art distributed-memory and shared-memory tools while delivering comparable (if not better) quality; and significantly reduce time to solution.
For instance, PaKman is able to generate a high-quality set of assembled contigs for complex genomes such as the human and bread wheat genomes in under a minute on 16K cores. In addition, PaKman successfully processed a 3.1 TB simulated dataset of one of the largest known genomes to date, Ambystoma mexicanum (the axolotl), in just over 200 seconds on 16K cores.
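To make the k-mer counting phase concrete: a k-mer is a substring of length k, and counting enumerates every length-k window of every read and tallies its occurrences. The sketch below is a minimal serial illustration of this operation, not PaKman's distributed implementation (which partitions k-mers across processes and uses the PaK-Graph structure described above); the function name and sample reads are our own for illustration.

```python
# Minimal serial k-mer counting sketch (illustrative only; PaKman
# distributes this work across processes on a parallel machine).
from collections import Counter

def count_kmers(reads, k):
    """Tally every length-k substring across a collection of reads."""
    counts = Counter()
    for read in reads:
        # Slide a window of length k over the read.
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

if __name__ == "__main__":
    reads = ["ACGTAC", "CGTACG"]
    print(count_kmers(reads, 3))
```

A read of length L contributes L - k + 1 k-mers, so at billions of reads the tally far exceeds a single node's memory, which is why scalable assemblers distribute this step.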
Revised: January 12, 2021 | Published: May 1, 2021