PNNL @ IPDPS 2022
PNNL researchers organize workshops and present their latest research findings in parallel computation

Virtual
From May 30 through June 3, PNNL researchers will hold workshops and present their latest research findings on parallel computing at the 36th IEEE International Parallel and Distributed Processing Symposium. PNNL scientists Nathan Tallent, Kevin Barker, Ponnuswamy Sadayappan, Antonino Tumeo, Sayan Ghosh, and Priyanka Ghosh served on program area committees for the conference.
PNNL Presentations, Posters, and Workshops
Heterogeneity in Computing Workshop

Workshop | May 30 | 6:00 a.m. PDT
Workshop Chair: Ryan Friese
Technical Program Committee member: Burcu Mutlu
Heterogeneous computing systems comprise growing numbers of increasingly diverse computing resources that can be local to one another or geographically distributed. The opportunity and need to effectively utilize heterogeneous computing resources have given rise to the notions of cluster computing, grid computing, and cloud computing. HCW encourages paper submissions from both the research and industry communities presenting novel ideas on theoretical and practical aspects of computing in heterogeneous computing environments.
QCCC-22: The First International Workshop on Quantum Classical Cooperative Computing
Workshop | May 30 | 7:20 a.m. PDT
Workshop Chair: Ang Li
Program Committee members: Samuel Stein · Bo Fang

The purpose of this workshop is to explore innovative ways of quantum-classical cooperative computing (QCCC) to make quantum computing more effective and scalable on noisy intermediate-scale quantum (NISQ) platforms. The workshop will focus heavily on how classical computing can improve NISQ device execution efficiency and scalability, or compensate for noise impact or technology deficiencies, with particular emphasis on demonstrable approaches on existing NISQ platforms, such as IBM-Q, IonQ, and Rigetti. READ MORE.
Hybrid Quantum / Classical Algorithms for Machine Learning

QCCC-22 Keynote | May 30 | 7:25 a.m. PDT
Presenter: Nathan Wiebe
This talk will feature a new approach to quantum machine learning that uses classical machine learning to learn a representation of a dataset that can be embedded in a quantum computer. We will then consider applying this strategy to train a generative model for ground states of chemistry Hamiltonians. Given data, this model predicts ground states through a classically learned representation that converts nuclear positions into weights for a quantum neural network, which in turn generates the state. This work shows that hybrid quantum/classical methods can be a powerful way to learn how to generate ground states and, in some settings, may even offer a cheaper alternative to the approximate ground-state preparation that phase estimation provides. READ MORE.
Improving Variational Quantum Algorithms performance through Weighted Quantum Ensembles
QCCC-22 Talk | May 30 | 10:00 a.m. PDT
Presenter: Samuel Stein
Due to considerable error rates and too few qubits to support full-scale quantum error correction, the Variational Quantum Algorithm (VQA), which adopts a classical optimizer to train a parameterized quantum circuit, stands out as one of the most promising algorithm families for Noisy Intermediate-Scale Quantum (NISQ) devices. Prominent VQAs include (i) the Variational Quantum Eigensolver (VQE) for investigating molecular and nuclear structures and dynamics in quantum chemistry and nuclear physics; (ii) the Quantum Approximate Optimization Algorithm (QAOA) for optimization problems (e.g., MaxCut in graph analytics); and (iii) Quantum Neural Networks (QNNs) for quantum machine learning. We demonstrate the implementation of a weighted quantum ensemble that improves VQA performance with respect to both final converged performance and training speed.
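The classical-optimizer-in-the-loop structure described above can be illustrated with a toy example: a one-parameter "circuit" whose energy a classical gradient-descent loop minimizes. This is a minimal sketch in plain NumPy (simulating the circuit as a matrix rather than running on a NISQ device); the function names are illustrative, not from any VQA library.

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the Y axis (the 'ansatz')."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

Z = np.diag([1.0, -1.0])  # toy "Hamiltonian" whose ground state we seek

def energy(theta):
    psi = ry(theta) @ np.array([1.0, 0.0])  # apply ansatz to |0>
    return float(psi @ Z @ psi)             # expectation <psi|Z|psi> = cos(theta)

# Classical loop: gradient descent on theta using the parameter-shift rule.
theta, lr = 0.1, 0.4
for _ in range(100):
    grad = (energy(theta + np.pi / 2) - energy(theta - np.pi / 2)) / 2
    theta -= lr * grad

# Converges toward theta = pi, where <Z> = -1 (the ground-state energy).
```

A real VQA replaces `energy` with noisy measurements from a quantum device and optimizes many parameters at once, which is exactly where ensembling and noise compensation become relevant.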
Quantum Processor Performance through Quantum Distance Metrics Over An Algorithm Suite
QCCC-22 Talk | May 30 | 10:30 a.m. PDT
Presenter: Samuel Stein
Currently, we reside in the Noisy Intermediate-Scale Quantum (NISQ) computing era. Current quantum processors (QPUs) suffer from a multitude of noise factors that produce erroneous results, including, but not limited to, imperfect gate fidelity, state preparation and measurement (SPAM) errors, state decay, manufacturing imperfections, and external interference. Furthermore, different QPUs may use different underlying technologies to implement the physical processor: IBM and Rigetti utilize superconducting architectures, whilst IonQ and Honeywell use trapped ions. The underlying architecture can affect the performance of the QPU; for example, trapped-ion quantum processors have no topological constraints, whereas superconducting processors require physical links between any two qubits that are to be entangled. The SWAP operation allows the positions of two qubits to be exchanged, though at a cost in performance due to the aforementioned noise factors. We demonstrate the QASMBench benchmarking suite, its usage across a multitude of IBM Quantum Experience devices, and how one might use this benchmarking to estimate a quantum processor's performance.
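The cost of the SWAP operation mentioned above comes from its standard decomposition into three two-qubit gates: SWAP equals CNOT(0→1) · CNOT(1→0) · CNOT(0→1), so routing one logical two-qubit gate across a missing physical link spends three extra noisy gates. A minimal sketch verifying the identity with explicit 4×4 matrices (illustrative, not taken from any vendor SDK):

```python
import numpy as np

# Basis ordering |q0 q1>: |00>, |01>, |10>, |11>.
CNOT_01 = np.array([[1, 0, 0, 0],   # control = qubit 0, target = qubit 1
                    [0, 1, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0]])
CNOT_10 = np.array([[1, 0, 0, 0],   # control = qubit 1, target = qubit 0
                    [0, 0, 0, 1],
                    [0, 0, 1, 0],
                    [0, 1, 0, 0]])
SWAP = np.array([[1, 0, 0, 0],      # exchanges |01> and |10>
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

# SWAP = CNOT(0->1) . CNOT(1->0) . CNOT(0->1): three noisy gates per swap.
assert np.array_equal(CNOT_01 @ CNOT_10 @ CNOT_01, SWAP)
```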
GrAPL 2022: Workshop on Graphs, Architectures, Programming, and Learning

Workshop | May 30 | 7:30 a.m. PDT
Organizers: Antonino Tumeo · John Feo · Mahantesh Halappanavar
GrAPL is the result of the combination of two IPDPS workshops: GABB: Graph Algorithms Building Blocks and GraML: Workshop on The Intersection of Graph Algorithms and Machine Learning. READ MORE.
NWHy: A Framework for Hypergraph Analytics: Representations, Data Structures and Algorithms

GrAPL Workshop Paper | May 30 | 11:15 a.m. PDT
PNNL Authors: Jesun Firoz · Andrew Lumsdaine
This paper presents NWHypergraph (NWHy), a parallel high-performance C++ framework for both exact and approximate hypergraph analytics. NWHy provides data structures for various representations of hypergraphs and their associated graph projections (lower-order approximations), including a new technique for hypergraph representation called adjoin graphs. READ MORE.
Bit-GraphBLAS: Bit-Level Optimizations of Matrix-Centric Graph Processing on GPU

Accepted Paper | May 31 | 11:39 a.m. PDT
PNNL Authors: Ang Li · Nathan Tallent · Kevin Barker
In a general graph data structure like an adjacency matrix, when edges are homogeneous, the connectivity of two nodes can be sufficiently represented using a single bit. This insight has, however, not yet been adequately exploited by existing matrix-centric graph processing frameworks. This work fills the void by systematically exploring the bit-level representation of graphs and the corresponding optimizations to graph operations. READ MORE.
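The single-bit-per-edge idea can be sketched as follows: each adjacency-matrix row is packed into 64-bit words, so set operations such as counting common neighbors reduce to a bitwise AND plus a popcount per word. This is a hypothetical illustration of the general technique, not the Bit-GraphBLAS API.

```python
def pack_adjacency(n, edges):
    """Adjacency matrix as n rows of bit-packed integers (1 bit per edge)."""
    words = (n + 63) // 64
    rows = [[0] * words for _ in range(n)]
    for u, v in edges:
        rows[u][v // 64] |= 1 << (v % 64)  # set bit v in row u
    return rows

def common_neighbors(rows, u, v):
    """Count shared neighbors: one AND + popcount per 64-bit word."""
    return sum(bin(a & b).count("1") for a, b in zip(rows[u], rows[v]))

# Example: a triangle 0-1-2 plus the edge 2-3 (undirected, so both directions).
edges = [(0, 1), (1, 0), (0, 2), (2, 0), (1, 2), (2, 1), (2, 3), (3, 2)]
adj = pack_adjacency(4, edges)
print(common_neighbors(adj, 0, 1))  # vertices 0 and 1 share neighbor 2 -> 1
```

On a GPU the same AND/popcount pattern maps onto wide hardware instructions, which is the source of the speedups the paper targets.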
High-order Line Graphs of Non-uniform Hypergraphs: Algorithms, Applications, and Experimental Analysis

Accepted Paper | June 1 | 11:51 a.m. PDT
PNNL Authors: Jesun Firoz · Sinan Aksoy · Ilya Amburg · Andrew Lumsdaine · Cliff Joslyn · Brenda Praggastis
In this work, we contribute the first scalable framework to approximate non-uniform hypergraph metrics using high-order s-line graphs of hypergraphs. Our versatile, multi-stage framework starts from the original hypergraph and consists of pre-processing, s-line graph construction, squeezing the s-line graph, and s-measure computation. More specifically, we present efficient, parallel algorithms to accelerate s-line graph construction. READ MORE.
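The central object here, the s-line graph, has a compact definition: its vertices are the hyperedges of the original hypergraph, and two hyperedges are adjacent when they share at least s vertices. A minimal sequential sketch of the construction (helper names are illustrative; NWHy's parallel algorithms are far more elaborate):

```python
from itertools import combinations

def s_line_graph(hyperedges, s):
    """Adjacency lists of the s-line graph, indexed by hyperedge position."""
    sets = [set(e) for e in hyperedges]
    adj = {i: set() for i in range(len(sets))}
    for i, j in combinations(range(len(sets)), 2):
        if len(sets[i] & sets[j]) >= s:   # overlap of at least s vertices
            adj[i].add(j)
            adj[j].add(i)
    return adj

H = [{1, 2, 3}, {2, 3, 4}, {4, 5}]
print(s_line_graph(H, 2))  # only hyperedges 0 and 1 share >= 2 vertices
```

The pairwise intersection test above is the quadratic bottleneck that the paper's parallel construction algorithms are designed to avoid.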
SpectralFly: Ramanujan Graphs as Flexible and Efficient Interconnection Networks

Accepted Paper | June 2 | 8:33 a.m. PDT
PNNL Authors: Stephen Young · Sinan Aksoy · Jesun Firoz · Roberto Gioiosa · Mark Raugas · Tobias Hagge · Juan Andres Escobedo Contreras
In recent years, graph-theoretic considerations have become increasingly important in the design of HPC interconnection topologies. One approach is to seek optimal or near-optimal families of graphs with respect to a particular graph-theoretic property, such as diameter. In this work, we consider topologies that optimize the spectral gap. We study a novel HPC topology, SpectralFly, designed around the Ramanujan graph construction of Lubotzky, Phillips, and Sarnak (LPS). READ MORE.
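As a concrete illustration of the property being optimized: for a d-regular graph, one common definition of the spectral gap is the distance between the degree d and the largest nontrivial eigenvalue magnitude of the adjacency matrix, and Ramanujan graphs achieve the asymptotically optimal bound of 2·sqrt(d−1) on that magnitude. A minimal sketch, assuming a regular graph (this is not the SpectralFly construction itself):

```python
import numpy as np

def spectral_gap(A):
    """Spectral gap d - |lambda_2| of a d-regular adjacency matrix A."""
    d = int(A.sum(axis=1)[0])                       # degree (assumes regularity)
    lam = np.sort(np.abs(np.linalg.eigvalsh(A)))[::-1]
    return d - lam[1]                               # lam[0] = d for regular graphs

# Complete graph K4 (3-regular): eigenvalues are 3, -1, -1, -1.
K4 = np.ones((4, 4)) - np.eye(4)
print(spectral_gap(K4))  # 3 - 1 = 2.0
```

A larger spectral gap implies better expansion, which for interconnects translates into low diameter and high bisection bandwidth at fixed radix.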
QuaL2M: Learning Quantitative Performance of Latency-Sensitive Code

Accepted Paper for the 17th International Workshop on Automatic Performance Tuning | June 3 | 5:05 a.m. PDT
PNNL Authors: Arun Sathanur · Nathan Tallent
Quantitative performance predictions are more informative than qualitative ones. However, modeling latency-sensitive code, whose cost distributions exhibit high variability and heavy tails, is extremely difficult. To date, quantitative prediction of such code has been limited to either special cases (e.g., best-case performance) or resource-intensive methods (e.g., simulation). We present QuaL2M, a method for learning the quantitative performance of latency-sensitive CPU code. READ MORE.
ExSAIS 2022: Workshop on Extreme Scaling of AI for Science

Workshop | June 3 | 8:00 a.m. PDT
Organizers: Antonino Tumeo · Mahantesh Halappanavar · Svitlana Volkova · Robert Rallo
The evolution of machine perception to machine learning and reasoning, and ultimately machine intelligence, has the potential to significantly accelerate and advance autonomous scientific discovery and the operation of scientific instruments. While machine reasoning will enable intelligent systems to better understand and interact with their physical world, machine intelligence, through modeling, simulation, and automation, closes the gap between experiments, extreme computing, and scientific discovery. To usher in this new era of autonomous science, advances in several areas of artificial intelligence and other disciplines (e.g., high-performance computing and data engineering) need to come together. Therefore, the goal of this workshop is to bring together researchers from diverse backgrounds to enable extreme scaling of AI for science. READ MORE.
Scaling AI for Science

ExSAIS Workshop Panel | June 3 | 9:05 a.m. PDT
PNNL Panelist: Draguna Vrabie
This panel discussion will address the overarching goal of enabling semi-autonomous and autonomous AI-driven predictive and prescriptive scientific discovery at scale by integrating extreme-scale heterogeneous and reconfigurable computing paradigms, multiscale mathematics, physics-based simulation, and data science and engineering to address challenges across science, scientific instruments, and security domains (e.g., biology, chemistry, and materials science).