We focus on merging high-performance computing with data-centric analysis capabilities to solve significant problems in energy, the environment, and national security. PNNL has made scientific breakthroughs and advanced frontiers in high-performance computer science, computational biology and bioinformatics, subsurface simulation modeling, and multiscale mathematics.
In April 2014, the Atmospheric Radiation Measurement (ARM) Data Integration team hosted its counterparts from Brookhaven National Laboratory for a three-day meeting aimed at deepening their expertise with the ARM Data Integrator, or ADI. During the tutorial, the combined teams honed their skills using “pair programming,” a technique in which two programmers share one workstation so code can be written and reviewed simultaneously for immediate feedback. The group also used pair programming to migrate existing code to a new Linux-based operating system.
Hiding the complexities that underpin exascale system operations from application developers is a critical challenge facing teams designing next-generation supercomputers. To tackle the problem, PNNL computer scientists are developing formal design processes based on Concurrent Collections (CnC), a programming model that combines task and data parallelism. Using graphs, they transformed the LULESH proxy application, which models hydrodynamics, into a complete CnC specification. These specifications capture data and control dependencies and separate computations from implementation concerns, concealing the complexities of exascale systems, dramatically decreasing development cost, and increasing opportunities for automatic performance optimization.
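The core CnC idea can be illustrated with a minimal sketch: a specification names the computations (step collections), the data they exchange (item collections, addressed by tags), and the dependencies between them, while scheduling is left entirely to a runtime. The classes and names below are hypothetical illustrations of that separation, not the CnC API used in the LULESH work.

```python
# Minimal illustration of the Concurrent Collections (CnC) style:
# the *specification* says what each step consumes and produces;
# the *runtime* (here, a trivial sequential loop) decides execution.
# All names are hypothetical, for illustration only.

class Graph:
    def __init__(self):
        self.items = {}       # item collection: tag -> value
        self.pending = []     # prescribed step instances (by tag)

    def put(self, tag, value):
        self.items[tag] = value

    def prescribe(self, tag):
        self.pending.append(tag)

    def run(self, step):
        # Runtime concern: execute prescribed steps whose inputs exist.
        while self.pending:
            step(self, self.pending.pop(0))

# Specification concern: the step's data and control dependencies.
def advance(graph, t):
    u = graph.items[t]            # consume the item tagged t
    graph.put(t + 1, u * 0.5)     # produce the item tagged t+1
    if t + 1 < 3:
        graph.prescribe(t + 1)    # control: prescribe the next step

g = Graph()
g.put(0, 8.0)       # initial data
g.prescribe(0)      # initial control
g.run(advance)
print(g.items[3])   # -> 1.0
```

Because the step only declares what it reads and writes, the same specification could be driven by a parallel runtime without changing the step code, which is the property that enables automatic performance optimization.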
PNNL Computer Scientists Share Editing Duties for Journal of Parallel and Distributed Computing Special Issue
Dr. John Feo and Dr. Antonino Tumeo, of CSMD's Data Intensive Scientific and High Performance Computing groups, respectively, will serve as guest editors for a special issue of the Journal of Parallel and Distributed Computing devoted to "Architectures and Algorithms for Irregular Applications." The special issue will explore new solutions for efficient design, development, and execution of irregular applications in current and future computing system architectures.
To improve the numerical methods and algorithms used to analyze and model fluid flows and the forces acting on them across a range of scales and boundary conditions, scientists from PNNL and the University of South Florida demonstrated the viability of a new method: smoothed particle hydrodynamics-continuous boundary force, or SPH-CBF. The method solves the Navier-Stokes equations subject to Robin boundary conditions within the SPH framework for partial differential equations, a significant advance over existing SPH theory. The formulation also draws on SPH's strengths in modeling diverse physics problems, including atmospheric systems, energy materials and processes, subsurface flow and transport, and high-strength materials, all relevant to key DOE mission objectives.
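For reference, a minimal sketch in standard textbook notation (the symbols here are the usual ones, not necessarily the authors' exact formulation): the incompressible Navier-Stokes equations together with a Robin boundary condition, which generalizes both Dirichlet and Neumann conditions.

```latex
% Incompressible Navier--Stokes (momentum and continuity):
\rho\left(\frac{\partial \mathbf{u}}{\partial t}
        + \mathbf{u}\cdot\nabla\mathbf{u}\right)
  = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \rho\,\mathbf{g},
\qquad \nabla\cdot\mathbf{u} = 0 .

% Robin condition on the domain boundary \partial\Omega,
% recovering Dirichlet for b = 0 and Neumann for a = 0:
a\,\mathbf{u} + b\,\frac{\partial \mathbf{u}}{\partial n} = \mathbf{h}
  \quad \text{on } \partial\Omega .
```

Handling the Robin case is the difficult part for particle methods, since the normal-derivative term must be evaluated at a boundary that the meshless particles only approximate.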
To meet DOE's power consumption goals (20-25 MW) for exascale computing systems and remain practical tools, next-generation systems must be considerably more power and energy efficient than today's supercomputers. Using methods that span processor architecture and system integration as well as performance and power modeling, scientists in PNNL's Performance and Architecture Laboratory, or PAL, have developed power-aware algorithms built on an accurate per-core proxy power sensor model that estimates the active power of each core. Their methods have also delivered the first workload-specific quantitative power modeling capability that accurately captures workload phases, their impact on power consumption, and the effects of system architecture and processor clock speed.
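A proxy power sensor of this kind is commonly modeled as a linear combination of per-core performance-counter rates plus an idle baseline; different workload phases then show up as different counter profiles and hence different power estimates. The counters and coefficients below are hypothetical placeholders (in practice they would be calibrated against measured power), not PAL's actual model.

```python
# Hedged sketch of a per-core proxy power sensor: active power
# estimated as a linear function of performance-counter rates.
# Coefficients are hypothetical; real models are calibrated
# offline against measured socket or rail power.

COEF_INSTR = 1.0e-8   # watts per (instruction/s), hypothetical
COEF_MISS = 1.0e-6    # watts per (cache miss/s), hypothetical
IDLE_POWER = 2.0      # idle watts per core, hypothetical

def core_active_power(instr_rate, miss_rate):
    """Estimate one core's active power (W) from counter rates."""
    return COEF_INSTR * instr_rate + COEF_MISS * miss_rate + IDLE_POWER

# Distinct workload phases produce distinct power estimates:
compute_phase = core_active_power(2.0e9, 1.0e6)  # 20 + 1 + 2 = 23.0 W
memory_phase = core_active_power(0.5e9, 4.0e6)   # 5 + 4 + 2 = 11.0 W
print(compute_phase, memory_phase)
```

Summing the per-core estimates gives a workload-specific power trace without dedicated hardware sensors, which is what makes the model useful for power-aware scheduling decisions.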