September 2011
Novel High-Performance Hybrid System for Semantic Factoring of Graph Databases
PNNL team wins second place in "Billion Triple Challenge"
Massive amounts of data can be analyzed using a novel hybrid system for semantic factoring in graph databases. Each line represents a frequent relationship type between entities in the dataset.
Results: Imagine trying to analyze all of the English entries in Wikipedia. Now imagine you've got 20 times as much information. That's the challenge scientists face when working with data sets hundreds of gigabytes in size. Scientists at Pacific Northwest National Laboratory, Sandia National Laboratories, and Cray, Inc. developed an application to take on such massive data analysis challenges. Their novel high-performance computing application uses semantic factoring to organize data, bringing out hidden connections and threads.
The team then used their application to analyze the massive dataset from the Billion Triple Challenge, an international competition focused on demonstrating capability and innovation in handling very large semantic graph databases, known as SGDs.
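At its core, an SGD is a collection of subject-predicate-object triples. The sketch below illustrates the basic idea behind semantic factoring: grouping a stream of triples by predicate so each relationship type can be examined on its own. This is illustrative serial Python, not the team's parallel Cray XMT code, and the simplified N-Triples parsing is an assumption.

```python
# Minimal sketch of semantic factoring over RDF triples (illustrative only;
# the team's actual system ran in parallel on a Cray XMT, not in Python).
from collections import defaultdict

def parse_ntriple(line):
    """Split one N-Triples line into (subject, predicate, object).

    Assumes simple whitespace-delimited terms ending in ' .';
    a real parser must also handle quoted literals with embedded spaces.
    """
    subj, pred, rest = line.split(None, 2)
    obj = rest.rstrip().rstrip(".").strip()
    return subj, pred, obj

def factor_by_predicate(lines):
    """Group triples by predicate: each bucket is one 'semantic factor'."""
    factors = defaultdict(list)
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        subj, pred, obj = parse_ntriple(line)
        factors[pred].append((subj, obj))
    return factors

if __name__ == "__main__":
    sample = [
        '<http://example.org/alice> <http://xmlns.com/foaf/0.1/knows> <http://example.org/bob> .',
        '<http://example.org/alice> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://xmlns.com/foaf/0.1/Person> .',
    ]
    for pred, edges in factor_by_predicate(sample).items():
        print(pred, len(edges))
```

On a billion-triple dataset this grouping must happen in parallel over a very large shared memory, which is what motivated the Cray XMT approach described below.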
Why it matters: Science. Security. In both areas, people must turn massive data sets into knowledge that can be used to save lives.
Methods: As SGD technology scales to extremely large data stores, it becomes increasingly important to bring high-performance computing resources to bear on analysis, interpretation, and visualization, especially for understanding a database's innate structure. However, understanding the semantic structure of a vast SGD still requires both a coherent methodology and a high-performance computing platform on which to exercise the necessary methods.
The team took advantage of the Cray XMT architecture, whose large shared memory allowed all 624 gigabytes of input data to be held in RAM. They were then able to scalably perform a variety of novel descriptive analyses of the inherent semantics of the Billion Triple Challenge dataset, including identifying the ontological structure, measuring the sensitivity of connectivity within the relationships, and characterizing the interaction among the dataset's different contributing sources.
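One of these analyses, identifying the ontological structure, can be pictured as lifting instance-level triples to class-level relationship counts, which is essentially what the figure above depicts. The following is a minimal serial sketch assuming triples are already parsed into tuples; the function name and the handling of untyped nodes are illustrative choices, not the team's algorithm.

```python
# Sketch: abstract instance-level triples into a class-level "ontology map"
# by counting how often each predicate links instances of one class to
# instances of another. Illustrative serial code; the team's analysis ran
# in parallel on the Cray XMT.
from collections import Counter, defaultdict

RDF_TYPE = "<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>"

def class_graph(triples):
    """Return counts of (subject_class, predicate, object_class) links.

    `triples` is a list of (subject, predicate, object) tuples; it is
    traversed twice, so it must not be a one-shot generator.
    """
    # First pass: record the declared class(es) of each node.
    types = defaultdict(set)
    for subj, pred, obj in triples:
        if pred == RDF_TYPE:
            types[subj].add(obj)
    # Second pass: lift every non-type edge to each class pair it connects.
    counts = Counter()
    for subj, pred, obj in triples:
        if pred == RDF_TYPE:
            continue
        for s_cls in types.get(subj, {"(untyped)"}):
            for o_cls in types.get(obj, {"(untyped)"}):
                counts[(s_cls, pred, o_cls)] += 1
    return counts
```

Sorting the resulting counts exposes the most frequent class-to-class relationship types, the kind of abstraction shown as lines in the figure.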
What's next: The semantic database system research team is developing a prototype that can be adapted to a variety of application domains and datasets; prototype testing and evaluation will include the bio2rdf.org dataset and future Billion Triple Challenge datasets.
Acknowledgments: This work was funded under the Center for Adaptive Supercomputing Software-Multithreaded Architectures (CASS-MT) at the Department of Energy's Pacific Northwest National Laboratory.
Research Team: Cliff Joslyn, Bob Adolf, Sinan al-Saffar, John Feo, David Haglin (PNNL); Eric Goodman, Greg Mackey (Sandia); and David Mizell (Cray, Inc.)
Reference: Joslyn C, R Adolf, S al-Saffar, J Feo, E Goodman, D Haglin, G Mackey, and D Mizell. 2010. "High Performance Semantic Factoring of Giga-Scale Semantic Graph Databases." Semantic Web Challenge, Billion Triple Challenge 2010.