Large computing systems, including clusters, clouds, and grids, provide high-performance capabilities that can be applied to many applications. As these systems become more ubiquitous and the scope of analysis performed on them grows, there is a growing need for applications that 1) do not require users to learn the details of high-performance systems, and 2) use these systems flexibly and adaptively to deliver the best time-to-solution for end users. We introduce a new adaptive interface design and a prototype implementation within MeDICi, an established middleware framework for high-performance computing systems, and describe the applicability of this adaptive design to a real-life scientific workflow. The adaptive framework provides an access model for implementing a processing pipeline on high-performance systems that are not local to the data source, making it possible for the compute capabilities at one site to be applied, in an automated process, to the analysis of data generated at another site. This adaptive design improves overall time-to-solution by dynamically moving the data analysis task to the most appropriate resource, reacting to failures and load fluctuations.
Revised: September 13, 2010 | Published: August 4, 2010
Citation
Gosney A., C.S. Oehmen, A.S. Wynne, and J.P. Almquist. 2010. "An Adaptive Middleware Framework for Scientific Computing at Extreme Scales." In The 2010 IEEE International Conference on Information Reuse and Integration (IRI 2010), 232-238. Piscataway, New Jersey: IEEE. PNNL-SA-71671. doi:10.1109/IRI.2010.5558934