Computational science and engineering communities develop complex application software in which multiple mathematical models must interact with one another. Partly because of the complexity of verifying scientific software, and partly because of the way incentives work in science, these codes have historically been insufficiently tested. With a spotlight on the results produced with scientific software, and growing awareness that software testing and verification are critical to the reliability of those results, testing is gaining more attention from development teams. However, many science teams struggle to find a good solution for themselves, owing either to a lack of training or a lack of resources within the team. In this experience paper, we describe test development methodologies used in two different scenarios: the first describes a methodology for building granular tests where none existed before, while the second demonstrates a methodology for selecting test cases that build confidence in the software through a process similar to scaffolding. The common insight from both experiences is that testing should be part of software design from the beginning, for better software and better scientific productivity.
Published: June 2, 2018 | Revised: September 20, 2019
Citation
Dubey, A., and H. Wan. 2018. Methodology for Building Granular Testing in Multicomponent Scientific Software. In Proceedings of the International Workshop on Software Engineering for Science (SE4Science 2018), June 2, 2018, Gothenburg, Sweden, 9-15. New York, NY: ACM. PNNL-SA-132464. doi:10.1145/3194747.3194751