Thrust 1: Models and Methods
MARS seeks to advance methods that make AI systems and machine learning models more transparent and interpretable for domain scientists.
An AI system can be labeled interpretable only if the logical chain behind its predictions and classifications can be articulated in a way that lets a human place that chain in a larger context. This requires representing a problem so that each prediction can be related to the underlying data and decisions that support it.
This thrust will explore methods for codifying problems so that they can be communicated to an artificial reasoning system for interrogation, and so that the resulting information and predictions can be interpreted and verified by the human requesting them.
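As a minimal sketch of what "relating a prediction to the underlying data and decisions that support it" can look like, the following example returns a prediction together with the chain of rules that produced it. The rule names, thresholds, and the sensor-health task are purely illustrative assumptions, not part of the MARS program.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    feature: str      # input variable the rule inspects
    threshold: float  # value the feature must exceed
    description: str  # human-readable statement of the check

def predict_with_trace(sample, rules):
    """Return a label plus the logical chain of checks supporting it."""
    trace = []
    for rule in rules:
        passed = sample[rule.feature] > rule.threshold
        trace.append(f"{rule.description}: {'yes' if passed else 'no'}")
        if not passed:
            # The first failed check determines the outcome, and the
            # trace records exactly which evidence led there.
            return "anomalous", trace
    return "nominal", trace

# Hypothetical rules for a sensor-health check (illustrative only).
rules = [
    Rule("voltage", 3.0, "voltage above 3.0 V"),
    Rule("temperature", -10.0, "temperature above -10 C"),
]
label, trace = predict_with_trace({"voltage": 3.3, "temperature": 21.0}, rules)
```

A human reviewer can verify the prediction by reading `trace`, which states each decision in the order it was applied; richer models would need analogous machinery to expose their reasoning.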
Thrust 2: AI Systems
MARS is interested in advancing research that ensures AI systems can strategically adapt to dynamic environments and communicate their decision processes.
Strategic, in this context, refers to the ability to balance exploration and exploitation in sequential decision-making settings so that science objectives are best addressed, even with limited information and only partial knowledge of the environment.
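The exploration-exploitation balance described above can be sketched with a classic epsilon-greedy multi-armed bandit: with probability epsilon the agent explores a random action, otherwise it exploits the action with the best reward estimate so far. The environment, arm payoffs, and parameter values below are assumptions for illustration, not a method prescribed by the program.

```python
import random

def epsilon_greedy_bandit(pull, n_arms, steps, epsilon=0.1, seed=0):
    """Run epsilon-greedy action selection against a reward function `pull`.

    Returns the running-mean reward estimate and pull count per arm.
    """
    rng = random.Random(seed)
    counts = [0] * n_arms
    values = [0.0] * n_arms  # incremental mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                        # explore
        else:
            arm = max(range(n_arms), key=values.__getitem__)   # exploit
        reward = pull(arm)
        counts[arm] += 1
        # Incremental update of the mean estimate for the chosen arm.
        values[arm] += (reward - values[arm]) / counts[arm]
    return values, counts

# Hypothetical environment: three noisy arms, arm 2 pays best on average.
env_rng = random.Random(1)
true_means = [0.2, 0.5, 0.8]
values, counts = epsilon_greedy_bandit(
    lambda a: env_rng.gauss(true_means[a], 0.1), n_arms=3, steps=2000)
```

After enough steps the agent concentrates pulls on the best arm while still occasionally sampling the others, which is the kind of adaptive behavior under partial model knowledge this thrust targets.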
Real-world domains that would benefit from such systems include military applications, power grid security, cybersecurity, physical security, materials science, Earth systems, and transportation system management.