Trusted and Responsible Artificial Intelligence
Current artificial intelligence (AI) evaluation processes can mislead scientists into developing models that are biased, not operational in practice, and lacking in transparency. Understanding how a machine learning model's performance will change during real-world operation, identifying as many modes of model failure as possible, and explaining why a model behaves as it does are especially important for mission-critical systems that support national security missions.
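A minimal sketch of the first of these questions, using hypothetical stand-ins (a synthetic scikit-learn dataset, a logistic regression model, and a simulated covariate shift) rather than any actual evaluation pipeline: accuracy measured on held-out development data can overstate how the same model will perform once the operational data drifts.

```python
# Hypothetical illustration: development-time accuracy vs. accuracy after a
# simulated distribution shift. Dataset, model, and shift are stand-ins only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "development" data used to train and validate the model.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Simulate operational data drifting from the development distribution
# by perturbing the feature values (a simple covariate shift).
X_shifted = X_val + rng.normal(loc=0.5, scale=0.5, size=X_val.shape)

print("validation accuracy :", accuracy_score(y_val, model.predict(X_val)))
print("post-shift accuracy :", accuracy_score(y_val, model.predict(X_shifted)))
```

The gap between the two printed numbers is one simple signal that an evaluation run only on curated development data may not predict operational behavior.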