Report

Yes, No, Maybe So: Human Factors Considerations for Fostering Calibrated Trust in Foundation Models Under Uncertainty

Abstract

High-stakes analytical environments require analysts to evaluate evidence and generate conclusions that inform critical decisions, often under conditions of uncertainty. Probabilistic decision-making based on incomplete or inaccurate information can reduce productivity, compromise national interests, and endanger public safety. Researchers are developing expert systems built on foundation models (FMs) to support analysts’ decision-making by enabling human-artificial intelligence (AI) teaming, in part through the quantification and expression of uncertainty information. As FMs mature, it is imperative to consider analysts’ needs for appropriately interpreting and using uncertainty information. However, prior research indicates that it remains unclear how analysts engage with FM-generated uncertainty information and the extent to which these interactions influence trust in, and reliance on, expert systems. We plan to review the state of the science and conduct an exploratory, qualitative study to (a) understand how properly communicated uncertainty can foster calibrated trust and appropriate reliance and (b) identify approaches for effectively conveying FM-generated uncertainty information during analytical workflows. We will conduct semi-structured interviews with analysts from a specific high-stakes analytical environment to capture their current experiences with job-related uncertainty and their impressions of FM-generated uncertainty information. During the interviews, participants will be shown several different FM outputs and invited to discuss their thoughts and beliefs about the uncertainty information displayed, providing insight into how uncertainty may influence trust and reliance. The results of this study will help us better understand how analysts currently interpret and use uncertainty information. Our findings may inform human factors recommendations for effectively conveying uncertainty information to foster calibrated trust in, and appropriate reliance on, expert systems. Interaction designers and FM developers can use this knowledge to enhance human-AI teaming and ensure the responsible deployment of FM-based expert systems in analytical workflows.

Published: July 26, 2025

Citation

Dreslin, B.D., and J.A. Baweja. 2025. Yes, No, Maybe So: Human Factors Considerations for Fostering Calibrated Trust in Foundation Models Under Uncertainty. Richland, WA: Pacific Northwest National Laboratory.