July 16, 2018
Conference Paper

Improving Automation Transparency: Addressing Some of Machine Learning’s Unique Challenges

Abstract

A variety of factors can affect one’s reliance on an automated aid. These include one’s perception of the system’s trustworthiness, such as the perceived reliability of the system or one’s ability to understand the system’s underlying reasoning. A mismatch between the operator’s perception and the true capabilities and characteristics of the system can lead to inappropriate reliance on the tool. This improper use can manifest as either underutilization of the technology or complacency resulting from over-trusting the system. Increasing an automated tool’s transparency is one approach that enables the operator to rely on the technology more appropriately. Transparent automated systems provide additional information that allows the user to see the system’s intent and understand its underlying processes and capabilities. Several researchers have developed frameworks to support the design of more transparent automation. However, these frameworks may not fully consider the particular challenges to transparency design introduced by automation that leverages machine learning. Like all automation, these systems can benefit from transparency, but artificial intelligence poses new challenges that must be considered when designing for it. Unique considerations must be made regarding the type, amount, and level of transparency information conveyed to the user.

Revised: June 11, 2019 | Published: July 16, 2018

Citation

Fallon, C., and L.M. Blaha. 2018. Improving Automation Transparency: Addressing Some of Machine Learning’s Unique Challenges. In Proceedings of the 12th International Conference on Augmented Cognition (AC 2018), July 15-20, 2018, Las Vegas, NV. Lecture Notes in Computer Science, vol. 10915, 245-254. Cham: Springer. PNNL-SA-132786. doi:10.1007/978-3-319-91470-1_21