Artificial Intelligence (AI) has been a part of the medical community for decades in the form of Clinical Decision Support Systems (CDSS) that aid physicians in the diagnosis and categorization of patients (1). Recent years have seen a shift from expert-derived models to the integration of machine learning (ML) algorithms to drive the output of these AI systems, owing to the ability of ML models to make more accurate predictions by exploiting high-dimensional and often complex data. In many cases, ML models gain their advantage in accuracy by capturing complex and often non-linear relationships among the features used to make the prediction. However, the hype and excitement around these methods are tempered by the limited clinical utility of black-box solutions, driven by skepticism of results that are difficult for practitioners not only to interpret but also to explain to their patients (1-3). This skepticism is not unfounded: multiple examples of black-box solutions identifying incidental correlates as the key predictors have highlighted biases in a training set, or reward system, that an ML model may exploit; for example, a model discerning wolves from huskies based on snow in the background rather than features of the dogs (1, 4, 5).
Published: July 15, 2021
Citation
Webb-Robertson B.M. 2021. Explainable Artificial Intelligence in Endocrinological Medical Research. Journal of Clinical Endocrinology and Metabolism 106, no. 7: e2809-e2810. PNNL-SA-160901. doi:10.1210/clinem/dgab237