Show simple item record

dc.contributor.author	Sanneman, Lindsay
dc.contributor.author	Shah, Julie A
dc.date.accessioned	2025-11-05T18:30:33Z
dc.date.available	2025-11-05T18:30:33Z
dc.date.issued	2022-06-22
dc.identifier.uri	https://hdl.handle.net/1721.1/163525
dc.description.abstract	Recent advances in artificial intelligence (AI) have drawn attention to the need for AI systems to be understandable to human users. The explainable AI (XAI) literature aims to enhance human understanding and human-AI team performance by providing users with necessary information about AI system behavior. Simultaneously, the human factors literature has long addressed important considerations that contribute to human performance, including how to determine human informational needs, human workload, and human trust in autonomous systems. Drawing from the human factors literature, we propose the Situation Awareness Framework for Explainable AI (SAFE-AI), a three-level framework for the development and evaluation of explanations about AI system behavior. Our proposed levels of XAI are based on the informational needs of human users, which can be determined using the levels of situation awareness (SA) framework from the human factors literature. Based on our levels of XAI framework, we also suggest a method for assessing the effectiveness of XAI systems. We further detail human workload considerations for determining the content and frequency of explanations as well as metrics that can be used to assess human workload. Finally, we discuss the importance of appropriately calibrating user trust in AI systems through explanations along with other trust-related considerations for XAI, and we detail metrics that can be used to evaluate user trust in these systems.	en_US
dc.language.iso	en
dc.publisher	Taylor & Francis	en_US
dc.relation.isversionof	https://doi.org/10.1080/10447318.2022.2081282	en_US
dc.rights	Creative Commons Attribution-NonCommercial-NoDerivatives	en_US
dc.rights.uri	https://creativecommons.org/licenses/by-nc-nd/4.0/	en_US
dc.source	Taylor & Francis	en_US
dc.title	The Situation Awareness Framework for Explainable AI (SAFE-AI) and Human Factors Considerations for XAI Systems	en_US
dc.type	Article	en_US
dc.identifier.citation	Sanneman, L., & Shah, J. A. (2022). The Situation Awareness Framework for Explainable AI (SAFE-AI) and Human Factors Considerations for XAI Systems. International Journal of Human–Computer Interaction, 38(18–20), 1772–1788. https://doi.org/10.1080/10447318.2022.2081282	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Aeronautics and Astronautics	en_US
dc.relation.journal	International Journal of Human–Computer Interaction	en_US
dc.eprint.version	Final published version	en_US
dc.type.uri	http://purl.org/eprint/type/JournalArticle	en_US
eprint.status	http://purl.org/eprint/status/PeerReviewed	en_US
dc.date.updated	2025-11-05T18:11:58Z
dspace.orderedauthors	Sanneman, L; Shah, JA	en_US
dspace.date.submission	2025-11-05T18:12:12Z
mit.journal.volume	38	en_US
mit.journal.issue	18-20	en_US
mit.license	PUBLISHER_CC
mit.metadata.status	Authority Work and Publication Information Needed	en_US

