| dc.contributor.author | Suh, Ashley | |
| dc.contributor.author | Hurley, Isabelle | |
| dc.contributor.author | Smith, Nora | |
| dc.contributor.author | Siu, Ho Chit | |
| dc.date.accessioned | 2025-12-17T15:30:46Z | |
| dc.date.available | 2025-12-17T15:30:46Z | |
| dc.date.issued | 2025-04-25 | |
| dc.identifier.isbn | 979-8-4007-1395-8 | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/164373 | |
| dc.description | CHI EA ’25, Yokohama, Japan | en_US |
| dc.description.abstract | This late-breaking work presents a large-scale analysis of explainable AI (XAI) literature to evaluate claims of human explainability. We collaborated with a professional librarian to identify 18,254 papers containing keywords related to explainability and interpretability. Of these, we find that only 253 papers included terms suggesting human involvement in evaluating an XAI technique, and just 128 of those conducted some form of human study. In other words, fewer than 1% of XAI papers (0.7%) provide empirical evidence of human explainability when compared to the broader body of XAI literature. Our findings underscore a critical gap between claims of human explainability and evidence-based validation, raising concerns about the rigor of XAI research. We call for increased emphasis on human evaluations in XAI studies and provide our literature search methodology to enable both reproducibility and further investigation into this widespread issue. | en_US |
| dc.publisher | ACM|Extended Abstracts of the CHI Conference on Human Factors in Computing Systems | en_US |
| dc.relation.isversionof | https://doi.org/10.1145/3706599.3719964 | en_US |
| dc.rights | Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. | en_US |
| dc.source | Association for Computing Machinery | en_US |
| dc.title | Fewer Than 1% of Explainable AI Papers Validate Explainability with Humans | en_US |
| dc.type | Article | en_US |
| dc.identifier.citation | Ashley Suh, Isabelle Hurley, Nora Smith, and Ho Chit Siu. 2025. Fewer Than 1% of Explainable AI Papers Validate Explainability with Humans. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA '25). Association for Computing Machinery, New York, NY, USA, Article 276, 1–7. | en_US |
| dc.contributor.department | Lincoln Laboratory | en_US |
| dc.identifier.mitlicense | PUBLISHER_POLICY | |
| dc.eprint.version | Final published version | en_US |
| dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
| eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
| dc.date.updated | 2025-08-01T08:24:47Z | |
| dc.language.rfc3066 | en | |
| dc.rights.holder | The author(s) | |
| dspace.date.submission | 2025-08-01T08:24:48Z | |
| mit.license | PUBLISHER_POLICY | |
| mit.metadata.status | Authority Work and Publication Information Needed | en_US |