Show simple item record

dc.contributor.author: Doddaiah, Ramesh
dc.contributor.author: Parvatharaju, Prathyush S.
dc.contributor.author: Rundensteiner, Elke
dc.contributor.author: Hartvigsen, Thomas
dc.date.accessioned: 2025-06-13T20:59:54Z
dc.date.available: 2025-06-13T20:59:54Z
dc.date.issued: 2024-03-04
dc.identifier.uri: https://hdl.handle.net/1721.1/159417
dc.description.abstract: Explainability helps users trust deep learning solutions for time series classification. However, existing explainability methods for multi-class time series classifiers focus on one class at a time, ignoring relationships between the classes. Instead, when a classifier is choosing between many classes, an effective explanation must show what sets the chosen class apart from the rest. We now formalize this notion, studying the open problem of class-specific explainability for deep time series classifiers, a challenging and impactful problem setting. We design a novel explainability method, DEMUX, which learns saliency maps for explaining deep multi-class time series classifiers by adaptively ensuring that its explanation spotlights the regions in an input time series that the model uses specifically for its predicted class. DEMUX adopts a gradient-based approach composed of three interdependent modules that combine to generate consistent, class-specific saliency maps that remain faithful to the classifier's behavior yet are easily understood by end users. We demonstrate that DEMUX outperforms nine state-of-the-art alternatives on seven popular datasets when explaining two types of deep time series classifiers. We analyze runtime performance, show the impacts of hyperparameter selection, and introduce a detailed study of perturbation methods for time series. Further, through a case study, we demonstrate that DEMUX's explanations indeed highlight what separates the predicted class from the others in the eyes of the classifier.
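The abstract's central idea, scoring the regions of a time series by what sets the predicted class apart from its strongest competitor rather than by the predicted class alone, can be illustrated with a minimal toy sketch. This is not the DEMUX algorithm (which uses three learned modules over a deep classifier); it is a hypothetical linear stand-in showing the class-contrastive gradient-times-input notion:

```python
import numpy as np

def class_contrastive_saliency(x, W, b):
    """Toy class-specific saliency for a linear softmax classifier.

    Instead of attributing the predicted class's logit alone, score each
    time step by how much it pushes that logit ABOVE the strongest rival
    class -- the "what sets the chosen class apart" idea from the abstract.
    For this linear model the gradient of the margin w.r.t. x is exact.
    """
    logits = W @ x + b                     # shape: (n_classes,)
    pred = int(np.argmax(logits))          # predicted class
    rival = int(np.argsort(logits)[-2])    # strongest competing class
    # d/dx (logit_pred - logit_rival) for a linear model is just the
    # difference of the two weight rows.
    margin_grad = W[pred] - W[rival]
    saliency = np.abs(margin_grad * x)     # gradient * input attribution
    return pred, saliency / (saliency.max() + 1e-12)
```

For a deep classifier, `margin_grad` would instead come from backpropagating the logit margin through the network; the normalization keeps the map comparable across inputs.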
dc.publisher: Springer London
dc.relation.isversionof: https://doi.org/10.1007/s10115-024-02073-y
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
dc.source: Springer London
dc.title: Explaining deep multi-class time series classifiers
dc.type: Article
dc.identifier.citation: Doddaiah, R., Parvatharaju, P.S., Rundensteiner, E. et al. Explaining deep multi-class time series classifiers. Knowl Inf Syst 66, 3497–3521 (2024).
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.relation.journal: Knowledge and Information Systems
dc.eprint.version: Author's final manuscript
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dc.date.updated: 2025-03-27T13:47:35Z
dc.language.rfc3066: en
dc.rights.holder: The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature
dspace.embargo.terms: Y
dspace.date.submission: 2025-03-27T13:47:35Z
mit.journal.volume: 66
mit.license: PUBLISHER_POLICY
mit.metadata.status: Authority Work and Publication Information Needed

