Causality - Exploiting Multi-Modal Data
Author(s)
Uhler, Caroline
Publisher Policy
This article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Terms of use
Abstract
Massive data collection holds the promise of a better understanding of complex phenomena and, ultimately, of better decisions. Representation learning has become a key driver of deep learning applications, since it allows learning latent spaces that capture important properties of the data without requiring any supervised annotations. While representation learning has been hugely successful in predictive tasks, it can fail miserably in causal tasks, including predicting the effect of an intervention. This calls for a marriage between representation learning and causal inference. An exciting opportunity in this regard stems from the growing availability of multi-modal and interventional data (in medicine, advertising, education, etc.). However, these datasets are still minuscule compared to the action spaces of interest in these applications (e.g., interventions can take on continuous values, such as the dose of a drug, or can be combinatorial, as in combinatorial drug therapies). In this talk, we will present a statistical and computational framework for causal representation learning from multi-modal data and its application to optimal intervention design.
Description
KDD '25, August 3–7, 2025, Toronto, ON, Canada
Date issued
2025-08-03
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
ACM | Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2
Citation
Caroline Uhler. 2025. Causality - Exploiting Multi-Modal Data. In Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2 (KDD '25). Association for Computing Machinery, New York, NY, USA, 3–4.
Version: Final published version
ISBN
979-8-4007-1454-2