dc.contributor.author | Da, Longchao | |
dc.contributor.author | Wang, Rui | |
dc.contributor.author | Xu, Xiaojian | |
dc.contributor.author | Bhatia, Parminder | |
dc.contributor.author | Kass-Hout, Taha | |
dc.contributor.author | Wei, Hua | |
dc.contributor.author | Xiao, Cao | |
dc.date.accessioned | 2025-09-09T20:43:19Z | |
dc.date.available | 2025-09-09T20:43:19Z | |
dc.date.issued | 2025-08-03 | |
dc.identifier.isbn | 979-8-4007-1454-2 | |
dc.identifier.uri | https://hdl.handle.net/1721.1/162623 | |
dc.description | KDD ’25, August 3–7, 2025, Toronto, ON, Canada | en_US |
dc.description.abstract | Medical imaging is crucial for diagnosing a patient's health condition, and accurate segmentation of these images is essential for isolating regions of interest to ensure precise diagnosis and treatment planning. Existing methods primarily rely on bounding boxes or point-based prompts, while few have explored text-related prompts, despite clinicians often describing their observations and instructions in natural language. To address this gap, we first propose a RAG-based free-form text prompt generator that leverages the domain corpus to generate diverse and realistic descriptions. Then, we introduce FLanS, a novel medical image segmentation model that handles various free-form text prompts, including professional anatomy-informed queries, anatomy-agnostic position-driven queries, and anatomy-agnostic size-driven queries. Additionally, our model incorporates a symmetry-aware canonicalization module to ensure consistent, accurate segmentations across varying scan orientations and to reduce confusion between the anatomical position of an organ and its appearance in the scan. FLanS is trained on a large-scale dataset of over 100k medical images from 7 public datasets. Comprehensive experiments demonstrate the model's superior language understanding and segmentation precision, along with a deep comprehension of the relationship between them, outperforming SOTA baselines on both in-domain and out-of-domain datasets. | en_US |
dc.publisher | ACM|Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2 | en_US |
dc.relation.isversionof | https://doi.org/10.1145/3711896.3736963 | en_US |
dc.rights | Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. | en_US |
dc.source | Association for Computing Machinery | en_US |
dc.title | FLanS: A Foundation Model for Free-Form Language-based Segmentation in Medical Images | en_US |
dc.type | Article | en_US |
dc.identifier.citation | Longchao Da, Rui Wang, Xiaojian Xu, Parminder Bhatia, Taha Kass-Hout, Hua Wei, and Cao Xiao. 2025. FLanS: A Foundation Model for Free-Form Language-based Segmentation in Medical Images. In Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2 (KDD '25). Association for Computing Machinery, New York, NY, USA, 404–414. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US |
dc.identifier.mitlicense | PUBLISHER_POLICY | |
dc.eprint.version | Final published version | en_US |
dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
dc.date.updated | 2025-09-01T07:50:48Z | |
dc.language.rfc3066 | en | |
dc.rights.holder | The author(s) | |
dspace.date.submission | 2025-09-01T07:50:48Z | |
mit.license | PUBLISHER_CC | |
mit.metadata.status | Authority Work and Publication Information Needed | en_US |