S2S2: Semantic Stacking for Robust Semantic Segmentation in Medical Imaging

Yimu Pan, Sitao Zhang, Alison D. Gernand, Jeffery A. Goldstein, James Wang

Research output: Contribution to journal › Conference article › peer-review

Abstract

Robustness and generalizability in medical image segmentation are often hindered by the scarcity and limited diversity of training data, which stands in contrast to the variability encountered during inference. While conventional strategies, such as domain-specific augmentation, specialized architectures, and tailored training procedures, can alleviate these issues, they depend on the availability and reliability of domain knowledge. When such knowledge is unavailable, misleading, or improperly applied, performance may deteriorate. In response, we introduce a novel, domain-agnostic, add-on, data-driven strategy inspired by image stacking in image denoising. Termed "semantic stacking," our method estimates a denoised semantic representation that complements the conventional segmentation loss during training. This method does not depend on domain-specific assumptions, making it broadly applicable across diverse image modalities, model architectures, and augmentation techniques. Through extensive experiments, we validate the superiority of our approach in improving segmentation performance under diverse conditions.
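To make the idea concrete, the following is a minimal, hypothetical sketch of how "semantic stacking" could be realized as an add-on training loss, interpreting the abstract's analogy to image stacking as averaging predictions over multiple perturbed views to obtain a denoised semantic target. The function name, the Gaussian-noise perturbation, and the loss weighting are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code): average softmax predictions over
# several perturbed views of the same image to form a "stacked" (denoised)
# semantic estimate, then add a consistency term to the usual segmentation loss.
import torch
import torch.nn.functional as F

def semantic_stacking_loss(model, image, mask, num_views=4, weight=0.5):
    """Standard segmentation loss plus a stacked-consistency term (assumed form)."""
    # Supervised segmentation loss on the original image.
    logits = model(image)                     # (B, C, H, W)
    seg_loss = F.cross_entropy(logits, mask)  # mask: (B, H, W) with class indices

    # Build several lightly perturbed views; Gaussian noise stands in for
    # whatever augmentation pipeline is actually used.
    view_probs = []
    for _ in range(num_views):
        noisy = image + 0.05 * torch.randn_like(image)
        view_probs.append(F.softmax(model(noisy), dim=1))

    # "Stack" the views: averaging the softmax maps plays the role of averaging
    # exposures in image stacking, yielding a denoised semantic estimate.
    stacked = torch.stack(view_probs, dim=0).mean(dim=0).detach()

    # Pull each view's prediction toward the stacked estimate.
    consistency = sum(F.mse_loss(p, stacked) for p in view_probs) / num_views

    return seg_loss + weight * consistency
```

Because the extra term only requires forward passes through the same model, such a scheme would layer on top of any architecture or augmentation pipeline, which is consistent with the abstract's claim of being domain-agnostic and add-on.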

Original language: English (US)
Pages (from-to): 6335-6344
Number of pages: 10
Journal: Proceedings of the AAAI Conference on Artificial Intelligence
Volume: 39
Issue number: 6
DOIs
State: Published - Apr 11 2025
Event: 39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025 - Philadelphia, United States
Duration: Feb 25 2025 - Mar 4 2025

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence

