TY - GEN
T1 - Watch the Watchers! On the Security Risks of Robustness-Enhancing Diffusion Models
AU - Li, Changjiang
AU - Pang, Ren
AU - Cao, Bochuan
AU - Chen, Jinghui
AU - Ma, Fenglong
AU - Ji, Shouling
AU - Wang, Ting
N1 - Publisher Copyright:
© 2025 by The USENIX Association. All Rights Reserved.
PY - 2025
Y1 - 2025
N2 - Thanks to their remarkable denoising capabilities, diffusion models are increasingly being employed as defensive tools to reinforce the robustness of other models, notably in purifying adversarial examples and certifying adversarial robustness. However, the potential risks of these practices remain largely unexplored, which is highly concerning. To bridge this gap, this work investigates the vulnerability of robustness-enhancing diffusion models. Specifically, we demonstrate that these models are highly susceptible to DIFF2, a simple yet effective attack, which substantially diminishes their robustness assurance. Essentially, DIFF2 integrates a malicious diffusion-sampling process into the diffusion model, guiding inputs embedded with specific triggers toward an adversary-defined distribution while preserving the normal functionality for clean inputs. Our case studies on adversarial purification and robustness certification show that DIFF2 can significantly reduce both post-purification and certified accuracy across benchmark datasets and models, highlighting the potential risks of relying on pre-trained diffusion models as defensive tools. We further explore possible countermeasures, suggesting promising avenues for future research.
AB - Thanks to their remarkable denoising capabilities, diffusion models are increasingly being employed as defensive tools to reinforce the robustness of other models, notably in purifying adversarial examples and certifying adversarial robustness. However, the potential risks of these practices remain largely unexplored, which is highly concerning. To bridge this gap, this work investigates the vulnerability of robustness-enhancing diffusion models. Specifically, we demonstrate that these models are highly susceptible to DIFF2, a simple yet effective attack, which substantially diminishes their robustness assurance. Essentially, DIFF2 integrates a malicious diffusion-sampling process into the diffusion model, guiding inputs embedded with specific triggers toward an adversary-defined distribution while preserving the normal functionality for clean inputs. Our case studies on adversarial purification and robustness certification show that DIFF2 can significantly reduce both post-purification and certified accuracy across benchmark datasets and models, highlighting the potential risks of relying on pre-trained diffusion models as defensive tools. We further explore possible countermeasures, suggesting promising avenues for future research.
UR - https://www.scopus.com/pages/publications/105021312149
M3 - Conference contribution
AN - SCOPUS:105021312149
T3 - Proceedings of the 34th USENIX Security Symposium
SP - 997
EP - 1016
BT - Proceedings of the 34th USENIX Security Symposium
PB - USENIX Association
T2 - 34th USENIX Security Symposium, USENIX Security 2025
Y2 - 13 August 2025 through 15 August 2025
ER -