BIC-Based Mixture Model Defense Against Data Poisoning Attacks on Classifiers: A Comprehensive Study

Xi Li, David J. Miller, Zhen Xiang, George Kesidis

Research output: Contribution to journal › Article › peer-review

Abstract

Data Poisoning (DP) is an effective attack that causes trained classifiers to misclassify their inputs. DP attacks significantly degrade a classifier's accuracy by covertly injecting attack samples into the training set. We herein propose an unsupervised Bayesian Information Criterion (BIC)-based mixture model defense against 'error-generic' DP attacks that is broadly applicable to different classifier structures and makes no strong assumptions about the attacker. The defense: 1) addresses the most challenging embedded DP scenario wherein, if DP is present, the poisoned samples are an a priori unknown subset of the training set, with no clean validation set available; 2) applies a mixture model both to well-fit potentially multi-modal class distributions and to capture poisoned samples within a small subset of the mixture components; and 3) jointly identifies poisoned components and samples by minimizing the BIC cost defined over the whole training set, with the identified poisoned data removed prior to classifier training. Our experimental results, for various classifier structures and benchmark datasets, demonstrate the effectiveness of our defense under strong DP attacks, as well as its superiority over other DP defenses.
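To make the role of BIC concrete, the following is a simplified sketch, not the paper's joint component/sample optimization: it fits per-class Gaussian mixtures with scikit-learn, selects the number of components by minimizing BIC, and flags the smallest component as a candidate carrier of embedded poisoned samples. The data, the use of `GaussianMixture`, and the small-component heuristic are illustrative assumptions only.

```python
# Illustrative sketch (assumption: NOT the paper's actual algorithm).
# Shows BIC-based mixture model selection on toy "class" data where a
# small, displaced cluster stands in for embedded poisoned samples.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy class-conditional data: one clean mode plus a small poisoned cluster.
clean = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
poison = rng.normal(loc=5.0, scale=0.3, size=(20, 2))
X = np.vstack([clean, poison])

# Select the number of mixture components by minimizing BIC, which
# trades off data fit against model complexity.
best_k, best_bic, best_gmm = None, np.inf, None
for k in range(1, 6):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
    bic = gmm.bic(X)
    if bic < best_bic:
        best_k, best_bic, best_gmm = k, bic, gmm

# Heuristic stand-in for identifying poisoned components: components
# capturing few samples are suspicious, and their samples would be
# removed before classifier training.
labels = best_gmm.predict(X)
counts = np.bincount(labels, minlength=best_k)
suspect = int(np.argmin(counts))
print(f"selected k={best_k}; smallest component holds {counts[suspect]} samples")
```

In the paper's actual defense, the poisoned components and samples are identified jointly by minimizing a BIC cost over the whole training set; this sketch only illustrates why a BIC-selected mixture tends to isolate an anomalous cluster in its own component.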

Original language: English (US)
Pages (from-to): 3697-3711
Number of pages: 15
Journal: IEEE Transactions on Knowledge and Data Engineering
Volume: 36
Issue number: 8
DOIs
State: Published - 2024

All Science Journal Classification (ASJC) codes

  • Information Systems
  • Computer Science Applications
  • Computational Theory and Mathematics
