BIC-Based Mixture Model Defense Against Data Poisoning Attacks on Classifiers: A Comprehensive Study

Xi Li, David J. Miller, Zhen Xiang, George Kesidis

Research output: Contribution to journal › Article › peer-review


Data Poisoning (DP) is an effective attack that causes trained classifiers to misclassify their inputs. DP attacks significantly degrade a classifier's accuracy by covertly injecting attack samples into the training set. Broadly applicable to different classifier structures, without strong assumptions about the attacker, an unsupervised Bayesian Information Criterion (BIC)-based mixture model defense against "error generic" DP attacks is herein proposed that: 1) addresses the most challenging embedded DP scenario wherein, if DP is present, the poisoned samples are an a priori unknown subset of the training set, and with no clean validation set available; 2) applies a mixture model both to well-fit potentially multi-modal class distributions and to capture poisoned samples within a small subset of the mixture components; 3) jointly identifies poisoned components and samples by minimizing the BIC cost defined over the whole training set, with the identified poisoned data removed prior to classifier training. Our experimental results, for various classifier structures and benchmark datasets, demonstrate the effectiveness of our defense under strong DP attacks, as well as its superiority over other DP defenses.
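The defense idea described above can be illustrated with a simplified sketch. The snippet below is not the paper's joint BIC-minimization procedure; it is a hedged stand-in that captures the two ingredients the abstract names: per-class mixture models whose component count is chosen by minimizing BIC, and removal of components that appear to carry embedded (mislabeled) poisoned samples. All function names, the toy data, and the cross-class likelihood test used to flag components are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy two-class data; class 1 contains an embedded cluster actually drawn
# from class 0's distribution, mimicking an "error generic" DP attack in
# which poisoned samples are an unknown subset of the training set.
clean0 = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
clean1 = rng.normal(loc=[4.0, 4.0], scale=0.5, size=(200, 2))
poison = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(40, 2))  # mislabeled as class 1
X1 = np.vstack([clean1, poison])


def fit_bic_gmm(X, max_k=5, seed=0):
    """Fit GMMs with 1..max_k components and keep the BIC-minimizing one."""
    fits = [GaussianMixture(n_components=k, random_state=seed).fit(X)
            for k in range(1, max_k + 1)]
    return min(fits, key=lambda g: g.bic(X))


gmm0 = fit_bic_gmm(clean0)  # mixture for class 0
gmm1 = fit_bic_gmm(X1)      # mixture for (possibly poisoned) class 1

# Flag class-1 components whose samples are, on average, better explained
# by the class-0 mixture -- a simplified, unsupervised proxy for jointly
# identifying poisoned components and samples via a BIC cost.
labels = gmm1.predict(X1)
poisoned_components = [
    k for k in range(gmm1.n_components)
    if gmm0.score(X1[labels == k]) > gmm1.score(X1[labels == k])
]

# Remove the flagged data before classifier training.
keep = ~np.isin(labels, poisoned_components)
X1_clean = X1[keep]
```

With this toy geometry, the BIC-selected class-1 mixture isolates the mislabeled cluster in its own component, which the cross-class likelihood check then flags, so the sanitized set `X1_clean` consists almost entirely of the genuine class-1 samples.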

Original language: English (US)
Pages (from-to): 1-14
Number of pages: 14
Journal: IEEE Transactions on Knowledge and Data Engineering
State: Accepted/In press - 2024

All Science Journal Classification (ASJC) codes

  • Information Systems
  • Computer Science Applications
  • Computational Theory and Mathematics
