GreedyFool: Multi-factor imperceptibility and its application to designing a black-box adversarial attack

Hui Liu, Bo Zhao, Minzhi Ji, Mengchen Li, Peng Liu

Research output: Contribution to journal › Article › peer-review

5 Scopus citations


Adversarial examples are carefully crafted input samples containing perturbations that are imperceptible to the human eye yet easily mislead the output of deep neural networks (DNNs). Existing studies synthesize adversarial examples by leveraging simple metrics to penalize perturbations; these metrics lack sufficient consideration of the human visual system (HVS) and thus produce noticeable artifacts. To explore why such perturbations are visible, this paper summarizes four primary factors that affect the perceptibility of the human eye. Based on this investigation, we design a multi-factor metric, MulFactorLoss, for measuring the perceptual loss between benign and adversarial examples. To test the imperceptibility of the multi-factor metric, we propose a novel black-box adversarial attack referred to as GreedyFool. GreedyFool applies differential evolution to evaluate the effect of perturbed pixels on the confidence of a target DNN and introduces a greedy approximation to automatically generate adversarial perturbations. We conduct extensive experiments on the ImageNet and CIFAR-10 datasets and a comprehensive user study with 60 participants. The experimental results demonstrate that MulFactorLoss is a more imperceptible metric than existing pixelwise metrics and that GreedyFool achieves a 100% success rate in a black-box manner.
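As a rough illustration of the black-box setting the abstract describes, the sketch below uses differential evolution (DE/rand/1) to search for a single-pixel perturbation that lowers a model's confidence, queried only as a black box. This is a minimal toy, not the paper's algorithm: `toy_model`, the population layout, and all hyperparameters are illustrative assumptions, and the real GreedyFool additionally uses a greedy approximation and the MulFactorLoss metric.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(img):
    # Stand-in black box (an assumption, not the paper's model): confidence
    # of the "true" class drops as the mean intensity drifts from 0.5.
    return 1.0 / (1.0 + np.abs(img.mean() - 0.5) * 10.0)

def perturb(img, cand):
    # cand = (x, y, delta): change one pixel by delta, clipped to [0, 1].
    out = img.copy()
    x, y, d = int(cand[0]), int(cand[1]), cand[2]
    out[x, y] = np.clip(out[x, y] + d, 0.0, 1.0)
    return out

def de_pixel_search(img, pop_size=20, iters=30, F=0.5):
    h, w = img.shape
    # Population: candidate single-pixel perturbations encoded as (x, y, delta).
    pop = np.column_stack([
        rng.integers(0, h, pop_size),
        rng.integers(0, w, pop_size),
        rng.uniform(-1.0, 1.0, pop_size),
    ]).astype(float)
    fitness = np.array([toy_model(perturb(img, c)) for c in pop])
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            trial = a + F * (b - c)                  # DE/rand/1 mutation
            trial[0] = np.clip(trial[0], 0, h - 1)
            trial[1] = np.clip(trial[1], 0, w - 1)
            trial[2] = np.clip(trial[2], -1.0, 1.0)
            f = toy_model(perturb(img, trial))
            if f < fitness[i]:                       # lower confidence = better attack
                pop[i], fitness[i] = trial, f
    best = pop[np.argmin(fitness)]
    return best, fitness.min()

img = np.full((8, 8), 0.5)
best, conf = de_pixel_search(img)
```

Only `toy_model` is ever queried for a score, which is what makes the search black-box: no gradients of the target network are needed, matching the query-only access the abstract assumes.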

Original language: English (US)
Pages (from-to): 717-730
Number of pages: 14
Journal: Information Sciences
State: Published - Oct 2022

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Software
  • Control and Systems Engineering
  • Computer Science Applications
  • Information Systems and Management
  • Artificial Intelligence

