Deep image synthesis from intuitive user input: A review and perspectives

Yuan Xue, Yuan-Chen Guo, Han Zhang, Tao Xu, Song-Hai Zhang, Xiaolei Huang

Research output: Contribution to journal › Review article › peer-review

19 Scopus citations


In many applications of computer graphics, art, and design, it is desirable for a user to provide intuitive non-image input, such as text, sketch, stroke, graph, or layout, and have a computer system automatically generate photo-realistic images according to that input. While classically, works that allow such automatic image content generation have followed a framework of image retrieval and composition, recent advances in deep generative models such as generative adversarial networks (GANs), variational autoencoders (VAEs), and flow-based methods have enabled more powerful and versatile image generation approaches. This paper reviews recent works for image synthesis given intuitive user input, covering advances in input versatility, image generation methodology, benchmark datasets, and evaluation metrics. This motivates new perspectives on input representation and interactivity, cross fertilization between major image generation paradigms, and evaluation and comparison of generation methods.
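To make the conditional-generation setting concrete, the following is a minimal, illustrative sketch (not taken from the paper) of a GAN-style conditional generator in PyTorch: a user-provided condition such as a text or sketch embedding is fused with a noise vector and decoded into an image. The layer sizes, embedding dimension, and 32×32 output resolution are arbitrary assumptions for illustration only.

```python
# Minimal sketch of a conditional generator (illustrative only, not the
# authors' method). A condition vector derived from intuitive user input
# (e.g., a text embedding) is concatenated with noise and upsampled into
# an image with transposed convolutions.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, noise_dim=100, cond_dim=128, img_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            # project noise + condition to a 4x4 feature map
            nn.ConvTranspose2d(noise_dim + cond_dim, 256, 4, 1, 0),
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1),          # 8x8
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1),           # 16x16
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, img_channels, 4, 2, 1),  # 32x32
            nn.Tanh(),
        )

    def forward(self, noise, cond):
        # fuse the user-input condition with the noise vector
        z = torch.cat([noise, cond], dim=1).unsqueeze(-1).unsqueeze(-1)
        return self.net(z)

# Usage: generate one 32x32 image from a hypothetical condition embedding.
g = ConditionalGenerator()
fake = g(torch.randn(1, 100), torch.randn(1, 128))
print(fake.shape)  # torch.Size([1, 3, 32, 32])
```

In practice, the reviewed methods replace the random condition vector above with learned encodings of text, sketches, strokes, graphs, or layouts, and train the generator adversarially against a discriminator that also sees the condition.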

Original language: English (US)
Pages (from-to): 3-31
Number of pages: 29
Journal: Computational Visual Media
Issue number: 1
State: Published - Mar 2022

All Science Journal Classification (ASJC) codes

  • Computer Vision and Pattern Recognition
  • Computer Graphics and Computer-Aided Design
  • Artificial Intelligence

