Crowd counting with limited labeling through submodular frame selection

Qi Zhou, Junping Zhang, Lingfu Che, Hongming Shan, James Z. Wang

Research output: Contribution to journal › Article › peer-review

Automated crowd counting is valuable for intelligent transportation systems, as it can help improve emergency planning and prevent congestion in transit hubs such as train stations and airports. Semi-supervised crowd counting aims to estimate the number of pedestrians in an ongoing scene using a combination of a small number of labeled frames and a large number of unlabeled ones. However, existing methods do not incorporate ways to effectively select informative frames as labeled training samples, resulting in low accuracy on unseen crowd scenes. We propose a submodular method to select the most informative frames from image sequences of crowds. Specifically, the method selects the most representative images to guarantee information coverage, by maximizing the similarity between the group of selected images and the full image sequence. In addition, these frames are chosen to avoid redundancy and preserve diversity. Finally, our semi-supervised method incorporates graph Laplacian regularization and spatiotemporal constraints. Extensive experiments on three benchmark data sets demonstrate that our proposed approach achieves higher accuracy than state-of-the-art regression methods and competitive performance with deep convolutional models, especially when the number of labeled frames is exceptionally small.
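The abstract describes a greedy maximization of a submodular objective: a coverage term (similarity between the selected set and the whole sequence) combined with a diversity term that penalizes redundant picks. The sketch below illustrates this general idea with a facility-location coverage function and a pairwise-similarity redundancy penalty; the objective, the `diversity_weight` parameter, and the cosine-similarity choice are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def greedy_frame_selection(features, k, diversity_weight=0.5):
    """Greedily pick k frame indices from an (n, d) feature matrix.

    Coverage term (facility-location style): sum_i max_{j in S} sim(i, j),
    i.e., every frame in the sequence should be similar to some selected
    frame. Redundancy penalty: similarity of a candidate to frames already
    in S. NOTE: a generic sketch, not the paper's exact objective.
    """
    # Cosine similarity between all pairs of frames.
    unit = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = unit @ unit.T
    n = sim.shape[0]

    selected = []
    coverage = np.zeros(n)  # current max similarity of each frame to S
    for _ in range(k):
        best_gain, best_j = -np.inf, -1
        for j in range(n):
            if j in selected:
                continue
            # Marginal coverage gain of adding frame j.
            gain = np.maximum(coverage, sim[:, j]).sum() - coverage.sum()
            # Penalize similarity to already-selected frames (diversity).
            if selected:
                gain -= diversity_weight * sim[j, selected].sum()
            if gain > best_gain:
                best_gain, best_j = gain, j
        selected.append(best_j)
        coverage = np.maximum(coverage, sim[:, best_j])
    return selected
```

Because the coverage term is monotone submodular, this greedy procedure enjoys the classic (1 - 1/e) approximation guarantee for the pure coverage objective; the diversity penalty trades a little coverage for spread across the sequence.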

Original language: English (US)
Article number: 8360780
Pages (from-to): 1728-1738
Number of pages: 11
Journal: IEEE Transactions on Intelligent Transportation Systems
Issue number: 5
State: Published - May 2019

All Science Journal Classification (ASJC) codes

  • Automotive Engineering
  • Mechanical Engineering
  • Computer Science Applications

