Learning from algorithm-generated pseudo-annotations for detecting ants in videos

Yizhe Zhang, Natalie Imirzian, Christoph Kurze, Hao Zheng, David P. Hughes, Danny Z. Chen

Research output: Contribution to journal › Article › peer-review


Abstract

Deep learning (DL) based detection models are powerful tools for large-scale analysis of dynamic biological behaviors in video data. Supervised training of a DL detection model often requires a large amount of manually labeled training data, which is time-consuming and labor-intensive to acquire. In this paper, we propose LFAGPA (Learn From Algorithm-Generated Pseudo-Annotations), which utilizes (noisy) annotations that are automatically generated by algorithms to train DL models for ant detection in videos. Our method consists of two main steps: (1) generate foreground objects using a (set of) state-of-the-art foreground extraction algorithm(s); (2) treat the results from step (1) as pseudo-annotations and use them to train deep neural networks for ant detection. We tackle several challenges: how to make use of automatically generated noisy annotations, how to learn from multiple annotation resources, and how to combine algorithm-generated annotations with human-labeled annotations (when available) in this learning framework. In experiments, we evaluate our method using 82 videos (20,348 image frames in total) captured under natural conditions in a tropical rainforest for dynamic ant behavior study. With no manual annotation cost, using only algorithm-generated annotations, our method achieves decent detection performance (77% F1 score). Moreover, when using only 10% of the manual annotations, our method can train a DL model that performs as well as one trained on the full set of human annotations (81% F1 score).
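Step (1) of the pipeline above, generating pseudo-annotations by foreground extraction, can be sketched with a simple baseline. The snippet below is a minimal illustration, not the authors' implementation: it assumes a largely static background, estimates it as the per-pixel median over grayscale frames, thresholds the absolute difference to get a foreground mask, and emits one bounding box per frame as a pseudo-annotation. The function name, parameters, and thresholds are illustrative choices, not from the paper.

```python
import numpy as np

def pseudo_annotations(frames, thresh=25, min_area=20):
    """Generate bounding-box pseudo-annotations via median-background subtraction.

    frames: (T, H, W) uint8 grayscale video stack.
    Returns one entry per frame: an (x0, y0, x1, y1) box enclosing all
    foreground pixels, or None when too few pixels exceed the threshold.
    """
    # Per-pixel median over time approximates the static background.
    background = np.median(frames, axis=0)
    boxes = []
    for frame in frames:
        # Foreground = pixels that deviate strongly from the background.
        fg = np.abs(frame.astype(np.int16) - background) > thresh
        if fg.sum() < min_area:
            boxes.append(None)  # no confident detection in this frame
            continue
        ys, xs = np.nonzero(fg)
        boxes.append((int(xs.min()), int(ys.min()),
                      int(xs.max()) + 1, int(ys.max()) + 1))
    return boxes

# Usage: a synthetic clip with one bright blob moving left to right.
frames = np.zeros((8, 32, 32), dtype=np.uint8)
for t in range(8):
    frames[t, 10:15, t * 3:t * 3 + 5] = 255
boxes = pseudo_annotations(frames)
print(boxes[0])  # → (0, 10, 5, 15)
```

These per-frame boxes would then serve as (noisy) training targets in step (2); a real multi-ant setting would additionally need connected-component analysis to split the mask into one box per ant.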

Original language: English (US)
Article number: 11566
Journal: Scientific Reports
Volume: 13
Issue number: 1
DOIs
State: Published - Dec 2023

All Science Journal Classification (ASJC) codes

  • General
