Likelihood map fusion for visual object tracking

Zhaozheng Yin, Fatih Porikli, Robert Collins

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

47 Scopus citations

Abstract

Visual object tracking can be considered a figure-ground classification task. In this paper, different features are used to generate a set of likelihood maps in which each pixel value indicates the probability that the pixel belongs to the foreground object or to the scene background. For example, intensity, texture, motion, saliency, and template matching can all be used to generate likelihood maps. We propose a generic likelihood map fusion framework that combines these heterogeneous features into a fused soft segmentation suitable for mean-shift tracking. Each component likelihood map contributes to the segmentation according to a classification confidence score (weight) learned from the previous frame. The evidence combination framework dynamically updates the weights so that, in the fused likelihood map, discriminative foreground/background information is preserved while ambiguous information is suppressed. The framework is applied here to track ground vehicles in thermal airborne video and is compared against other state-of-the-art algorithms.
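
The abstract describes a simple fusion rule: each feature channel yields a per-pixel likelihood map, each map is weighted by how confidently it separated foreground from background in the previous frame, and the weighted maps are summed into one soft segmentation on which mean-shift runs. The NumPy sketch below illustrates that pipeline under stated assumptions; the confidence measure (difference between mean foreground and background likelihoods), the uniform-kernel mean-shift step, and all function names (map_confidence, fuse_likelihood_maps, mean_shift) are illustrative choices, not the authors' implementation.

import numpy as np

def map_confidence(likelihood, fg_mask):
    # Assumed separability score: how differently this map responds inside
    # versus outside the previous frame's object region. The paper learns
    # classification confidence scores; this simple proxy stands in for them.
    return abs(likelihood[fg_mask].mean() - likelihood[~fg_mask].mean())

def fuse_likelihood_maps(maps, fg_mask):
    # Weight each heterogeneous likelihood map by its confidence and sum:
    # discriminative maps dominate the fused result, while ambiguous maps
    # are suppressed by near-zero weights.
    weights = np.array([map_confidence(m, fg_mask) for m in maps])
    weights = weights / (weights.sum() + 1e-12)
    fused = sum(w * m for w, m in zip(weights, maps))
    return fused, weights

def mean_shift(likelihood, center, half=8, iters=20):
    # Uniform-kernel mean shift on the fused map: repeatedly move a window
    # to the likelihood-weighted centroid of the pixels it covers.
    h, w = likelihood.shape
    cy, cx = center
    for _ in range(iters):
        y0, y1 = max(cy - half, 0), min(cy + half + 1, h)
        x0, x1 = max(cx - half, 0), min(cx + half + 1, w)
        patch = likelihood[y0:y1, x0:x1]
        ys, xs = np.mgrid[y0:y1, x0:x1]
        total = patch.sum() + 1e-12
        ny = int(round((ys * patch).sum() / total))
        nx = int(round((xs * patch).sum() / total))
        if (ny, nx) == (cy, cx):
            break
        cy, cx = ny, nx
    return cy, cx

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    h, w = 64, 64
    fg_mask = np.zeros((h, w), dtype=bool)
    fg_mask[24:40, 24:40] = True                    # object box from the previous frame
    # One discriminative map (bright on the object) and one ambiguous map.
    informative = 0.1 + 0.7 * fg_mask + 0.1 * rng.random((h, w))
    ambiguous = rng.random((h, w))
    fused, weights = fuse_likelihood_maps([informative, ambiguous], fg_mask)
    print("weights:", weights)                      # discriminative map dominates
    print("mode:", mean_shift(fused, (20, 20)))     # should settle near the object center

With these inputs, nearly all of the weight should go to the discriminative map, and the mean-shift window should drift from (20, 20) toward the object center, mirroring the suppression of ambiguous information that the abstract describes.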

Original language: English (US)
Title of host publication: 2008 IEEE Workshop on Applications of Computer Vision, WACV
State: Published - 2008
Event: 2008 IEEE Workshop on Applications of Computer Vision, WACV - Copper Mountain, CO, United States
Duration: Jan 7, 2008 - Jan 9, 2008

Publication series

Name: 2008 IEEE Workshop on Applications of Computer Vision, WACV


All Science Journal Classification (ASJC) codes

  • Computer Vision and Pattern Recognition
  • Computer Science Applications
