Current video-based trackers can track robustly over thousands of video frames. This work seeks to develop trackers that operate over two orders of magnitude more time (hours). Such long-term object tracking must be resilient to large changes in the appearance of both the object and the surrounding environment. This requires raising the level of abstraction at which the tracker represents its target: the goal must be tracking 'objects', not image templates or distributions of color.

The intellectual merit of this effort is to achieve persistent object tracking through novel research spanning on-line feature selection, foreground/background segmentation, and object model learning and recognition. Flexible appearance-based object descriptors are developed that automatically adapt to changes in object and background appearance. Shape-constrained figure/ground segmentation is performed to avoid model drift during adaptation. Object models are learned on-the-fly during tracking and used to search for and recognize the same object again after occlusion or tracking failure.

Development of this technology for persistent object tracking has broad impact in commercial applications such as traffic monitoring, motion capture, and automated surveillance, as well as law enforcement and military applications in trailing suspects and combatants. The project promotes scientific repeatability, code sharing, and dissemination of results by maintaining a tracking-evaluation web site that provides open-source tracking code, benchmark datasets, and a mechanism for online evaluation.
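To make the drift-avoidance idea concrete, the following is a minimal sketch of one common way to adapt an appearance model while guarding against drift: blend a new observation into the model only through a figure/ground mask, and freeze adaptation when the observation no longer resembles the model (e.g., under occlusion). The function names, the grayscale-histogram representation, and the thresholds here are illustrative assumptions, not the project's actual algorithms.

```python
import numpy as np

def color_histogram(patch, mask, bins=8):
    """Normalized 8-bin intensity histogram over pixels where mask is True.
    The mask plays the role of a figure/ground segmentation: background
    pixels never contaminate the appearance model."""
    vals = patch[mask]
    hist, _ = np.histogram(vals, bins=bins, range=(0, 256))
    total = hist.sum()
    return hist / total if total else hist.astype(float)

def bhattacharyya(p, q):
    """Similarity in [0, 1] between two normalized histograms."""
    return float(np.sum(np.sqrt(p * q)))

def update_model(model, patch, fg_mask, alpha=0.1, min_sim=0.5):
    """Adapt the model toward the masked observation with rate alpha,
    but freeze (return the model unchanged) when similarity drops below
    min_sim -- a simple stand-in for drift avoidance during occlusion
    or tracking failure. alpha and min_sim are illustrative values."""
    obs = color_histogram(patch, fg_mask)
    if bhattacharyya(model, obs) < min_sim:
        return model  # likely occlusion/failure: do not adapt
    return (1 - alpha) * model + alpha * obs

if __name__ == "__main__":
    mask = np.ones((4, 4), dtype=bool)
    model = color_histogram(np.full((4, 4), 100), mask)
    # A dissimilar (occluded) patch leaves the model untouched:
    frozen = update_model(model, np.full((4, 4), 250), mask)
    print(np.allclose(frozen, model))
```

The gating step is the key design choice: adapting unconditionally would let background or occluder pixels gradually replace the target's appearance, which is exactly the model-drift failure mode the abstract describes.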
Effective start/end date: 8/1/05 → 7/31/09
- National Science Foundation: $301,506.00