Achieving full-view coverage in camera sensor networks

Yi Wang, Guohong Cao

Research output: Contribution to journal › Article › peer-review

41 Scopus citations

Abstract

Camera sensors differ from traditional scalar sensors in that cameras at different positions can form very different views of the same object. Traditional coverage models, however, do not capture this intrinsic property of camera sensors. To address this issue, a novel model called full-view coverage is proposed. It measures the quality of coverage by the angle between the object's facing direction and the camera's viewing direction: an object is full-view covered if, no matter which direction it faces, there is always a camera that covers it and whose viewing direction is sufficiently close to the object's facing direction. An efficient method is proposed for detecting full-view coverage in any given camera sensor network, and a sufficient condition on the sensor density needed for full-view coverage under random uniform deployment is derived. In addition, the article gives a necessary and sufficient condition on the sensor density for full-view coverage under a triangular lattice-based deployment. Based on the full-view coverage model, the article further studies the barrier coverage problem. Existing weak and strong barrier coverage models are extended to account for direction issues in camera sensor networks. With these new models, weak/strong barrier coverage verification problems are introduced, and new detection methods are proposed and evaluated.
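To make the full-view coverage condition concrete, the following Python sketch (not taken from the article) checks whether a single point is full-view covered under an assumed 2-D model: each camera has a position, an orientation, a sensing radius r, and a field-of-view angle phi, and theta denotes the effective angle bounding how far a covering camera's direction may deviate from the object's facing direction. The interval-sweep formulation and all names here are illustrative assumptions, not the paper's exact detection algorithm.

    import math

    def covers(cam, p, r, phi):
        """True if camera cam = ((x, y), orientation) sees point p, i.e. p lies
        within sensing radius r and inside the field-of-view angle phi."""
        (cx, cy), orient = cam
        dx, dy = p[0] - cx, p[1] - cy
        if math.hypot(dx, dy) > r:
            return False
        ang_to_p = math.atan2(dy, dx)
        diff = abs((ang_to_p - orient + math.pi) % (2 * math.pi) - math.pi)
        return diff <= phi / 2

    def full_view_covered(p, cameras, r, phi, theta):
        """True if, for every facing direction of p, some camera covering p lies
        within the effective angle theta of that direction (theta < pi assumed)."""
        # Each covering camera "serves" an arc of facing directions centered on
        # the direction from p toward the camera: [alpha - theta, alpha + theta].
        segs = []
        for cam in cameras:
            if not covers(cam, p, r, phi):
                continue
            (cx, cy), _ = cam
            alpha = math.atan2(cy - p[1], cx - p[0]) % (2 * math.pi)
            lo, hi = (alpha - theta) % (2 * math.pi), (alpha + theta) % (2 * math.pi)
            if lo <= hi:
                segs.append((lo, hi))
            else:                                  # arc wraps past 2*pi
                segs.append((lo, 2 * math.pi))
                segs.append((0.0, hi))
        # p is full-view covered iff these arcs jointly cover the whole circle.
        segs.sort()
        reach, eps = 0.0, 1e-9
        for lo, hi in segs:
            if lo > reach + eps:                   # gap: some facing direction unserved
                return False
            reach = max(reach, hi)
        return reach >= 2 * math.pi - eps

    # Example: four cameras around the origin, all oriented toward it.
    cams = [((1.5, 0.0), math.pi), ((0.0, 1.5), -math.pi / 2),
            ((-1.5, 0.0), 0.0), ((0.0, -1.5), math.pi / 2)]
    print(full_view_covered((0.0, 0.0), cams, r=2.0, phi=math.pi / 3,
                            theta=math.pi / 3))   # True: every direction is matched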

Original language: English (US)
Article number: 3
Journal: ACM Transactions on Sensor Networks
Volume: 10
Issue number: 1
State: Published - Nov 2013

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
