TY - GEN
T1 - Barrier coverage in camera sensor networks
AU - Wang, Yi
AU - Cao, Guohong
PY - 2011
Y1 - 2011
N2 - Barrier coverage has attracted much attention in the past few years. However, most previous works focused on traditional scalar sensors. We propose to study barrier coverage in camera sensor networks. One fundamental difference between cameras and scalar sensors is that cameras at different positions can form quite different views of an object. As a result, simply combining the sensing ranges of the cameras across the field does not necessarily form an effective camera barrier, since the face image (or the aspect of interest) of the object may be missed. To address this problem, we use the angle between the object's facing direction and the camera's viewing direction to measure the quality of sensing. An object is full-view covered if, no matter which direction it faces, there is always a camera covering it whose viewing direction is sufficiently close to the object's facing direction. We study the problem of constructing a camera barrier, which is essentially a connected zone across the monitored field such that every point within this zone is full-view covered. We propose a novel method to select camera sensors from an arbitrary deployment to form a camera barrier, and present redundancy reduction techniques to effectively reduce the number of cameras used. We also present techniques to deploy cameras for barrier coverage in a deterministic environment, and analyze and optimize the number of cameras required for this deployment under various parameters.
UR - http://www.scopus.com/inward/record.url?scp=84863120227&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84863120227&partnerID=8YFLogxK
U2 - 10.1145/2107502.2107518
DO - 10.1145/2107502.2107518
M3 - Conference contribution
AN - SCOPUS:84863120227
SN - 9781450307222
T3 - Proceedings of the International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc)
BT - Proceedings of the 12th ACM International Symposium on Mobile Ad Hoc Networking and Computing, MobiHoc'11
T2 - 12th ACM International Symposium on Mobile Ad Hoc Networking and Computing, MobiHoc'11
Y2 - 17 May 2011 through 19 May 2011
ER -