TY - GEN
T1 - VideoMec
T2 - 16th ACM/IEEE International Conference on Information Processing in Sensor Networks, IPSN 2017
AU - Wu, Yibo
AU - Cao, Guohong
N1 - Funding Information:
We would like to thank our shepherd Dr. Tian He and the anonymous reviewers for their insightful comments and helpful suggestions. This work was supported in part by the National Science Foundation (NSF) under grants CNS-1526425 and CNS-1421578.
Publisher Copyright:
© 2017 ACM.
PY - 2017/4/18
Y1 - 2017/4/18
N2 - The exponential growth of mobile videos has enabled a variety of video crowdsourcing applications. However, existing crowdsourcing approaches require all video files to be uploaded, wasting a large amount of bandwidth since not all crowdsourced videos are useful. Moreover, it is difficult for applications to find desired videos based on user-generated annotations, which can be inaccurate or miss important information. To address these issues, we present VideoMec, a video crowdsourcing system that automatically generates video descriptions based on various geographical and geometrical information, called metadata, from multiple embedded sensors in off-the-shelf mobile devices. With VideoMec, only a small amount of metadata needs to be uploaded to the server, hence reducing the bandwidth and energy consumption of mobile devices. Based on the uploaded metadata, VideoMec supports comprehensive queries for applications to find and fetch desired videos. For time-sensitive applications, it may not be possible to upload all desired videos in time due to limited wireless bandwidth and large video files. Thus, we formalize two optimization problems and propose efficient algorithms to select the most important videos to upload under bandwidth and time constraints. We have implemented a prototype of VideoMec, evaluated its performance, and demonstrated its effectiveness based on real experiments.
UR - http://www.scopus.com/inward/record.url?scp=85019012541&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85019012541&partnerID=8YFLogxK
U2 - 10.1145/3055031.3055089
DO - 10.1145/3055031.3055089
M3 - Conference contribution
AN - SCOPUS:85019012541
T3 - Proceedings - 2017 16th ACM/IEEE International Conference on Information Processing in Sensor Networks, IPSN 2017
SP - 143
EP - 154
BT - Proceedings - 2017 16th ACM/IEEE International Conference on Information Processing in Sensor Networks, IPSN 2017
PB - Association for Computing Machinery, Inc
Y2 - 18 April 2017 through 20 April 2017
ER -