TY - JOUR
T1 - DeepMV: Multi-view deep learning for device-free human activity recognition
AU - Xue, Hongfei
AU - Jiang, Wenjun
AU - Miao, Chenglin
AU - Ma, Fenglong
AU - Wang, Shiyang
AU - Yuan, Ye
AU - Yao, Shuochao
AU - Zhang, Aidong
AU - Su, Lu
N1 - Publisher Copyright:
© 2020 Association for Computing Machinery.
PY - 2020/3/18
Y1 - 2020/3/18
N2 - Recently, significant efforts have been made to explore device-free human activity recognition techniques that utilize the information collected by existing indoor wireless infrastructures without requiring the monitored subject to carry a dedicated device. Most of the existing work, however, focuses on analyzing the signal received by a single device. In practice, there are usually multiple devices "observing" the same subject. Each of these devices can be regarded as an information source and provides a unique "view" of the observed subject. Intuitively, combining the complementary information carried by these multiple views should improve activity recognition accuracy. Towards this end, we propose DeepMV, a unified multi-view deep learning framework, to learn informative representations of heterogeneous device-free data. DeepMV combines the information from different views, weighted by the quality of their data, and extracts the commonness shared across different environments to improve recognition performance. To evaluate the proposed DeepMV model, we set up a testbed using commercial WiFi and acoustic devices. Experimental results show that DeepMV effectively recognizes activities and outperforms state-of-the-art human activity recognition methods.
AB - Recently, significant efforts have been made to explore device-free human activity recognition techniques that utilize the information collected by existing indoor wireless infrastructures without requiring the monitored subject to carry a dedicated device. Most of the existing work, however, focuses on analyzing the signal received by a single device. In practice, there are usually multiple devices "observing" the same subject. Each of these devices can be regarded as an information source and provides a unique "view" of the observed subject. Intuitively, combining the complementary information carried by these multiple views should improve activity recognition accuracy. Towards this end, we propose DeepMV, a unified multi-view deep learning framework, to learn informative representations of heterogeneous device-free data. DeepMV combines the information from different views, weighted by the quality of their data, and extracts the commonness shared across different environments to improve recognition performance. To evaluate the proposed DeepMV model, we set up a testbed using commercial WiFi and acoustic devices. Experimental results show that DeepMV effectively recognizes activities and outperforms state-of-the-art human activity recognition methods.
UR - http://www.scopus.com/inward/record.url?scp=85089758458&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85089758458&partnerID=8YFLogxK
U2 - 10.1145/3380980
DO - 10.1145/3380980
M3 - Article
AN - SCOPUS:85089758458
SN - 2474-9567
VL - 4
JO - Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
JF - Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
IS - 1
M1 - 3380980
ER -