TY - GEN
T1 - ASLRing
T2 - 9th ACM/IEEE Conference on Internet-of-Things Design and Implementation, IoTDI 2024
AU - Zhou, Hao
AU - Lu, Taiting
AU - Dehaan, Kenneth
AU - Gowda, Mahanth
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Sign Language is widely used by over 500 million Deaf and hard of hearing (DHH) individuals in their daily lives. While prior works made notable efforts to show the feasibility of recognizing signs with various sensing modalities from both the wireless and wearable domains, they recruited sign language learners for validation. Based on our interactions with native sign language users, we found that signing diversity hinders generalization across users (e.g., users from different backgrounds interpret signs differently, and native users produce complex, articulated signs), thus resulting in recognition difficulty. While multiple solutions (e.g., increasing the diversity of data, harvesting virtual data from sign videos) are possible, we propose ASLRing, which addresses the sign language recognition problem from a meta-learning perspective by learning inherent knowledge about the diverse space of signs for fast adaptation. ASLRing bypasses the expensive data collection process and avoids the limitations of leveraging virtual data from sign videos (e.g., occlusions, overexposure, low resolution). To validate ASLRing, instead of recruiting learners, we conducted a comprehensive user study with a database of 1080 sentences generated from a vocabulary of 1057 words by 14 native sign language users and achieved a 26.9% word error rate.
AB - Sign Language is widely used by over 500 million Deaf and hard of hearing (DHH) individuals in their daily lives. While prior works made notable efforts to show the feasibility of recognizing signs with various sensing modalities from both the wireless and wearable domains, they recruited sign language learners for validation. Based on our interactions with native sign language users, we found that signing diversity hinders generalization across users (e.g., users from different backgrounds interpret signs differently, and native users produce complex, articulated signs), thus resulting in recognition difficulty. While multiple solutions (e.g., increasing the diversity of data, harvesting virtual data from sign videos) are possible, we propose ASLRing, which addresses the sign language recognition problem from a meta-learning perspective by learning inherent knowledge about the diverse space of signs for fast adaptation. ASLRing bypasses the expensive data collection process and avoids the limitations of leveraging virtual data from sign videos (e.g., occlusions, overexposure, low resolution). To validate ASLRing, instead of recruiting learners, we conducted a comprehensive user study with a database of 1080 sentences generated from a vocabulary of 1057 words by 14 native sign language users and achieved a 26.9% word error rate.
UR - http://www.scopus.com/inward/record.url?scp=85197798118&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85197798118&partnerID=8YFLogxK
U2 - 10.1109/IoTDI61053.2024.00022
DO - 10.1109/IoTDI61053.2024.00022
M3 - Conference contribution
AN - SCOPUS:85197798118
T3 - Proceedings - 9th ACM/IEEE Conference on Internet-of-Things Design and Implementation, IoTDI 2024
SP - 203
EP - 214
BT - Proceedings - 9th ACM/IEEE Conference on Internet-of-Things Design and Implementation, IoTDI 2024
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 13 May 2024 through 16 May 2024
ER -