TY - GEN
T1 - VRMN-bD: A Multi-modal Natural Behavior Dataset of Immersive Human Fear Responses in VR Stand-up Interactive Games
T2 - 31st IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2024
AU - Zhang, He
AU - Li, Xinyang
AU - Sun, Yuanxi
AU - Fu, Xinyi
AU - Qiu, Christine
AU - Carroll, John M.
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Understanding and recognizing emotions are important and challenging issues in the metaverse era. The ability to understand, identify, and predict fear, one of the fundamental human emotions, in virtual reality (VR) environments is essential for immersive game development, scene development, and next-generation virtual human-computer interaction applications. In this article, we used VR horror games as a medium to analyze fear responses, collecting multi-modal data (posture, audio, and physiological signals) from 23 players. We used an LSTM-based model to predict fear, achieving accuracies of 65.31% and 90.47% under 6-level classification (no fear and five levels of fear) and 2-level classification (no fear and fear), respectively. We constructed a multi-modal natural behavior dataset of immersive human fear responses (VRMN-bD) and compared it with existing relevant datasets. The results show that our dataset has fewer limitations in terms of collection method, data scale, and audience scope. Our dataset is unique in targeting multi-modal fear and behavior data in VR stand-up interactive environments. Moreover, we discussed the implications of this work for communities and applications. The dataset and pre-trained model are available at https://github.com/KindOPSTAR/VRMN-bD.
UR - http://www.scopus.com/inward/record.url?scp=85191446908&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85191446908&partnerID=8YFLogxK
U2 - 10.1109/VR58804.2024.00054
DO - 10.1109/VR58804.2024.00054
M3 - Conference contribution
AN - SCOPUS:85191446908
T3 - Proceedings - 2024 IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2024
SP - 320
EP - 330
BT - Proceedings - 2024 IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2024
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 16 March 2024 through 21 March 2024
ER -