TY - JOUR
T1 - HICEM
T2 - A High-Coverage Emotion Model for Artificial Emotional Intelligence
AU - Wortman, Benjamin
AU - Wang, James Z.
N1 - Publisher Copyright:
© 2010-2012 IEEE.
PY - 2024
Y1 - 2024
N2 - As social robots and other intelligent machines enter the home, artificial emotional intelligence (AEI) is taking center stage to address users' desire for deeper, more meaningful human-machine interaction. To accomplish such efficacious interaction, next-generation AEI needs comprehensive human emotion models for training. Unlike theories of emotion, which have been the historical focus in psychology, emotion models are descriptive tools. In practice, the strongest models need robust coverage, which means defining the smallest core set of emotions from which all others can be derived. To achieve the desired coverage, we turn to word embeddings from natural language processing. Using unsupervised clustering techniques, our experiments show that with as few as 15 discrete emotion categories, we can provide maximum coverage across six major languages: Arabic, Chinese, English, French, Spanish, and Russian. In support of our findings, we also examine annotations from two large-scale emotion recognition datasets to assess the validity of existing emotion models compared to human perception at scale. Because robust, comprehensive emotion models are foundational for developing real-world affective computing applications, this work has broad implications in social robotics, human-machine interaction, mental healthcare, computational psychology, and entertainment.
AB - As social robots and other intelligent machines enter the home, artificial emotional intelligence (AEI) is taking center stage to address users' desire for deeper, more meaningful human-machine interaction. To accomplish such efficacious interaction, next-generation AEI needs comprehensive human emotion models for training. Unlike theories of emotion, which have been the historical focus in psychology, emotion models are descriptive tools. In practice, the strongest models need robust coverage, which means defining the smallest core set of emotions from which all others can be derived. To achieve the desired coverage, we turn to word embeddings from natural language processing. Using unsupervised clustering techniques, our experiments show that with as few as 15 discrete emotion categories, we can provide maximum coverage across six major languages: Arabic, Chinese, English, French, Spanish, and Russian. In support of our findings, we also examine annotations from two large-scale emotion recognition datasets to assess the validity of existing emotion models compared to human perception at scale. Because robust, comprehensive emotion models are foundational for developing real-world affective computing applications, this work has broad implications in social robotics, human-machine interaction, mental healthcare, computational psychology, and entertainment.
UR - http://www.scopus.com/inward/record.url?scp=85166323474&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85166323474&partnerID=8YFLogxK
U2 - 10.1109/TAFFC.2023.3324902
DO - 10.1109/TAFFC.2023.3324902
M3 - Article
AN - SCOPUS:85166323474
SN - 1949-3045
VL - 15
SP - 1136
EP - 1152
JO - IEEE Transactions on Affective Computing
JF - IEEE Transactions on Affective Computing
IS - 3
ER -