Abstract
Facial expressions are a crucial facet of human behavior, conveying a wealth of social and emotional cues. Despite their significance, real-time, accurate, and interpretable recognition of facial expressions from multimedia content remains a considerable challenge for computer systems. To address these concerns, we present a novel sparse tagging-like methodology that jointly learns Action Units (AUs) and facial expressions. Our approach treats AU combination recognition as image tagging, significantly reducing computational complexity by relying exclusively on matrix multiplications. To enhance interpretability, we incorporate a sparse term that promotes sparseness of the AU combinations. An evaluation across five benchmark datasets shows that our technique outperforms existing algorithms in speed, interpretability, and robustness while maintaining comparable accuracy. The result is a more efficient solution for real-time facial expression recognition and analysis.
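The abstract describes the method only at a high level, so the sketch below is a minimal illustration of the general idea it names rather than the chapter's actual formulation. It assumes a precomputed image descriptor, hypothetical projection matrices `W_au` (features to AU tag scores) and `W_expr` (AU vector to expression scores), and an L1 term standing in for the sparse penalty on AU combinations; inference reduces to two matrix multiplications, in line with the abstract's claim about computational cost.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not specified in the abstract).
d_feat, n_aus, n_expr = 512, 17, 7   # image feature size, AU tags, expression classes

# Stand-in "learned" projections, drawn at random purely for illustration.
W_au = rng.normal(scale=0.01, size=(d_feat, n_aus))    # features -> AU tag scores
W_expr = rng.normal(scale=0.01, size=(n_aus, n_expr))  # AU vector -> expression scores

def predict(features, l1_weight=0.01):
    """Tag-style inference using only matrix multiplications.

    `features` is a (batch, d_feat) array of image descriptors.
    Returns per-image AU activations, expression scores, and the L1
    term that a sparse training objective would penalize to keep the
    predicted AU combinations sparse.
    """
    au_scores = features @ W_au                    # first matmul: AU tagging
    au_act = 1.0 / (1.0 + np.exp(-au_scores))      # sigmoid -> AU activations in [0, 1]
    expr_scores = au_act @ W_expr                  # second matmul: expression from AUs
    sparsity_penalty = l1_weight * np.abs(au_act).sum(axis=1)
    return au_act, expr_scores, sparsity_penalty

# Toy usage with random features standing in for face descriptors.
feats = rng.normal(size=(4, d_feat))
au_act, expr_scores, penalty = predict(feats)
print(au_act.shape, expr_scores.shape, penalty.shape)  # (4, 17) (4, 7) (4,)
```

Routing expression prediction through the AU activation vector is what makes the pipeline interpretable: each expression score can be read back as a weighted combination of a small number of active AUs once the sparsity term takes effect during training.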
| Original language | English (US) |
| --- | --- |
| Title of host publication | Modeling Visual Aesthetics, Emotion, and Artistic Style |
| Publisher | Springer International Publishing |
| Pages | 105-126 |
| Number of pages | 22 |
| ISBN (Electronic) | 9783031502699 |
| ISBN (Print) | 9783031502682 |
| DOIs | |
| State | Published - Jan 1 2024 |
All Science Journal Classification (ASJC) codes
- General Computer Science
- General Mathematics
- General Arts and Humanities
- General Psychology
- General Social Sciences