TY - GEN
T1 - Get the FACS fast
T2 - 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, ACII 2009
AU - Brick, Timothy R.
AU - Hunter, Michael D.
AU - Cohn, Jeffrey F.
PY - 2009
Y1 - 2009
N2 - Much progress has been made in automated facial image analysis, yet current approaches still lag behind what is possible using manual labeling of facial actions. While many factors may contribute, a key one may be the limited attention to dynamics of facial action. Most approaches classify frames in terms of either displacement from a neutral, mean face or, less frequently, displacement between successive frames (i.e., velocity). In the current paper, we evaluated the hypothesis that attention to dynamics can boost recognition rates. Using the well-known Cohn-Kanade database and support vector machines, adding velocity and acceleration decreased the number of incorrectly classified results by 14.2% and 11.2%, respectively. Average classification accuracy for the displacement and velocity classifier system across all classifiers was 90.2%. Findings were replicated using linear discriminant analysis, which showed a mean decrease of 16.4% in incorrect classifications across classifiers. These findings suggest that information about the dynamics of a movement, that is, the velocity and to a lesser extent the acceleration of a change, can helpfully inform classification of facial expressions.
UR - http://www.scopus.com/inward/record.url?scp=77949346003&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=77949346003&partnerID=8YFLogxK
U2 - 10.1109/ACII.2009.5349600
DO - 10.1109/ACII.2009.5349600
M3 - Conference contribution
AN - SCOPUS:77949346003
SN - 9781424447992
T3 - Proceedings - 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, ACII 2009
BT - Proceedings - 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, ACII 2009
Y2 - 10 September 2009 through 12 September 2009
ER -