Spontaneous Facial Behavior Analysis:

Long-term continuous analysis of facial expressions and micro-expressions

Workshop to be held in conjunction with ECCV 2014, Zurich, September 6

Special issue of Computer Vision and Image Understanding on Spontaneous Facial Behaviour Analysis (SFBA)

Submission deadline: February 27th, 2015.

Overview

The face is not only one of the most cogent, naturally pre-eminent means used by human beings to recognize a person, but also a means of communicating emotions and intentions and of regulating interactions with the environment and other persons in the vicinity. It has been estimated that the non-verbal facial behavior of a speaker, manifested through expressions, contributes more to the effect of the spoken message than the verbal and vocal parts combined. Hence, facial expressions play a key role in verbal and non-verbal communication. Furthermore, according to psychological studies, micro-expressions are important cues for certain behaviors, such as deception and stress, since they often represent emotional leakage that escapes behavioral control.

Micro-expressions, which are very rapid and subtle involuntary facial expressions, occur when an emotion is of lower intensity, and are much more difficult to read and to fake. Moreover, changing facial expressions is not only a natural and powerful way of conveying personal intention, expressing emotion and regulating interpersonal communication, but also an important cue to personality. Automatic recognition of expressions and estimation of their intensity is an important step in enhancing the capability of human-machine/robot interfaces. The goal of this workshop is to provide an interdisciplinary forum that brings together up-to-date psychological and computer vision approaches to spontaneous expression and micro-expression analysis in a focused manner, to provide a venue for the dissemination of significant research work and innovative practice, and to encourage exchanges, interactions and possible collaboration between participants.


Dates
Paper submission deadline: June 27th, 2014
Author notification: July 27th, 2014
Camera-ready papers due: August 11th, 2014

 

Submission
This workshop aims to solicit research contributions related to the analysis of expressions and micro-expressions from continuous video, together with their applications. Submissions that address real-world applications are especially encouraged. Tentative topics of interest include, but are not limited to:
  • Spontaneous expression and micro-expression databases: collection and annotation
  • Micro-expression detection, recognition and understanding
  • Long-term spontaneous expression analysis for behavior understanding
  • Long-term expression analysis for personality assessment
  • Intensity estimation for continuous expression analysis
  • Role of multimodality in emotion understanding
  • Applications of spontaneous expression and micro-expression analysis
Papers must be submitted online through the CMT submission system (https://cmt.research.microsoft.com/SFBA2014/Default.aspx) and will be peer reviewed by at least three reviewers. Submissions should adhere to the main ECCV 2014 proceedings style and have a maximum length of 10 pages. Papers accepted and presented at the workshop will be published in the ECCV 2014 conference proceedings.

 

Programme
8:30-8:40 Opening words
8:40-9:40 Invited talk 1: Predicting decisions and intentions from spontaneous facial expressions by Prof. Marian Bartlett
9:40-10:10 Oral 1: Statistically Learned Deformable Eye Models. Joan Alabort-i-Medina, Stefanos Zafeiriou, and Bingqing Qu. (Imperial College London, UK).
10:10-10:30 Break
10:30-11:30 Invited talk 2: Facial behaviour in communication (tentative) by Prof. Richard Bowden
11:30-12:00 Oral 2: Quantifying Micro-expressions with Constraint Local Model and Local Binary Pattern. Wen-Jing Yan, Su-Jing Wang, Yu-Hsin Chen, Guoying Zhao, and Xiaolan Fu. (Chinese Academy of Sciences and University of Oulu, Finland)
12:00-14:00 Lunch
14:00-14:50 Invited talk 3: Automated Face Analysis for Affective Computing by Prof. Jeffrey Cohn
14:50-15:20 Oral 3: Audiovisual Conflict Detection in Political Debates. Yannis Panagakis, Stefanos Zafeiriou, and Maja Pantic. (Imperial College London, UK).
15:20-15:40 Break
15:40-16:40 Invited talk 4: Computational Face (tentative) by Prof. Fernando De la Torre
16:40-17:10 Oral 4: Analysing user visual implicit feedback in enhanced TV scenarios. Ioan Marius Bilasco, Adel Lablack, and Taner Danisman. (Université Lille 1, France).
17:10-17:40 Oral 5: Micro-expression Recognition using Robust Principal Component Analysis and Local Spatiotemporal Directional Features. Su-Jing Wang, Wen-Jing Yan, Guoying Zhao, and Xiaolan Fu. (Chinese Academy of Sciences and University of Oulu, Finland)

 

Invited Speakers (Tentative)
Marian S. Bartlett, University of California, San Diego, and Emotient Inc, USA

Title of the talk: Predicting decisions and intentions from spontaneous facial expressions (tentative)

Abstract:

Spontaneous facial expressions contain information that can reveal our decisions and intentions. In this talk, I will describe three recent studies from my lab using computer vision and machine learning to predict decisions, intentions, and even the ability to perceive deception in others. First is a study on the detection of faked versus genuine facial expressions of pain. Real and faked expressions of pain tend to involve the same facial muscles, but they are driven by different neuro-motor systems that differ in their dynamics. Hence, much of the signal for differentiating real from faked pain is in the dynamics. I will describe methods for analyzing the facial expression time series to extract these differences in dynamics. The resulting system predicts faked pain better than human observers do. Next, I will describe a study to predict the ability to detect faked pain in others by measuring spontaneous mimicry. Spontaneous mimicry is the tendency to contract one's own facial muscles to match the facial expressions of others, and it has been associated with the ability to understand others' feelings. We show that facial mimicry correlates with the ability to detect when pain is faked or real. Lastly, I will review a new line of research studying facial behavior in neuroeconomics. These studies find that we can predict from the face when a financial offer is considered low. Moreover, we can predict whether a low offer will be accepted or rejected better than human observers can. The timescale of facial signals turns out to be important: the predictive signals occur at timescales of less than 1 second, whereas human observers' decisions are related to facial signals on longer timescales of 1-2 seconds.

 

Richard Bowden, University of Surrey, UK
Title of the talk: Facial behaviour in communication (tentative)

Abstract (tentative):

Facial expressions and other non-verbal cues form a major part of human-to-human communication, and yet in many cases their exact role and the rules that govern them are still the subject of ongoing linguistic research. While automatic recognition of the classical Ekman expressions is a well-studied area, its use in areas such as sign language recognition and machine translation is still in its infancy. However, we are now reaching the point where tracking, feature extraction and machine learning have given us the tools to investigate the more subtle role expression plays in communication. This talk will discuss recent developments in tracking, covering linear predictor tracking, non-linear predictors and linear cascades, which can provide accurate person-dependent and person-independent estimation of facial features over pose. We will discuss the effect pose has on recognition of expression, and how classification can be extended to non-verbal cues other than emotion. By looking for implicit rules within the data, these can be used to drive plausible animation from audio or to drive social models of interaction. We will discuss recent developments in lip-reading: learning spatiotemporal patterns that can be used to spot isolated utterances of words on the lips, or continuous recognition when combined with tools from speech recognition. Finally, recent work on recognizing signer-independent mouthings in continuous sign will be presented, providing dedicated viseme recognition in the context of sign language recognition.

 

Jeffrey Cohn, University of Pittsburgh, USA

Title of the talk: Automated Face Analysis for Affective Computing (tentative)

Abstract:

Facial expression communicates emotion, intention, and physical state, and regulates interpersonal behavior. Automated Face Analysis (AFA) for the detection, synthesis, and understanding of facial expression is a vital focus of basic research. The field has become sufficiently mature to support initial applications in clinical and developmental science and in commerce. I review human-observer-based approaches to facial measurement that inform AFA, recent findings on the relation between facial expression, vocal prosody, and depression severity, and current challenges. I give special attention to the generalizability of AFA, performance metrics, and a better understanding of operational parameters.

 

Fernando De la Torre, Robotics Institute at CMU, USA

Title of the talk: Computational Face (tentative)

Abstract:

The face is one of the most powerful channels of non-verbal communication. Facial expression provides cues about emotion, intention, alertness, pain, and personality; regulates interpersonal behavior; and communicates psychiatric and biomedical status, among other functions. Within the past 30 years, there has been increasing interest in automated methods for facial image analysis from video. In this talk, I will discuss recent advances in machine learning techniques for facial expression analysis. In particular, I will review recent methods developed in the Human Sensing Laboratory (www.humansensing.cs.cmu.edu) for facial feature detection, algorithms for supervised facial expression detection (e.g., personalization of facial classifiers, early facial event detection, sample selection for action unit detection), and unsupervised methods for facial behavior analysis.

 

Organizers
Guoying Zhao, University of Oulu, Finland
Stefanos Zafeiriou, Imperial College London, UK
Matti Pietikäinen, University of Oulu, Finland
Maja Pantic, Imperial College London, UK

 

Program Committee
Richard Bowden, University of Surrey, UK
Judee Burgoon, University of Arizona, Tucson, USA
Shaogang Gong, Queen Mary University of London, UK
Venu Govindaraju, University at Buffalo, USA
Meetu Khosla, University of Delhi, India
Daniel McDuff, MIT, USA
Ioannis Patras, Queen Mary University of London, UK
Tomas Pfister, University of Oxford, UK
Sudeep Sarkar, University of South Florida, USA
Nicu Sebe, University of Trento, Italy
Tapio Seppänen, University of Oulu, Finland
Fernando De la Torre, CMU, USA
Su-Jing Wang, Institute of Psychology, Chinese Academy of Sciences, China
Lijun Yin, Binghamton University, USA