Abstract: The human face not only serves communicative functions; it is also the primary channel for expressing emotion. We develop a prototype of a synthetic 3D face that automatically shows the emotion associated with text-based speech. As a first step, we studied how many and what kinds of emotional expressions humans produce during conversations. Next, we studied the correlation between the displayed facial expressions and text. Based on these results, we developed a set of rules that describes the dependencies between text and emotions through the employment of an ontology. For this purpose, a 2D affective lexicon database has been built using the WordNet database, and the specific facial expressions are stored in a nonverbal dictionary. The results described in this paper enable affect-based multimodal fission.
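As a rough illustration of how such a rule set might operate, the following Python sketch looks a word up in a small affective lexicon and maps the resulting valence/arousal point to an expression tag from a nonverbal dictionary. All entries, thresholds, and names below are invented for illustration; the paper's actual lexicon is derived from WordNet.

```python
# Hypothetical sketch of the text-to-expression rule idea described above.
# Lexicon entries and thresholds are invented; the real lexicon stores
# (valence, arousal) values built from the WordNet database.

AFFECTIVE_LEXICON = {          # word -> (valence, arousal), both in [-1, 1]
    "wonderful": (0.8, 0.5),
    "terrible": (-0.8, 0.6),
    "calm": (0.3, -0.7),
}

NONVERBAL_DICTIONARY = {       # expression tag -> facial display (placeholder)
    "smile": "AU6+AU12",
    "frown": "AU4+AU15",
    "neutral": "no AUs",
}

def expression_for(word: str) -> str:
    """Apply a simple valence/arousal rule to pick an expression tag."""
    valence, arousal = AFFECTIVE_LEXICON.get(word, (0.0, 0.0))
    if valence > 0.5:
        return NONVERBAL_DICTIONARY["smile"]
    if valence < -0.5 and arousal > 0.0:
        return NONVERBAL_DICTIONARY["frown"]
    return NONVERBAL_DICTIONARY["neutral"]

print(expression_for("wonderful"))   # -> AU6+AU12
```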
Abstract: Our software demo package consists of an implementation of an automatic human emotion recognition system. The system is bi-modal and is based on fusing data on facial expressions with emotion extracted from the speech signal. We have integrated the Viola & Jones face detector (OpenCV), an Active Appearance Model, AAM (AAM-API), for extracting the face shape, and Support Vector Machines (LibSVM) for the classification of emotion patterns. We have used an optical flow algorithm to compute the features needed for the classification of facial expressions. Besides the integration of all processing components, the software system accommodates our implementation of the data fusion algorithm. Our C++ implementation has a working frame rate of about 5 fps.
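A minimal Python sketch of the same visual pipeline stages (the authors' system is a C++ integration of OpenCV, AAM-API, and LibSVM, and additionally fuses audio cues; the feature layout below is an assumption) could look like:

```python
# Sketch of detection -> optical-flow features -> SVM classification.
# Illustrative only: the classifier would be trained offline on labeled
# expression sequences, and the real system also fuses speech emotion.
import cv2
import numpy as np
from sklearn.svm import SVC

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def flow_features(prev_gray, gray, face):
    """Average optical flow inside the detected face box as a crude feature."""
    x, y, w, h = face
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray[y:y+h, x:x+w], gray[y:y+h, x:x+w],
        None, 0.5, 3, 15, 3, 5, 1.2, 0)
    return flow.reshape(-1, 2).mean(axis=0)   # mean (dx, dy) motion

clf = SVC(kernel="rbf")   # would be fitted on training data beforehand

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in face_cascade.detectMultiScale(gray, 1.3, 5):
        feats = flow_features(prev_gray, gray, face)
        # label = clf.predict([feats])   # once clf has been fitted
    prev_gray = gray
```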
Abstract: The current paper addresses aspects related to the development of an automatic probabilistic recognition system for facial expressions in video streams. The face analysis component integrates an eye-tracking mechanism based on a Kalman filter. The visual feature detection includes PCA-oriented recognition for ranking the activity in certain facial areas. The facial expressions are described in terms of sets of atomic Action Units (AUs) from the Facial Action Coding System (FACS). The expression recognition engine is based on a Bayesian belief network (BBN) model that also handles the temporal behavior of the visual features.
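The eye-tracking step can be illustrated with a constant-velocity Kalman filter over a 2D eye position; the matrices and noise settings below are a generic textbook parameterization, not the paper's actual tuning.

```python
# Constant-velocity Kalman filter for a 2D eye position (illustrative
# parameterization; the paper's noise settings are not specified here).
import numpy as np

dt = 1.0 / 25.0                       # assumed frame interval
F = np.array([[1, 0, dt, 0],          # state transition: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])
H = np.array([[1, 0, 0, 0],           # we only measure (x, y)
              [0, 1, 0, 0]])
Q = 1e-3 * np.eye(4)                  # process noise (assumed)
R = 1e-1 * np.eye(2)                  # measurement noise (assumed)

x = np.zeros(4)                       # state estimate
P = np.eye(4)                         # state covariance

def kalman_step(z):
    """One predict/update cycle given a measured eye position z = (x, y)."""
    global x, P
    x = F @ x                          # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                # update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x[:2]                       # filtered (x, y)

print(kalman_step(np.array([120.0, 80.0])))
```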
Abstract: Multimodal emotion recognition is receiving increasing attention from the scientific community. Fusing information coming in on different channels of communication, while taking the context into account, seems the right approach. During social interaction the affective load of the interlocutors plays a major role. In the current paper we present a detailed analysis of the process of building an advanced multimodal data corpus for affective state recognition and related domains. This data corpus contains synchronized dual views acquired using a high-speed camera and high-quality audio devices. We paid careful attention to the emotional content of the corpus in all aspects, such as language content and facial expressions. For the recordings we implemented TV-prompter-like software which controlled the recording devices and instructed the actors, to assure the uniformity of the recordings. In this way we achieved a high-quality, controlled emotional data corpus.
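The prompter-driven recording procedure could be sketched as below; the prompt list, timing, and device hooks are placeholders, since the abstract does not specify the software's interface.

```python
# Illustrative prompter loop: show each scripted utterance, trigger the
# (hypothetical) recording devices, and log timestamps for synchronization.
import time

PROMPTS = [("anger", "Why did you do that?"),      # invented examples
           ("joy", "This is wonderful news!")]

def start_recording():   # placeholder for the real device-control calls
    return time.time()

def stop_recording():
    return time.time()

log = []
for emotion, sentence in PROMPTS:
    print(f"[{emotion.upper()}] Please say: {sentence}")
    t0 = start_recording()
    time.sleep(3.0)                     # actor speaks; fixed window assumed
    t1 = stop_recording()
    log.append((emotion, sentence, t0, t1))

print(log)
```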
Abstract: For many decades automatic facial expression recognition has been considered a truly challenging problem in the fields of pattern recognition and robot vision. The current research proposes Relevance Vector Machines (RVM) as a novel classification technique for the recognition of facial expressions in static images. Aspects related to the use of Support Vector Machines are also presented. The test data were selected from the Cohn-Kanade Facial Expression Database. We report 90.84% recognition rates for RVM for six universal expressions, based on a range of experiments. Some discussion comparing different classification methods is included.
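For reference, the RVM places a sparse Bayesian prior over the weights of a kernel expansion; in Tipping's standard formulation the model and its hierarchical prior read:

```latex
% Standard RVM formulation (Tipping, 2001), stated here for reference.
y(\mathbf{x}; \mathbf{w}) = \sum_{n=1}^{N} w_n \, K(\mathbf{x}, \mathbf{x}_n) + w_0,
\qquad
p(\mathbf{w} \mid \boldsymbol{\alpha}) = \prod_{n=0}^{N} \mathcal{N}\!\left(w_n \mid 0, \alpha_n^{-1}\right).
```

Maximizing the marginal likelihood over the hyperparameters drives most of the α_n to infinity, pruning the corresponding basis functions and leaving only a few "relevance vectors", which is what makes the classifier sparse and probabilistic.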
Abstract: Face-related analysis has marked milestones in the field of computer vision for many decades. Many methods have been designed and implemented to meet the specific requirements. In the current paper we present three different classification algorithms that we use to fulfill the tasks of face detection and facial expression recognition.
One of the methods, Relevance Vector Machines (RVM), is a novel supervised learning technique based on a probabilistic formulation of Support Vector Machines. The mathematical basis of the models is presented. The test data were selected from the Cohn-Kanade Facial Expression Database. We report recognition rates for six universal expressions, based on a range of experiments. Some discussion comparing different classification methods is included.
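A hedged sketch of such a cross-validated comparison (using scikit-learn's SVC and logistic regression as stand-ins, since no maintained RVM implementation ships with scikit-learn; the feature matrix X and labels y are placeholders for Cohn-Kanade-derived data):

```python
# Cross-validated comparison of classifiers on precomputed expression
# features; X and y are dummy stand-ins for Cohn-Kanade preprocessing output.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 30))          # dummy features, for illustration
y = rng.integers(0, 6, size=120)        # six universal expressions

for name, clf in [("SVC-rbf", SVC(kernel="rbf")),
                  ("SVC-linear", SVC(kernel="linear")),
                  ("LogReg", LogisticRegression(max_iter=1000))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```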
Abstract: A research project on stress assessment has been running at Delft University of Technology since 1992. One of the aims of the project is to develop an instrument for automated stress assessment. The underlying system is based on the analysis of facial expressions, voice analysis, and the analysis of physiological signals such as heart rate and blood pressure. The analysis of these multimedia data takes place in parallel and is based on Artificial Intelligence technology. In each of the parallel subsystems, corresponding to sensor, image, and sound data, the functionality is split up into a number of layers: a filtering and reduction layer, a preprocessing layer, a processor layer, an application layer, and an output layer. The results of the analysis are combined by a central interpreter, resulting in an overall stress measure. In this paper the stress assessment is used to monitor the vigilance levels of car drivers, with a focus on voice analysis.
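The layered, parallel architecture could be sketched as follows; the class and method names are invented to mirror the layer structure described above, and the fusion rule is a simple assumed average.

```python
# Skeleton of the parallel layered subsystems and the central interpreter.
# All names are hypothetical; each subsystem would wrap the real analysers.
class Subsystem:
    """One modality pipeline: filter/reduce -> preprocess -> process."""
    def __init__(self, name):
        self.name = name

    def analyse(self, raw):
        filtered = self.filter_and_reduce(raw)
        features = self.preprocess(filtered)
        return self.process(features)          # a stress score in [0, 1]

    def filter_and_reduce(self, raw):  return raw
    def preprocess(self, data):       return data
    def process(self, features):      return 0.5   # placeholder score

def central_interpreter(scores):
    """Fuse per-modality stress scores into one overall measure."""
    return sum(scores) / len(scores)            # simple average, assumed

subsystems = [Subsystem("voice"), Subsystem("face"), Subsystem("physiology")]
overall = central_interpreter([s.analyse(raw=None) for s in subsystems])
print(f"overall stress: {overall:.2f}")
```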
Abstract: The study of human facial expressions is one of the most challenging domains in the pattern recognition community. Each facial expression is generated by non-rigid object deformations, and these deformations are person-dependent. Automatic recognition of facial expressions is a process primarily based on the analysis of permanent and transient features of the face, which can only be assessed with errors of some degree. The expression recognition model follows the specification of the Facial Action Coding System (FACS) of Ekman and Friesen [Ekman, Friesen 1978]. Hard constraints on the scene processing and recording conditions limit the robustness of the analysis. In order to manage the uncertainties and the lack of information, we set up a probabilistically oriented framework. The goal of the project was to design and implement a system for the automatic recognition of human facial expressions in video streams. The results of the project are of great importance for a broad range of applications that relate to both research and applied topics.
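The probabilistic idea can be illustrated with a toy Bayes-rule inference from observed Action Units to an expression label; the probability tables below are invented and far smaller than a real FACS-based network, which would also model temporal dependencies.

```python
# Toy posterior P(expression | observed AUs) via Bayes' rule, with invented
# likelihoods and a naive independence assumption across AU observations.
import numpy as np

EXPRESSIONS = ["happiness", "sadness", "surprise"]
prior = np.array([1/3, 1/3, 1/3])

# P(AU present | expression), rows = expressions, cols = (AU6, AU12, AU1)
likelihood = np.array([[0.8, 0.9, 0.1],    # happiness
                       [0.1, 0.05, 0.6],   # sadness
                       [0.2, 0.1, 0.9]])   # surprise

observed = np.array([1, 1, 0])             # AU6 and AU12 detected, AU1 not

# Combine the evidence: use P(AU|expr) when present, 1 - P(AU|expr) otherwise
p_obs = np.prod(np.where(observed, likelihood, 1 - likelihood), axis=1)
posterior = prior * p_obs
posterior /= posterior.sum()

for e, p in zip(EXPRESSIONS, posterior):
    print(f"P({e} | AUs) = {p:.3f}")
```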
Abstract: Emotion influences the choice of facial expression. In a dialogue, the emotional state is co-determined by the events that happen during the dialogue. To enable rich, human-like expressiveness in a dialogue agent, the facial displays should correctly express the state of the agent in the dialogue. This paper reports on our study in building knowledge of how to appropriately express emotions in face-to-face communication. We have analyzed the appearance of facial expressions and the corresponding dialogue text (in balloons) of characters in selected cartoon illustrations. From the facial expressions and dialogue text, we independently extracted the emotional state and the communicative function. We also collected emotion words from the dialogue text. The emotional states (labels) and the emotion words are represented along two dimensions, "arousal" and "valence". Here, the relationship between facial expressions and text was explored. The final goal of this research is to develop emotional-display rules for a text-based dialogue agent.
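The two-dimensional representation can be illustrated by placing emotion words in valence/arousal space and assigning each the nearest emotional-state label; all coordinates below are invented placeholders, not the annotated values from the cartoon corpus.

```python
# Nearest-label assignment in (valence, arousal) space; the centroids are
# illustrative only.
import math

STATE_CENTROIDS = {            # emotional-state label -> (valence, arousal)
    "happy": (0.8, 0.5),
    "angry": (-0.6, 0.8),
    "sad": (-0.7, -0.4),
}

def nearest_state(valence, arousal):
    return min(STATE_CENTROIDS,
               key=lambda s: math.dist(STATE_CENTROIDS[s], (valence, arousal)))

print(nearest_state(0.7, 0.3))   # -> happy
```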
Abstract: In the past, a crisis event was reported by local witnesses who made phone calls to the emergency services, describing by speech what they observed at the crisis site. Recent improvements in the area of human-computer interfaces make possible the development of context-aware systems for crisis management that support people in escaping a crisis even before external help is available on site. Apart from collecting people's reports on the crisis, these systems are expected to automatically extract useful clues during typical human-computer interaction sessions. The novelty of the current research resides in the attempt to apply computer vision techniques to the automatic evaluation of facial expressions during human-computer interaction sessions with a crisis management system. The current paper details an approach for an automatic facial expression recognition module that may be included in crisis-oriented applications. The algorithm uses an Active Appearance Model for facial shape extraction and an SVM classifier for Action Unit detection and facial expression recognition.
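The AU-detection stage described above can be sketched as one binary SVM per Action Unit over AAM shape features, followed by a simple rule table mapping AU combinations to an expression; the rules, feature dimensions, and training data below are placeholders.

```python
# Per-AU binary SVMs over (placeholder) AAM shape features, then a rule
# table mapping detected AU sets to an expression label.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 116))            # e.g. 58 AAM landmarks -> 116 dims
Y = rng.integers(0, 2, size=(200, 3))      # dummy labels for AU6, AU12, AU4

AU_NAMES = ["AU6", "AU12", "AU4"]
detectors = {au: SVC(kernel="linear").fit(X, Y[:, i])
             for i, au in enumerate(AU_NAMES)}

RULES = {frozenset({"AU6", "AU12"}): "happiness",   # illustrative rules only
         frozenset({"AU4"}): "anger"}

def recognise(shape_vector):
    """Detect active AUs, then map the AU set to an expression label."""
    active = {au for au, clf in detectors.items()
              if clf.predict([shape_vector])[0] == 1}
    return RULES.get(frozenset(active), "neutral")

print(recognise(X[0]))
```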