Abstract: Facial analysis has marked milestones in the field of computer vision for many decades. Many methods have been designed and implemented to meet its specific requirements. One of these methods, the Relevance Vector Machine (RVM), is a supervised learning technique based on a probabilistic counterpart of the Support Vector Machine. The data for training were selected from the Cohn-Kanade Facial Expression Database. The application associated with the current research demonstrates the use of the RVM as a novel classifier for face detection.
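The predictive side of an RVM classifier can be sketched compactly: the trained model keeps only a sparse set of "relevance vectors" with weights, and a test sample's class posterior is a kernel expansion passed through a logistic sigmoid. The function names and the RBF kernel choice below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def rbf_kernel(x, y, gamma=0.5):
    # Gaussian (RBF) kernel between two feature vectors.
    return np.exp(-gamma * np.sum((x - y) ** 2))

def rvm_predict_proba(x, relevance_vectors, weights, bias=0.0):
    # RVM prediction for binary classification: a sparse kernel
    # expansion over the retained relevance vectors, squashed by
    # a logistic sigmoid into a class posterior.
    s = bias + sum(w * rbf_kernel(x, rv)
                   for w, rv in zip(weights, relevance_vectors))
    return 1.0 / (1.0 + np.exp(-s))  # e.g. P(face | x)
```

Training (estimating the weights and pruning basis functions via the marginal likelihood) is the expensive part; at run time only this sparse expansion is evaluated, which is what makes RVMs attractive for detection tasks.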
Abstract: The current paper addresses aspects related to the development of an automatic probabilistic recognition system for facial expressions in video streams. The face analysis component integrates an eye-tracking mechanism based on a Kalman filter. The visual feature detection includes PCA-based recognition for ranking the activity in certain facial areas. The facial expressions are described in terms of sets of atomic Action Units (AUs) from the Facial Action Coding System (FACS). The expression recognition engine is supported by a Bayesian Belief Network (BBN) model that also handles the temporal behavior of the visual features.
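A Kalman-filter eye tracker of the kind mentioned above is typically built on a constant-velocity state model, predicting the eye centre's next position and correcting it with each new measurement. The noise covariances below are placeholder assumptions, not values from the paper:

```python
import numpy as np

# State: [x, y, vx, vy]; constant-velocity motion model.
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)  # we observe position only
Q = np.eye(4) * 1e-3   # process noise (assumed)
R = np.eye(2) * 1e-1   # measurement noise (assumed)

def kalman_step(x, P, z):
    # One predict/update cycle for a measurement z = (x, y)
    # of the detected eye centre.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    y = z - H @ x_pred                      # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new
```

The prediction step also gives the tracker a search window for the next frame, so the eye detector does not have to scan the whole image.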
Abstract: Multimodal emotion recognition is receiving increasing attention from the scientific community. Fusing information coming over different channels of communication, while taking the context into account, is a natural approach. During social interaction, the affective load of the interlocutors plays a major role. In the current paper we present a detailed analysis of the process of building an advanced multimodal data corpus for affective state recognition and related domains. The corpus contains a synchronized dual view acquired with a high-speed camera and high-quality audio devices. We paid careful attention to the emotional content of the corpus in all aspects, such as language content and facial expressions. For the recordings we implemented TV-prompter-like software which controlled the recording devices and instructed the actors, so as to assure the uniformity of the recordings. In this way we obtained a high-quality, controlled emotional data corpus.
Abstract: Facial analysis has marked milestones in the field of computer vision for many decades. Many methods have been designed and implemented to meet its specific requirements. In the current paper we present three different classification algorithms that we use to fulfill the tasks of face detection and facial expression recognition. One of these methods, the Relevance Vector Machine (RVM), is a supervised learning technique based on a probabilistic counterpart of the Support Vector Machine. The mathematical basis of the models is presented. The data for testing were selected from the Cohn-Kanade Facial Expression Database. We report recognition rates for the six universal expressions based on a range of experiments, and include a discussion comparing the different classification methods.
Abstract: A research project on stress assessment has been running at Delft University of Technology since 1992. One of the aims of the project is to develop an instrument for automated stress assessment. The underlying system is based on the analysis of facial expressions, voice analysis, and the analysis of physiological signals such as heart rate and blood pressure. The analysis of these multimedia data takes place in parallel and is based on Artificial Intelligence technology. In each of the parallel subsystems, corresponding to sensor, image, and sound data, the functionality is split into a number of layers: a filtering and reduction layer, a preprocessing layer, a processor layer, an application layer, and an output layer. The results of the analyses are combined by a central interpreter, resulting in an overall stress measure. In this paper the stress assessment is used to monitor the vigilance levels of car drivers, with a focus on voice analysis.
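The central-interpreter idea of combining parallel subsystem outputs into one overall measure can be sketched as a simple weighted fusion. The function name, the [0, 1] score convention, and the weighting scheme are illustrative assumptions; the paper's interpreter may be considerably more elaborate:

```python
def fuse_stress(scores, weights=None):
    # Central interpreter sketch: combine per-channel stress scores
    # in [0, 1] (e.g. face, voice, physiology) into one overall
    # stress measure via a weighted average. Unweighted by default.
    if weights is None:
        weights = {ch: 1.0 for ch in scores}
    total = sum(weights[ch] for ch in scores)
    return sum(weights[ch] * s for ch, s in scores.items()) / total
```

Per-channel weights give a natural place to encode how reliable each subsystem is for a given subject or recording condition.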
Abstract: The system described in this paper provides a Web interface for fully automatic audio-video human emotion recognition. The analysis is focused on the set of six basic emotions plus the neutral type. Different classifiers are involved in the process: face detection (AdaBoost), facial expression recognition (SVM and other models), and emotion recognition from speech (GentleBoost). The Active Appearance Model (AAM) is used to obtain information on the shapes of the faces to be analyzed. The facial expression recognition is frame-based, and no temporal patterns of emotions are modeled. Emotion recognition from movies is performed separately on the sound and video frames; the algorithm does not handle dependencies between audio and video during the analysis. The methodologies for data processing are explained, and specific performance measures for the emotion recognition are presented.
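Since the video analysis is frame-based and independent of the audio channel, a clip-level result can be obtained by aggregating the per-frame labels (e.g. by majority vote) and reporting the audio result alongside it. The function name and the majority-vote rule below are illustrative assumptions:

```python
from collections import Counter

EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "sadness", "surprise", "neutral"]

def label_clip(frame_labels, audio_label):
    # Frame-based video analysis: majority vote over the per-frame
    # expression labels. The audio label is produced independently,
    # since the pipeline does not model audio-video dependencies.
    video_label, _ = Counter(frame_labels).most_common(1)[0]
    return {"video": video_label, "audio": audio_label}
```

This makes the stated limitation concrete: any late fusion of the two labels would happen after both channels have been classified in isolation.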
Abstract: The study of human facial expressions is one of the most challenging domains in the pattern recognition community. Each facial expression is generated by non-rigid object deformations, and these deformations are person-dependent. Automatic recognition of facial expressions is a process primarily based on the analysis of permanent and transient features of the face, which can only be assessed with some degree of error. The expression recognition model follows the specification of the Facial Action Coding System (FACS) of Ekman and Friesen [Ekman, Friesen 1978]. Hard constraints on scene processing and recording conditions limit the robustness of the analysis. In order to manage the uncertainties and the lack of information, we set up a probabilistic framework. The goal of the project was to design and implement a system for automatic recognition of human facial expressions in video streams. The results of the project are of great importance for a broad area of applications relating to both research and applied topics.
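The FACS-based approach maps detected Action Units to expression prototypes. A minimal sketch, assuming a few commonly cited AU combinations from the FACS literature (the exact prototype sets and the scoring rule are illustrative, not the paper's model):

```python
# Illustrative AU-to-expression prototypes (simplified; the exact
# AU sets here are assumptions drawn from common FACS examples).
PROTOTYPES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "surprise": {1, 2, 5, 26},   # brow raisers + upper lid + jaw drop
    "sadness": {1, 4, 15},       # brows + lip corner depressor
}

def score_expression(active_aus):
    # Score each expression by the fraction of its prototype AUs
    # that are currently active, and return the best match.
    scores = {expr: len(aus & active_aus) / len(aus)
              for expr, aus in PROTOTYPES.items()}
    return max(scores, key=scores.get)
```

In a probabilistic framework such as the one described, these hard set intersections would be replaced by conditional probabilities over noisy AU detections, but the prototype structure is the same.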
Abstract: At TUDelft there is a project aiming at the realization of a fully automatic emotion recognition system on the basis of facialanalysis. The exploited approach splits the system into four components. Face detection, facial characteristic point extraction, tracking and classification. The focus in this paper will only be on the first two components. Face
detection is employed by boosting simple rectangle Haar-like features that give a decent representation of the face. These features also allow the differentiation between a face and a non-face. The boosting algorithm is combined with an
Evolutionary Search to speed up the overall search time. Facial characteristic points (FCP) are extracted from the detected faces. The same technique applied on faces is utilized for this purpose. Additionally, FCP extraction using corner detection methods and brightness distribution has also been considered. Finally, after retrieving the required FCPs the emotion of the facialexpression can be determined. The classification of the Haar-like features is done by the Relevance Vector Machine (RVM).
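Haar-like features are cheap to evaluate because they reduce to a handful of lookups on an integral image (summed-area table). A minimal sketch of the mechanism, with assumed function names:

```python
import numpy as np

def integral_image(img):
    # Summed-area table: ii[y, x] = sum of img[:y+1, :x+1].
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def rect_sum(ii, top, left, h, w):
    # Sum of pixels in a rectangle via 4 lookups on a padded table.
    padded = np.pad(ii, ((1, 0), (1, 0)))
    return (padded[top + h, left + w] - padded[top, left + w]
            - padded[top + h, left] + padded[top, left])

def haar_two_rect_vertical(ii, top, left, h, w):
    # Two-rectangle Haar-like feature: intensity of the upper half
    # minus the lower half (h must be even). A large response marks
    # a bright-over-dark edge, e.g. forehead over eye region.
    upper = rect_sum(ii, top, left, h // 2, w)
    lower = rect_sum(ii, top + h // 2, left, h // 2, w)
    return upper - lower
```

Because every feature costs only a constant number of table lookups regardless of rectangle size, boosting can afford to evaluate very large feature pools, which is where the Evolutionary Search for promising features pays off.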