Abstract: In recent years, we have developed a framework of human-computer interaction that offers recognition of various communication modalities, including speech, lip movement, facial expression, handwriting and drawing, body gesture, text, and visual symbols. The framework allows the rapid construction of a multimodal, multi-device, and multi-user communication system for crisis management. This paper reports on the multimodal information presentation module, which combines language, speech, visual language, and graphics and can be used in isolation or as part of the framework. It provides a communication channel between the system and users with different communication devices. The module is able to specify and produce context-sensitive and user-tailored output. By employing an ontology, it receives the system's view of the world and dialogue actions from a dialogue manager and generates appropriate multimodal responses.
Abstract: In recent years, we have developed a framework of human-computer interaction that offers recognition of various communication modalities, including speech, lip movement, facial expression, handwriting and drawing, body gesture, text, and visual symbols. The framework allows the rapid construction of a multimodal, multi-device, and multi-user communication system for crisis management. This paper reports the approaches used in the multi-user information integration and multimodal presentation modules, which can be used in isolation or as part of the framework. The latter is able to specify and produce context-sensitive and user-tailored output combining language, speech, visual language, and graphics. These modules provide a communication channel between the system and users with different communication devices. By employing an ontology, the system's view of the world is constructed from multi-user observations, and appropriate multimodal responses are generated.
Abstract: The successful application of ubiquitous computing in crisis management requires a thorough understanding of the mechanisms that extract information from sensors and communicate it via PDAs to crisis workers. Whereas query and subscribe protocols are well-studied mechanisms for information exchange between different computers, it is not straightforward to apply them to communication between a computer and a human crisis worker with limited cognitive resources. To examine the imposed cognitive load, we focus on the relation of the information supply mechanism to the workflow, or task model, of the crisis worker. We formalize workflows and interaction mechanisms in colored Petri nets, specify various ways to relate them, and discuss their pros and cons.
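The abstract does not give the actual nets used in the paper. As a minimal illustration of the formalism, the following sketch models a hypothetical fragment of such a workflow as a simple (uncolored) Petri net: a sensor report can only be processed while the worker holds a free-attention token, a crude stand-in for the limited-cognitive-resources constraint. Place and transition names are invented for illustration.

```python
# Hypothetical Petri-net fragment (illustrative names, not the paper's model):
# a report from a sensor/PDA is only handled when the worker's single
# 'attention' token is available.

def enabled(marking, transition):
    """A transition may fire only if every input place holds a token."""
    return all(marking.get(p, 0) >= 1 for p in transition["in"])

def fire(marking, transition):
    """Consume one token from each input place, produce one on each output."""
    assert enabled(marking, transition)
    m = dict(marking)
    for p in transition["in"]:
        m[p] -= 1
    for p in transition["out"]:
        m[p] = m.get(p, 0) + 1
    return m

# Places: 'report' (pending message), 'attention' (worker free),
# 'handled' (processed reports). One transition couples them and
# returns the attention token after processing.
process = {"in": ["report", "attention"], "out": ["handled", "attention"]}

marking = {"report": 2, "attention": 1, "handled": 0}
marking = fire(marking, process)   # handle the first report
marking = fire(marking, process)   # attention was returned, so fire again
print(marking)                     # {'report': 0, 'attention': 1, 'handled': 2}
```

Colored Petri nets extend this picture by attaching data values (colors) to tokens, which is what makes them expressive enough to encode both the task model and the query/subscribe interaction mechanisms.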
Abstract: As language is fundamental to human activities, proficiency in other languages is important. Beyond enabling communication, such knowledge can also be a tool for survival. With the introduction of computerized mobile devices such as PDAs, new opportunities for communicating in other languages have arisen. This paper describes a new, language-independent communication paradigm using an icon language on a PDA. Users can create iconic messages as realizations of the concepts or ideas they have in mind. The proof-of-concept tool is able to interpret the messages and convert them to (natural-language) text and speech in different languages. To speed up the selection of the next icon, the tool provides icon prediction. Our user test results confirmed that, using the provided icons, our target users could express their concepts and ideas solely through a spatial arrangement of icons.
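The abstract does not state how icon prediction is implemented. One plausible sketch, offered purely as an assumption, is a bigram model over previously composed messages: the tool ranks candidate next icons by how often they followed the last selected icon. All icon names and training messages below are illustrative.

```python
# Hypothetical icon-prediction sketch (bigram counts); the actual tool's
# prediction method is not specified in the abstract.
from collections import Counter, defaultdict

class IconPredictor:
    def __init__(self):
        # bigrams[prev][next] = how often 'next' followed 'prev'
        self.bigrams = defaultdict(Counter)

    def train(self, messages):
        for msg in messages:
            for prev, nxt in zip(msg, msg[1:]):
                self.bigrams[prev][nxt] += 1

    def suggest(self, last_icon, k=3):
        """Return up to k icons most often seen after last_icon."""
        return [icon for icon, _ in self.bigrams[last_icon].most_common(k)]

p = IconPredictor()
p.train([["fire", "house", "help"],
         ["fire", "house", "evacuate"],
         ["fire", "injury", "help"]])
print(p.suggest("fire"))  # ['house', 'injury']
```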
Abstract: The recognition of a person's internal emotional state plays an important role in several human-related fields; among them, human-computer interaction has recently received special attention. The current research analyzes segmentation methods and the performance of the GentleBoost classifier on emotion recognition from speech. The data set used for emotion analysis is Berlin, a database of German emotional speech; a second data set, DES (Danish Emotional Speech), is used for comparison purposes. Our contribution to the research community is a novel, extensive study on the efficiency of using distinct numbers of frames per speech utterance for emotion recognition. Finally, a set of GentleBoost 'committees' with optimal classification rates is determined, based on an exhaustive study of the generated classifiers and of different types of segmentation.
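For readers unfamiliar with GentleBoost (the "gentle" variant of AdaBoost, which fits each weak learner by weighted least squares), the following is a minimal sketch of the algorithm using regression stumps on toy binary data. It illustrates only the boosting mechanism; it is not the paper's actual feature set, segmentation, or multi-class setup.

```python
# Minimal GentleBoost sketch on toy data (illustrative, not the paper's setup).
import numpy as np

def fit_stump(X, y, w):
    """Best single-feature threshold stump under weighted squared error."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            mask = X[:, j] > thr
            if mask.all() or (~mask).all():
                continue
            a = np.average(y[mask], weights=w[mask])    # response above thr
            b = np.average(y[~mask], weights=w[~mask])  # response below thr
            err = np.sum(w * (y - np.where(mask, a, b)) ** 2)
            if best is None or err < best[0]:
                best = (err, j, thr, a, b)
    return best[1:]

def gentleboost(X, y, rounds=5):
    """y in {-1, +1}. Returns a 'committee' of regression stumps."""
    w = np.full(len(y), 1.0 / len(y))
    committee = []
    for _ in range(rounds):
        j, thr, a, b = fit_stump(X, y, w)
        f = np.where(X[:, j] > thr, a, b)
        committee.append((j, thr, a, b))
        w = w * np.exp(-y * f)   # gentle reweighting step
        w /= w.sum()
    return committee

def predict(committee, X):
    F = np.zeros(len(X))
    for j, thr, a, b in committee:
        F += np.where(X[:, j] > thr, a, b)
    return np.sign(F)

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, 1])
committee = gentleboost(X, y)
print(predict(committee, X))  # recovers the labels
```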
Abstract: In the past, a crisis event was reported by local witnesses who made phone calls to the emergency services, describing by speech what they had observed at the crisis site. Recent improvements in the area of human-computer interfaces make possible the development of context-aware systems for crisis management that support people in escaping a crisis even before external help is available on site. Apart from collecting people's reports on the crisis, these systems are expected to automatically extract useful clues during typical human-computer interaction sessions. The novelty of the current research resides in the attempt to apply computer vision techniques to the automatic evaluation of facial expressions during human-computer interaction sessions with a crisis management system. The current paper details an approach for an automatic facial expression recognition module that may be included in crisis-oriented applications. The algorithm uses an Active Appearance Model for facial shape extraction and an SVM classifier for Action Unit detection and facial expression recognition.
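To make the final classification stage concrete, here is a hedged, self-contained sketch of a linear SVM trained with the Pegasos sub-gradient method, standing in for the paper's Action Unit classifier (whose kernel and training procedure the abstract does not specify). The 2-dimensional "shape feature" vectors are toy stand-ins for features an Active Appearance Model would extract from a face.

```python
# Illustrative linear SVM via Pegasos sub-gradient descent; toy features,
# not the paper's AAM-derived shape parameters.
import numpy as np

def pegasos_svm(X, y, lam=0.01, epochs=200, seed=0):
    """y in {-1, +1}. Returns weight vector w of a linear SVM."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)          # decaying step size
            if y[i] * (w @ X[i]) < 1:      # margin violated: hinge-loss step
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                          # only the regularizer shrinks w
                w = (1 - eta * lam) * w
    return w

# Toy "AU present" (+1) vs "AU absent" (-1) feature vectors.
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
w = pegasos_svm(X, y)
print(np.sign(X @ w))  # separates the two classes
```

In a full pipeline, per-frame AU decisions like this would then be mapped onto facial expression categories.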