Abstract: Computer-Assisted Instruction (CAI) systems enable fully automated simulator-based training. Traditionally, a CAI system does not support a true dialogue between the learner and the virtual instructor. Most frequently, the system acts like a human expert, authoritatively providing feedback and ways to improve task performance. In this conference paper, we describe an educational agent that enables a dialogue between the learner and the agent. The agent, called the companion agent, acts like a virtual co-learner, for example by deliberating about new operational measures after a change in the situation. The agent operates at the same authority level as the learner and is therefore less threatening than a traditional virtual instructor. We believe companion agents are particularly useful in modern, constructive learning situations where learners can take control of their own learning process. Potential applications of companion agents lie within the civil domain (for example, a civil tunnel operator during tunnel surveillance training) and the military domain (for example, embedded training in tactical surveillance).
This paper was selected as one of the Continuing Education Unit (CEU) papers for the 2007 Interservice/Industry Training, Simulation and Education Conference (I/ITSEC). The I/ITSEC board states that only those papers that demonstrate exceptional innovation, research, experimentation, and documentation in an area of new technology are selected for CEU credit.
Abstract: Collaboration between humans (actors) and artificial entities (agents) can potentially boost performance. Agents, as complementary artificially intelligent entities, can relieve actors of certain activities while increasing collective effectiveness. This paper describes our approach for experimentation with actors, agents, and their interaction. The approach is based on a principled combination of existing empirical research methods and is illustrated by a small experiment that assesses the performance of a specific actor-agent team in comparison with an actor-only team in an incident management context. The REsearch and Simulation toolKit (RESK) is instrumental for controlled and repeatable experimentation. The indicative findings show that the approach is viable and forms a basis for further data collection and comparative experiments. The approach supports applied actor-agent research in showing its (dis)advantages as compared to actor-only solutions.
Abstract: In this position paper, a number of hypotheses are posited concerning the effect of measurable human factors, such as subjective stress, arousal, and mood, on the performance of human decision making, taking into account the amount of risk involved in the decision. The proposed domain of application is crisis management: a situation in which time limits, uncertainty, and possibly dire consequences provide an ideal context for assessing the validity of our hypotheses. Experimentation involves people in both management and non-management functions. The final objective is to provide the basis for a demonstrator that can measure mood, arousal, and subjective stress on the job, provide runtime feedback, and thereby positively influence human decision-making processes.