Abstract: This research report discusses human group characteristics as a stepping stone to the study of human-agent team characteristics and dynamics. A human-agent team, or so-called actor-agent team (AAT), is a group of humans and agents who interact in a coherent and coordinated way towards a common goal. The concept of AATs relates to that of actor-agent communities (AACs): groups of humans and artificial systems (socio-technical information systems) that work intimately together to achieve a common goal (i.e. solve a problem) (Iacob et al., 2009).
AATs are envisioned to increase human performance in domains such as safety and security, emergency management, and traffic control. However, the concept of AATs brings many challenges. Besides realising agents as team members and realising real-world AATs, the interaction between agents and humans is itself a challenge. If agents are to become (task-performing) group members, team membership demands much from them in terms of human-agent interaction. How should agents be designed to become team members in an AAT? How can humans best interact with agents? When should humans trust an agent, or rely on it?
This document discusses human group characteristics in order to draw implications for AAT dynamics. It is a follow-up to Gouman et al. (2008), which discussed stages of team development, group membership and cohesion, subgroups, norms, roles, status, and leadership. The current report first addresses communication and decision making, after which team performance and implications for AATs are discussed.
Abstract: The project ‘SlimVerbinden’ addresses the challenge of retaining autonomy while sharing information among multiple parties. Based on a web of trust, information providers can grant and deny access to information, while information consumers can delegate access to specific members within their ‘organization’ (which can be defined within and/or across existing organizations). The policy- and PKI-based realization enables an agent-based secure shared distributed dataspace in which no single party knows ‘everything’ and the barriers to information sharing are lowered. The use-case involves public–private cooperation during the mitigation of an incident and drives the development of an operational
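The grant/delegate model described in this abstract can be illustrated with a minimal sketch. This is not the SlimVerbinden implementation (which is policy- and PKI-based); it is a hypothetical in-memory model showing only the access logic: a provider grants access to a consumer, the consumer may delegate that access onward within its organization, and revoking the original grant invalidates the whole delegation chain.

```python
class DataSpace:
    """Toy model of delegated access control (illustrative only)."""

    def __init__(self):
        self.grants = set()    # direct grants: (provider, consumer, resource)
        self.delegations = {}  # (delegator, resource) -> set of delegatees

    def grant(self, provider, consumer, resource):
        self.grants.add((provider, consumer, resource))

    def revoke(self, provider, consumer, resource):
        self.grants.discard((provider, consumer, resource))

    def delegate(self, delegator, delegatee, resource):
        self.delegations.setdefault((delegator, resource), set()).add(delegatee)

    def has_access(self, provider, party, resource, _seen=None):
        # Direct grant from the provider?
        if (provider, party, resource) in self.grants:
            return True
        # Otherwise: did some delegator who themselves has access
        # delegate this resource to the party? (_seen prevents cycles)
        _seen = _seen if _seen is not None else set()
        for (delegator, res), delegatees in self.delegations.items():
            if res == resource and party in delegatees and delegator not in _seen:
                _seen.add(delegator)
                if self.has_access(provider, delegator, resource, _seen):
                    return True
        return False
```

For example, if the police grant a fire department access to incident data and the fire department delegates that access to one of its officers, the officer has access only as long as the original grant stands; revoking it cuts off the delegated access as well.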
Abstract: Even though adaptive (trainable) spam filters are a common example of systems that make (semi-)autonomous decisions on behalf of the user, trust in these filters has been underexplored. This paper reports a study of spam filter usage in the daily workplace and of user behaviour in training these filters (N=43). User observation, interview, and survey techniques were applied to investigate attitudes towards two types of filter: a user-adaptive (trainable) filter and a rule-based filter. While many of our participants invested extensive effort in training their filters, training did not influence filter trust. Instead, the findings indicate that users' awareness and understanding of the filter seriously impact attitudes and behaviour. Specific examples of difficulties related to awareness of filter activity and adaptivity are described, showing concerns relevant to all adaptive and (semi-)autonomous systems that rely on explicit user feedback.
Abstract: Emergency situations occur unpredictably and cause individuals and organizations to shift their focus and attention immediately to dealing with the situation. When disasters become large in scale, all the limitations resulting from a lack of integration and collaboration among the involved organizations begin to expose themselves and further compound the negative consequences of the event. Often in large-scale disasters the people who must work together have no history of doing so, have not developed trust in or an understanding of one another’s abilities, and the totality of resources they each bring to bear has never before been exercised. As a result, the challenges for individual and group decision support systems (DSS) in emergency situations are diverse and immense. In this chapter, we present recent advances in this area and highlight important remaining challenges.
Abstract: In-vehicle agents can potentially avert dangerous driving situations by adapting to the driver, context and traffic conditions. However, perceptions of system autonomy, the way agents offer assistance, driving contexts and users’ personality traits can all affect acceptance and trust. This paper reports on a survey-based experiment (N=100) that further investigates how these factors affect attitudes. The 2×2, between-subject, video-based design varied driving context (high- vs. low-density traffic) and type of agent (providing information vs. providing instructions). Both type of agent and traffic context affected attitudes towards the agent, with attitudes being most positive towards the instructive agent in a light traffic context. Participants scoring high on locus of control reported a higher intent to follow up on the agent's instructions. Driving-related anxiety and aggression increased the perceived urgency of the video scenario.
Abstract: This work deals with the question of how to design Embodied Conversational Agents (ECAs) in such a way that users will experience attractive, human-like interaction with a trustworthy and competent social entity. We identify a number of design issues that need to be addressed in order to achieve this. The design issues are divided into four categories: appearance, social context, communication, and domain expertise. By means of an experiment with a prototype agent, we establish a set of practical design guidelines for a subset of the design issues identified.
Abstract: This PhD project investigates interaction with user-adaptive systems. Experiments and user studies are used to explore the factors that lead to trust in and acceptance of such systems. This research aims to inform the design of transparent user-adaptive and (semi-)autonomous systems. The focus is on interaction with content-based user-adaptive information filters.
Abstract: Mobile services can provide users with information relevant to their current circumstances. Distant services, in turn, can acquire local information from people in an area of interest. Socially expressive agent behaviour has been suggested as a way to build reciprocal relationships and to increase user response to such requests. This between-subject, Wizard-of-Oz experiment investigated the potential of such behaviours. 44 participants performed a search task in an urgent context while being interrupted by a mobile agent that both provided and requested information. The socially expressive behaviour shown in this study did not increase compliance with requests; instead, it reduced trust in the provided information and compliance with warnings. It also negatively affected the affective experience of users scoring lower on empathy as a personality trait. Inappropriate social expressiveness can have serious consequences; we elaborate here on the reasons for our negative results.