Abstract: Our work addresses the problem of autonomous concept formation from a design point of view, providing an initial answer to the question: What are the design features of an architecture supporting the acquisition of different types of concepts by an autonomous agent?
Autonomous agents, that is, systems capable of interacting independently with their environment in pursuit of their own goals, provide the framework in which we study the problem of autonomous concept formation. Humans and most animals may in this sense also be regarded as autonomous agents, but our concern will be with artificial autonomous agents. A detailed survey and discussion of the many issues surrounding the notion of ‘artificial agency’ is beyond the scope of this thesis; a good overview can be found in [Wooldridge and Jennings, 1995]. Instead, we will focus on how artificial agents could be endowed with representational and modelling capabilities.
The ability to form concepts is an important and recognised cognitive ability, thought to play an essential role in related abilities such as categorisation, language understanding, object identification and recognition, and reasoning, all of which can be seen as different aspects of intelligence. Concepts and categories are studied within cognitive science, where scientists are concerned with human conceptual abilities and mental representations of categories, but they have also been addressed in the rather different domain of machine learning and classificatory data analysis, where the focus is on the development of algorithms for clustering and induction problems [Mechelen et al., 1993]. The two fields are quite distinct and have only recently started to interact. Even though the importance of concepts has been recognised, the nature of concepts remains controversial, in the sense that there is no commonly agreed theory of concepts, and it is still far from obvious which representational means are most suited to capture the many cognitive functions that concepts are involved in.
Among the goals of this thesis is the attempt to bring together different lines of argumentation that have emerged within philosophy, cognitive science and AI, in order to establish a solid foundation for further research into the representation and acquisition of concepts by autonomous agents. Thus, our results and conclusions will often be stated in terms of new insights and ideas, rather than as new algorithms or formal methods.
Our focus will be on affordance concepts — discussed in detail in Chapter 4 — and our main contributions will be:
* An argument showing that concepts should be thought of as belonging to different kinds, where the differences among these kinds are to be captured in terms of the architectural features supporting their acquisition.
* A description (and partial implementation) of a minimal architecture (the Innate Adaptive Behaviour architecture – IAB architecture for short) supporting the acquisition of affordance concepts; the IAB architecture is actually a proposal for a sustaining mechanism, in the sense of [Margolis, 1999], for affordances, and makes clear the necessity of a minimal structure for the representation of affordances.
When addressing concept formation in AI, what can be called the ‘system level’ is often overlooked, which means that concepts and categories are rarely studied from the point of view of a system, autonomous and complete, that might need such constructs and can acquire them only by means of interactions with its environment, under the constraints of its cognitive architecture. Within psychology, too, the focus is usually on structural aspects of concepts rather than on developmental issues [Smith and Medin, 1981]. Our approach – an architecture-based approach – is an attempt (i) to show that a system-level perspective on concept formation is indeed possible and worth exploring, and (ii) to provide an initial, perhaps simple, but concrete example of the insights that can be gained from such an approach. Since the methodology that we propose for studying concept formation is a general one, and can also be applied to other types of concepts, we decided to speak broadly of ‘autonomous concept formation’ rather than ‘autonomous affordance-concept formation’ in the title of the thesis.
Abstract: Even though adaptive (trainable) spam filters are a common example of systems that make (semi-)autonomous decisions on behalf of the user, trust in these filters has been underexplored. This paper reports a study of the usage of spam filters in the daily workplace and of user behaviour in training these filters (N=43). User observation, interview and survey techniques were applied to investigate attitudes towards two types of filters: a user-adaptive (trainable) filter and a rule-based filter. While many of our participants invested extensive effort in training their filters, training did not influence filter trust. Instead, the findings indicate that users' awareness and understanding of the filter seriously impact attitudes and behaviour. Specific examples of difficulties related to awareness of filter activity and adaptivity are described, showing concerns relevant to all adaptive and (semi-)autonomous systems that rely on explicit user feedback.
Abstract: This deliverable explores basic characteristics of human groups and teams in order to derive implications for actor-agent teams (AATs). From a socio-psychological group dynamics perspective, group developmental stages, membership, cohesion, subgroups, social status, roles, norms and leadership are defined and explained in order to enhance the understanding of the processes that are part of human group behavior. The document subsequently briefly explains what ‘actor-agent team’ means, making the assumption that the factors that play a role in human-only teams also play a role in AATs, and putting further implications up for discussion.
Abstract: We propose a novel bound on single-variable marginal probability distributions in factor graphs with discrete variables. The bound is obtained by propagating local bounds (convex sets of probability distributions) over a subtree of the factor graph, rooted in the variable of interest. By construction, the method not only bounds the exact marginal probability distribution of a variable, but also its approximate Belief Propagation marginal (“belief”). Thus, apart from providing a practical means to calculate bounds on marginals, our contribution also lies in providing a better understanding of the error made by Belief Propagation. We show that our bound outperforms the state-of-the-art on some inference problems arising in medical diagnosis.
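To make concrete what such bounds bracket, the following minimal Python sketch computes the exact single-variable marginal in a tiny factor graph by brute-force enumeration. The factors, scopes and numerical values are illustrative assumptions, and the paper's bound-propagation method itself is not implemented here; the sketch only shows the quantity that both the exact marginal and the Belief Propagation belief approximate, and that the proposed bounds would enclose.

```python
from itertools import product

# A tiny chain-structured factor graph over three binary variables
# x0, x1, x2. Scopes and factor values are illustrative, not taken
# from the paper.
factors = [
    ({0},    lambda a: 1.5 if a[0] == 0 else 0.5),     # unary factor on x0
    ({0, 1}, lambda a: 2.0 if a[0] == a[1] else 1.0),  # pairwise x0-x1
    ({1, 2}, lambda a: 1.0 if a[1] == a[2] else 0.8),  # pairwise x1-x2
]

def exact_marginal(var, n_vars=3):
    """Marginal P(x_var) obtained by summing the product of all
    factors over every joint assignment (feasible only for tiny
    graphs; this is what bound propagation avoids)."""
    weights = [0.0, 0.0]
    for assignment in product([0, 1], repeat=n_vars):
        w = 1.0
        for _scope, f in factors:
            w *= f(assignment)
        weights[assignment[var]] += w
    z = sum(weights)
    return [w / z for w in weights]

print(exact_marginal(0))  # normalized distribution over x0
```

On a tree-structured example like this chain, Belief Propagation would reproduce this marginal exactly; the interest of the bounds lies in loopy graphs, where the belief and the exact marginal can differ.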
Abstract: Integration of UAVs with Air Traffic Control (ATC) is a worldwide problem. ATC is already troubled by capacity problems due to the vast amount of air traffic. In the future, when large numbers of Unmanned Aerial Vehicles (UAVs) participate in the same airspace, ATC cannot afford UAVs that need special attention. Regulations for UAV flights in civil airspace are still being developed, but it is expected that authorities will require UAVs to operate ‘like manned aircraft’. The implication is that UAVs need to become full participants in a complex socio-technical environment and need to generate ‘man-like’ decisions and behavior. To deal with this complexity, a novel approach to developing UAV autonomy is needed, aimed at creating an environment that fosters shared situation awareness between the UAVs, pilots and controllers. The underlying principle is to develop an understanding of the work domain that can be shared between people and UAVs. A powerful framework for representing the meaningful structure of the environment is Rasmussen’s abstraction hierarchy. This paper proposes that autonomous UAVs can base their reasoning, decisions and actions on the abstraction hierarchy framework and communicate about their goals and intentions with human operators. It is hypothesized that the properties of the framework can create ‘shared situation awareness’ between the artificial and human operators despite the differences in their internal workings.
Abstract: Emergency situations occur unpredictably and cause individuals and organizations to shift their focus and attention immediately to dealing with the situation. When disasters become large in scale, all the limitations resulting from a lack of integration and collaboration among the involved organizations begin to expose themselves and further compound the negative consequences of the event. Often in large-scale disasters the people who must work together have no history of doing so, they have not developed a trust or understanding of one another’s abilities, and the totality of resources they each bring to bear was never before exercised. As a result, the challenges for individual or group decision support systems (DSS) in emergency situations are diverse and immense. In this chapter, we present recent advances in this area and highlight important remaining challenges.
Abstract: Emergency managers need to assess, combine and process large volumes of information with varying degrees of (un)certainty. To keep track of the uncertainties and to facilitate gaining an understanding of the situation, the information is combined into scenarios: stories about the situation and its development. As the situation evolves, typically more information becomes available and already acknowledged information is changed or revised. Meanwhile, decision-makers need to keep track of the scenarios, including an assessment of whether the information constituting a scenario is still valid and relevant for their purposes. Standard techniques to support scenario updating usually involve complete scenario reconstruction, which is far too time-consuming in emergency management. Our approach uses a graph-theoretical scenario formalisation to enable efficient scenario updating. MCDA techniques are employed to decide whether information changes are sufficiently important to warrant scenario updating. A brief analysis of the use case demonstrates a large gain in efficiency.
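The paper's own MCDA model is not reproduced here, but the shape of such a decision step can be sketched as follows: a weighted sum over criterion scores, compared against a threshold, determines whether an information change warrants updating the scenario. The criteria names, weights and threshold below are illustrative assumptions, not the paper's actual formalisation.

```python
# Hypothetical MCDA criteria and weights for scoring an information
# change (each criterion score is assumed to lie in [0, 1]).
CRITERIA_WEIGHTS = {"relevance": 0.5, "reliability": 0.3, "impact": 0.2}

def update_score(change):
    """Aggregate the per-criterion scores of a change into a single
    weighted-sum value in [0, 1]."""
    return sum(CRITERIA_WEIGHTS[c] * change[c] for c in CRITERIA_WEIGHTS)

def warrants_update(change, threshold=0.6):
    """Decide whether the change is important enough to trigger a
    scenario update (threshold is an illustrative tuning parameter)."""
    return update_score(change) >= threshold

# Example: a highly relevant, fairly reliable change with modest impact.
change = {"relevance": 0.9, "reliability": 0.7, "impact": 0.4}
print(update_score(change), warrants_update(change))
```

In practice the weights and threshold would be elicited from emergency-management experts; the point of the sketch is only that the update decision reduces to a cheap scoring step, so that full scenario reconstruction is avoided for unimportant changes.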
Abstract: The successful application of ubiquitous computing in crisis management requires a thorough understanding of the mechanisms that extract information from sensors and communicate it via PDAs to crisis workers. Whereas query and subscribe protocols are well-studied mechanisms for information exchange between different computers, it is not straightforward how to apply them to communication between a computer and a human crisis worker with limited cognitive resources. To examine the imposed cognitive load, we focus on the relation of the information supply mechanism to the workflow, or task model, of the crisis worker. We formalize workflows and interaction mechanisms in colored Petri nets, specify various ways to relate them, and discuss their pros and cons.
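The colored-Petri-net formalisation itself is not reproduced here, but the basic firing mechanics underlying such workflow models can be sketched in a few lines: a transition is enabled when each of its input places holds a token, and firing consumes those tokens and produces a combined token on the output places. The place and transition names below, and the idea of pairing a report with an idle worker, are illustrative assumptions rather than the paper's actual model.

```python
# Petri-net-style marking: each place maps to a list of tokens.
# Tokens carry identity (a simple stand-in for "colored" tokens).
marking = {"report_received": ["report#1"], "worker_idle": ["worker_A"]}

def can_fire(marking, inputs):
    """A transition is enabled if every input place has a token."""
    return all(marking.get(p) for p in inputs)

def fire(marking, inputs, outputs):
    """Consume one token from each input place and produce the
    combined token on each output place."""
    if not can_fire(marking, inputs):
        raise ValueError("transition not enabled")
    consumed = tuple(marking[p].pop(0) for p in inputs)
    for p in outputs:
        marking.setdefault(p, []).append(consumed)
    return marking

# Illustrative 'notify_worker' transition: pair a pending report
# with an idle worker into a handling state.
fire(marking, inputs=["report_received", "worker_idle"],
     outputs=["worker_handling_report"])
print(marking)
```

Relating a workflow net to an interaction mechanism, as the paper does, then amounts to composing two such nets so that the information-supply transitions synchronise with the task-model transitions of the crisis worker.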
Abstract: Improving our knowledge of, and capabilities to handle, disasters and crises is not simply a matter of more information processing and more reliable communication and computation. It requires the exchange of information between many different scientific and technological disciplines and a much better understanding of engineering complex C4I systems-of-systems. This discussion paper will address the need for and purpose of an international community, and how to obtain focus and transfer of scientific results.
Abstract: Choice of an incorrect representation for the design of automation can dramatically increase system complexity. Principles from Cognitive Systems Engineering (CSE), which can be used to identify good representations of the way the ‘world works’, provide a good starting point for automation design.
This paper argues that by choosing the right model for automation design, the added complexity can be limited. But what is the right model for automation? The model of the environment, or ecology, is preferred over the mental models that human operators have developed through interacting with the system. Technology has altered the work environment of the human operator and may have induced too complex or too simplified mental models. A too complex mental model imposes too high a cognitive load, while a too simplified mental model will not be sufficient in all situations. Using the ecology as the basis for the model of automation, the complexity of the automation is constrained to that of the actual environment, with a minimum share of automation-induced complexity.
To illustrate this, we consider the design of a conventional autopilot and one based on total energy control, and discuss the mental model pilots have for energy control. Energy control is part of the fundamental physics of flight. It is therefore part of the environment, and thus of the ecology, for pilots, and a proper understanding of energy control helps the pilot deal with unanticipated events such as the mountain wave condition.