Abstract: Omnidirectional vision is an important sensing modality in robotics research. The catadioptric omnidirectional camera with a hyperbolic convex mirror is a common omnidirectional vision system in the robotics research field, as it offers several advantages over other vision systems. This paper describes the development and validation of such a system for the RoboCup Rescue League simulator USARSim.
After an introduction to the mathematical properties of a real catadioptric omnidirectional camera, we give a general overview of the simulation method. We then compare different 3D mirror meshes with respect to quality and system performance. Simulation data is also compared to real omnidirectional vision data obtained on a 4-Legged League soccer field. The comparison is based on color-histogram landmark detection and robot self-localization with an Extended Kalman filter.
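The color-histogram landmark detection mentioned above can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: an image region is reduced to a quantized RGB histogram and compared against a stored reference histogram using histogram intersection, where a high score marks the region as a landmark candidate.

```python
# Illustrative sketch (not the paper's implementation): classify an image
# region as a candidate landmark by comparing its color histogram against
# a stored reference histogram using histogram intersection.

def color_histogram(pixels, bins=8):
    """Quantize (r, g, b) pixels into a normalized, flattened 3D histogram."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1.0
    total = float(len(pixels)) or 1.0  # avoid division by zero
    return [h / total for h in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical color distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# Usage: a mostly-orange region scores high against an orange reference.
reference = color_histogram([(255, 128, 0)] * 100)
candidate = color_histogram([(255, 128, 0)] * 90 + [(0, 0, 255)] * 10)
assert histogram_intersection(reference, candidate) > 0.8
```

In practice the histogram would be computed in a color space more robust to lighting variation (e.g. HSV or YUV); the RGB quantization here is only for brevity.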
Abstract: For many decades, automatic facial expression recognition has been considered a challenging problem in the fields of pattern recognition and robotic vision. This research proposes Relevance Vector Machines (RVM) as a novel classification technique for recognizing facial expressions in static images. Aspects related to the use of Support Vector Machines are also presented. The test data were selected from the Cohn-Kanade Facial Expression Database. Across a range of experiments, we report a 90.84% recognition rate with RVM for the six universal expressions. A discussion comparing different classification methods is included.
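Both RVM and SVM classifiers share the same kernel-expansion decision form, y(x) = sign(Σₙ wₙ K(x, xₙ) + b); the RVM obtains a sparse weight vector via Bayesian relevance determination, which is beyond the scope of a short sketch. The toy below uses a kernel perceptron, a deliberately simplified stand-in, purely to illustrate that shared decision function on two clusters standing in for two expression classes; none of it comes from the paper.

```python
# Hedged sketch: a kernel perceptron illustrating the kernel-expansion
# decision function y(x) = sign(sum_n alpha_n * y_n * K(x, x_n)) that
# RVM and SVM share. The RVM's sparse Bayesian training is NOT shown.
import math

def rbf(a, b, gamma=1.0):
    """Gaussian (RBF) kernel between two feature vectors."""
    return math.exp(-gamma * sum((x - y) ** 2 for x, y in zip(a, b)))

def train_kernel_perceptron(X, y, epochs=10):
    alpha = [0.0] * len(X)  # per-sample weights (analogue of w_n)
    for _ in range(epochs):
        for i, (xi, yi) in enumerate(zip(X, y)):
            pred = sum(a * yj * rbf(xj, xi)
                       for a, xj, yj in zip(alpha, X, y))
            if yi * pred <= 0:  # misclassified: strengthen this sample
                alpha[i] += 1.0
    return alpha

def predict(x, X, y, alpha):
    s = sum(a * yj * rbf(xj, x) for a, xj, yj in zip(alpha, X, y))
    return 1 if s > 0 else -1

# Toy 2D data: two clusters standing in for two expression classes.
X = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.0), (0.9, 1.1)]
y = [-1, -1, 1, 1]
alpha = train_kernel_perceptron(X, y)
assert predict((0.05, 0.1), X, y, alpha) == -1
assert predict((0.95, 1.0), X, y, alpha) == 1
```

In a real facial-expression pipeline, X would hold extracted face features (e.g. appearance or geometric descriptors) rather than raw 2D points, and the perceptron update would be replaced by the RVM's evidence maximization.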
Abstract: Advances in network technologies enable distributed systems, operating in complex physical environments, to coordinate their activities over larger areas within shorter time intervals. Envisioned application domains for such systems include defence, crisis management, traffic management and public safety. In these systems, humans and machines will, in close interaction, adapt to a changing environment. Various architecture models have been proposed for such Networked Adaptive Interactive Hybrid Systems (NAIHS) in research areas such as (networked) sensor fusion, command and control, artificial intelligence, robotics and human-machine interaction. In this paper, an architecture model is proposed that seeks to combine their merits. The NAIHS model focuses on the ‘hybrid mind’, which is layered in several dimensions defining specific functional components and their interactions. Subsequently, the interaction between the human and artificial parts of the system is discussed.
Abstract: The abilities of mobile robots depend greatly on the performance of basic skills such as vision and localization. Although great progress has been made in the 4-Legged League in recent years, the performance of many of these approaches depends entirely on the artificial environment conditions established on a 4-Legged soccer field. In this article, an algorithm is introduced that provides localization information based on the natural appearance of the field's surroundings. The algorithm starts by scanning the surroundings, turning the head and body of the robot at a certain spot. The robot learns the appearance of the surroundings at that spot by storing color transitions at different angles in a panoramic index. The stored panoramic appearance can then be used to determine the rotation (including a confidence value) relative to the learned spot for other points on the field. The applicability of this kind of localization in more natural environments is demonstrated in two environments other than the official 4-Legged League field.
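The rotation-estimation step described above can be sketched as circular matching over the panoramic index. The sketch below is an illustration under stated assumptions, not the paper's code: the panorama is a circular list of color transitions, one per angular bin, and the relative rotation is the circular shift that best aligns a new scan with the learned panorama, with the score margin serving as a confidence value.

```python
# Illustrative sketch (assumptions, not the paper's code): recover the
# relative rotation between a learned panorama and a new scan by finding
# the circular shift with the highest fraction of matching transitions.

def best_rotation(learned, observed):
    """Return (shift_in_angular_bins, confidence).

    `learned` and `observed` are equal-length circular lists of hashable
    color transitions (e.g. ('green', 'white')), one per angular bin.
    """
    n = len(learned)
    scores = []
    for shift in range(n):
        match = sum(learned[(i + shift) % n] == observed[i] for i in range(n))
        scores.append(match / n)
    best = max(range(n), key=lambda s: scores[s])
    # Confidence: margin of the best score over the mean score, in [0, 1].
    confidence = scores[best] - sum(scores) / n
    return best, confidence

# A panorama with 8 angular bins; the observation is rotated by 3 bins.
learned = [('g', 'w'), ('w', 'b'), ('b', 'g'), ('g', 'y'),
           ('y', 'g'), ('g', 'p'), ('p', 'g'), ('g', 'o')]
observed = learned[3:] + learned[:3]
shift, conf = best_rotation(learned, observed)
assert shift == 3
```

A real panorama would use many more angular bins and noisy color classes, so the confidence margin matters: ambiguous or repetitive surroundings yield a low margin, flagging the rotation estimate as unreliable.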