Evaluating Integration Strategies for Visuo-Haptic Object Recognition

recorded and edited: 01 Jun 2018

In computational systems for visuo-haptic object recognition, vision and haptics are often modeled as separate processes. This is far from what happens in the human brain, where crossmodal as well as multimodal interactions take place between the two sensory modalities. Generally, three main principles can be identified as underlying the processing of visual and haptic object-related stimuli in the brain: (1) hierarchical processing, (2) the divergence of processing into substreams for object shape and material perception, and (3) the experience-driven self-organization of the integrative neural circuits. This raises the question of whether an object recognition system can benefit, in terms of performance, from adopting these brain-inspired processing principles for integrating visual and haptic inputs. To address this, we compare an integration strategy that incorporates all three principles to two integration strategies commonly used in the literature. We collected data with a NAO robot enhanced with inexpensive contact microphones as tactile sensors. The results of our experiments with everyday objects indicate that (1) contact microphones are a good, low-cost alternative for capturing tactile information and that (2) organizing the processing of the visual and haptic inputs hierarchically and in two pre-processing streams improves recognition performance. Nevertheless, further research is needed to quantify the contribution of each identified principle, both on its own and in combination with the others.
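
To make the compared strategies concrete, the following sketch (our own illustration in PyTorch, not the implementation evaluated in the paper; all input dimensions, layer sizes, and class names are placeholders) contrasts two common baselines with a hierarchical two-stream organization. It captures only principles (1) and (2); the experience-driven self-organization of principle (3) is not modeled here.

import torch
import torch.nn as nn

V, H, C = 128, 32, 10  # visual dim, haptic dim, classes (placeholders)

class LateFusion(nn.Module):
    # Common baseline 1: each modality is classified separately and the
    # decisions are merged only at the very end.
    def __init__(self):
        super().__init__()
        self.vis = nn.Sequential(nn.Linear(V, 64), nn.ReLU(), nn.Linear(64, C))
        self.hap = nn.Sequential(nn.Linear(H, 64), nn.ReLU(), nn.Linear(64, C))
    def forward(self, v, h):
        return 0.5 * (self.vis(v) + self.hap(h))

class EarlyFusion(nn.Module):
    # Common baseline 2: raw feature vectors are concatenated and then
    # processed jointly by a single pipeline.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(V + H, 64), nn.ReLU(),
                                 nn.Linear(64, C))
    def forward(self, v, h):
        return self.net(torch.cat([v, h], dim=-1))

class HierarchicalTwoStream(nn.Module):
    # Brain-inspired variant: per-modality hierarchies feed two crossmodal
    # substreams, one for shape and one for material, merged at the top.
    def __init__(self):
        super().__init__()
        self.vis = nn.Sequential(nn.Linear(V, 64), nn.ReLU())  # visual hierarchy
        self.hap = nn.Sequential(nn.Linear(H, 64), nn.ReLU())  # haptic hierarchy
        self.shape = nn.Sequential(nn.Linear(128, 32), nn.ReLU())
        self.material = nn.Sequential(nn.Linear(128, 32), nn.ReLU())
        self.head = nn.Linear(64, C)
    def forward(self, v, h):
        x = torch.cat([self.vis(v), self.hap(h)], dim=-1)
        return self.head(torch.cat([self.shape(x), self.material(x)], dim=-1))

All three take a visual and a haptic feature vector and return class logits, so they can be dropped into the same training loop and compared directly.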

Evaluating Integration Strategies for Visuo-Haptic Object Recognition
Cognitive Computation. vol. 10, no. 3, pp. 408–425, Jun 2018
Toprak, Sibel; Navarro-Guerrero, Nicolás; Wermter, Stefan
doi, url, ©2017 The Authors., PDF, bibtex, key: Toprak2018Evaluating, supplementary material

The Impact of Personalisation on Human-Robot Interaction in Learning Scenarios

recorded and edited: 01 Mar 2017

This video presents an interaction scenario realised with the Neuro-Inspired Companion (NICO) robot. NICO engages users in a personalised conversation: the robot continuously tracks the users' faces, remembers them, and interacts with them using natural language. NICO can also learn to perform tasks such as recognising and recalling objects, and can thus assist users in their daily chores. The interaction system lets users interact with the robot as naturally as possible, enriching their experience and making it more engaging. The video presents the different methodologies used to implement the interaction scenario and their interplay. It then shows NICO interacting with a user, drawing on its visual and auditory capabilities to hold an engaging conversation.
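
As a toy illustration of the "remembers them" aspect (our own sketch, not the NICO software stack; the embedding source, the name, and the similarity threshold are all made up), a returning user can be recognised by matching a face embedding against stored ones:

import numpy as np

memory = {}  # user name -> stored face embedding

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def greet(embedding, threshold=0.9):
    # Find the best-matching known user, if any.
    best = max(memory.items(), key=lambda kv: cosine(kv[1], embedding),
               default=None)
    if best and cosine(best[1], embedding) >= threshold:
        return f"Nice to see you again, {best[0]}!"
    name = "Alex"  # in the real scenario the robot would ask for the name
    memory[name] = embedding
    return f"Hello {name}, nice to meet you!"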

The Impact of Personalisation on Human-Robot Interaction in Learning Scenarios
International Conference on Human-Agent Interaction (HAI). pp. 171–180, Bielefeld, Germany, Oct 2017
Churamani, Nikhil; Anton, Paul; Brügger, Marc; Fließwasser, Erik; Hummel, Thomas; Mayer, Julius; Mustafa, Waleed; Ng, Hwei Geok; Nguyen, Quan; Soll, Marcus; Springenberg, Sebastian; Griffiths, Sascha; Heinrich, Stefan; Navarro-Guerrero, Nicolás; Strahl, Erik; Twiefel, Johannes; Weber, Cornelius; Wermter, Stefan
doi, url, ©2017 The Authors., PDF, bibtex, key: Churamani2017Impact

Hey Robot, Why Don’t You Talk to Me?
IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). pp. 728–731, Lisbon, Portugal, Aug 2017
Ng, Hwei Geok; Anton, Paul; Brügger, Marc; Churamani, Nikhil; Fließwasser, Erik; Hummel, Thomas; Mayer, Julius; Mustafa, Waleed; Nguyen, Thi Linh Chi; Nguyen, Quan; Soll, Marcus; Springenberg, Sebastian; Griffiths, Sascha; Heinrich, Stefan; Navarro-Guerrero, Nicolás; Strahl, Erik; Twiefel, Johannes; Weber, Cornelius; Wermter, Stefan
doi, url, PDF, bibtex, key: Ng2017Hey, slides

A Robotic Home Assistant with Memory Aid Functionality

recorded and edited: 01 May 2016

We present a robotic system that assists humans in their search for misplaced belongings within a natural home-like environment. Our stand-alone system integrates state-of-the-art approaches in a novel manner to achieve seamless and intuitive human-robot interaction. The robot orients its gaze towards the speaker and understands the person's verbal instructions independently of specific grammatical constructions. It determines the positions of relevant objects and navigates collision-free within the environment. In addition, it produces natural-language descriptions of the objects' positions, using pieces of furniture as reference points.
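
The last step, describing an object's position relative to furniture, can be sketched as follows (a deliberately simplified illustration, not the system's actual implementation; the furniture map, distance threshold, and phrasing rules are invented):

import math

# Toy furniture map: name -> (x, y) position in metres (made-up values).
FURNITURE = {"sofa": (1.0, 2.0), "table": (3.5, 0.5), "shelf": (0.2, 4.0)}

def describe_position(obj_name, obj_pos):
    # Pick the nearest piece of furniture as the reference point.
    ref_name, ref_pos = min(FURNITURE.items(),
                            key=lambda kv: math.dist(kv[1], obj_pos))
    if math.dist(ref_pos, obj_pos) < 0.5:
        relation = "on"
    elif obj_pos[0] >= ref_pos[0]:
        relation = "to the right of"
    else:
        relation = "to the left of"
    return f"The {obj_name} is {relation} the {ref_name}."

print(describe_position("cup", (1.2, 2.1)))  # The cup is on the sofa.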

A Robotic Home Assistant with Memory Aid Functionality
KI 2016: Advances in Artificial Intelligence. vol. 9904 of LNCS, pp. 102–115, Klagenfurt, Austria, Sep 2016
Wieser, Iris; Toprak, Sibel; Grenzing, Andreas; Hinz, Tobias; Auddy, Sayantan; Karaoğuz, Ethem Can; Chandran, Abhilash; Remmels, Melanie; Shinawi, Ahmed El; Josifovski, Josip; Vankadara, Leena Chennuru; Wahab, Faiz Ul; Bahnemiri, Alireza M.; Sahu, Debasish; Heinrich, Stefan; Navarro-Guerrero, Nicolás; Strahl, Erik; Twiefel, Johannes; Wermter, Stefan
doi, url, ©2016 Springer International Publishing AG, PDF, bibtex, key: Wieser2016Robotic, supplementary material

Cognition Inspired Service Robotics

recorded and edited: 01 Sep 2012

We present a novel framework for mobile robot behaviour based on cognitive learning. Our approach builds up a cognitive map (Yan et al., 2012) that learns a sensorimotor representation and the salient visual features of an environment through exploratory navigation. The robot can find the position of a target object by comparing the object's features with the appearance of locations in the map, and it navigates efficiently and robustly through a home-like environment. A vision-based docking model, trained with reinforcement learning, aligns the robot accurately with the object (Navarro-Guerrero et al., 2012). After the docking phase, a SOM-based grasping model grasps the object so that it can be carried to the user. Overall, we demonstrate and test our system in a real-world object-fetching scenario.
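
As a minimal illustration of the self-organizing map (SOM) ingredient (our own toy NumPy implementation, not the grasping model from the video; the grid size, learning rate, and neighbourhood schedule are arbitrary), the core SOM training rule looks like this:

import numpy as np

rng = np.random.default_rng(0)
GRID, DIM = (8, 8), 2                     # 8x8 neuron grid over 2-D inputs

def train_som(samples, epochs=20, lr0=0.5, sigma0=3.0):
    weights = rng.random(GRID + (DIM,))   # random initial codebook
    ii, jj = np.meshgrid(np.arange(GRID[0]), np.arange(GRID[1]),
                         indexing="ij")
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)               # decaying learning rate
        sigma = sigma0 * (1.0 - t / epochs) + 0.5   # shrinking neighbourhood
        for x in samples:
            # Best-matching unit: neuron whose weights are closest to x.
            d = np.linalg.norm(weights - x, axis=-1)
            bi, bj = np.unravel_index(np.argmin(d), GRID)
            # Pull the BMU and its grid neighbours towards the input.
            g = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
    return weights

codebook = train_som(rng.random((200, DIM)))  # toy positions in [0, 1]^2

After training, each neuron's weight vector represents a prototypical input, so nearby inputs activate nearby neurons; in the grasping model, such a map links perceived object positions to motor configurations.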

Real-World Reinforcement Learning for Autonomous Humanoid Robot Docking
Robotics and Autonomous Systems. vol. 60, no. 11, pp. 1400–1407, Nov 2012
Navarro-Guerrero, Nicolás; Weber, Cornelius; Schroeter, Pascal; Wermter, Stefan
doi, url, ©2012 Elsevier B.V. All rights reserved., PDF, bibtex, key: Navarro-Guerrero2012Real, source code

A Neural Approach for Robot Navigation Based on Cognitive Map Learning
International Joint Conference on Neural Networks (IJCNN). pp. 1146–1153, Brisbane, QLD, Australia, Jun 2012
Yan, Wenjie; Weber, Cornelius; Wermter, Stefan
doi, url, ©2012 IEEE, key: Yan2012Neural