recorded and edited: 01 Mar 2017
This video presents an interaction scenario realised with the Neuro-Inspired Companion (NICO) robot. NICO engages users in a personalised conversation: the robot tracks each user's face, remembers them, and interacts with them using natural language. NICO can also learn to perform tasks such as remembering and recalling objects, and can thus assist users in their daily chores. The interaction system lets users interact as naturally as possible with the robot, enriching their experience and making it more engaging. The video presents the different methodologies used to implement the interaction scenario and their interplay. It then shows the NICO robot interacting with a user, using its visual and auditory capabilities to hold an engaging conversation.
The Impact of Personalization on Human-Robot Interaction in Learning Scenarios
International Conference on Human-Agent Interaction (HAI). pp. 171-180, Bielefeld, Germany, Oct 2017.
Churamani, Nikhil; Anton, Paul; Brügger, Marc; Fließwasser, Erik; Hummel, Thomas; Mayer, Julius; Mustafa, Waleed; Ng, Hwei Geok; Nguyen, Quan; Soll, Marcus; Springenberg, Sebastian; Griffiths, Sascha; Heinrich, Stefan; Navarro-Guerrero, Nicolás; Strahl, Erik; Twiefel, Johannes; Weber, Cornelius; Wermter, Stefan;
Hey Robot, Why Don’t You Talk to Me?
IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). pp. 728-731, Lisbon, Portugal, Aug 2017.
Ng, Hwei Geok; Anton, Paul; Brügger, Marc; Churamani, Nikhil; Fließwasser, Erik; Hummel, Thomas; Mayer, Julius; Mustafa, Waleed; Nguyen, Thi Linh Chi; Nguyen, Quan; Soll, Marcus; Springenberg, Sebastian; Griffiths, Sascha; Heinrich, Stefan; Navarro-Guerrero, Nicolás; Strahl, Erik; Twiefel, Johannes; Weber, Cornelius; Wermter, Stefan;
recorded and edited: 01 May 2016
We present a robotic system that assists humans in their search for misplaced belongings within a natural home-like environment. Our stand-alone system integrates state-of-the-art approaches in a novel manner to achieve seamless and intuitive human-robot interaction. The robot orients its gaze towards the speaker and understands the person's verbal instructions independently of specific grammatical constructions. It determines the positions of relevant objects and navigates collision-free within the environment. In addition, it produces natural language descriptions of the objects' positions, using furniture as reference points.
A Robotic Home Assistant with Memory Aid Functionality
KI 2016: Advances in Artificial Intelligence. vol. 9904 of LNCS, pp. 102-115, Klagenfurt, Austria, Sep 2016.
Wieser, Iris; Toprak, Sibel; Grenzing, Andreas; Hinz, Tobias; Auddy, Sayantan; Karaoğuz, Ethem Can; Chandran, Abhilash; Remmels, Melanie; Shinawi, Ahmed El; Josifovski, Josip; Vankadara, Leena Chennuru; Wahab, Faiz Ul; Bahnemiri, Alireza M.; Sahu, Debasish; Heinrich, Stefan; Navarro-Guerrero, Nicolás; Strahl, Erik; Twiefel, Johannes; Wermter, Stefan;
recorded and edited: 01 Sep 2012
We present a novel framework for mobile robot behaviour based on cognitive learning. Our approach builds a cognitive map (Yan et al., 2012) that learns a sensorimotor representation and the salient visual features of an environment through exploratory navigation. The robot can find the position of a target object by comparing the object's features with the appearance of locations in the map, and can navigate efficiently and robustly through a home-like environment. A vision-based docking model, trained with reinforcement learning, aligns the robot accurately to the object (Navarro-Guerrero et al., 2012). After the docking phase, a SOM-based grasping model grasps the object so that it can be carried to the user. Overall, we demonstrate and test our system in a real-world object-fetching scenario.
Real-World Reinforcement Learning for Autonomous Humanoid Robot Docking
Robotics and Autonomous Systems. vol. 60, no. 11, pp. 1400-1407, Nov 2012.
Navarro-Guerrero, Nicolás; Weber, Cornelius; Schroeter, Pascal; Wermter, Stefan;
A Neural Approach for Robot Navigation Based on Cognitive Map Learning
International Joint Conference on Neural Networks (IJCNN). pp. 1146-1153, Brisbane, QLD, Australia, Jun 2012.
Yan, Wenjie; Weber, Cornelius; Wermter, Stefan;