ACROSS: A Deformation-Based Cross-Modal Representation for Robotic Tactile Perception

recorded and edited: 19 May 2025

Tactile perception is essential for human interaction with the environment and is becoming increasingly crucial in robotics. Tactile sensors like the BioTac mimic human fingertips and provide detailed interaction data. Despite its utility in applications like slip detection and object identification, the BioTac is now deprecated, making many existing valuable datasets obsolete. However, recreating similar datasets with newer sensor technologies is both tedious and time-consuming. Therefore, it is crucial to adapt these existing datasets for use with new setups and modalities. In response, we introduce ACROSS, a novel framework for translating data between tactile sensors by exploiting sensor deformation information. Our framework first converts the input signals into a 3D deformation mesh, then maps the deformation mesh of one sensor to that of the other, and finally converts the generated 3D deformation mesh into the output space of the target sensor. We demonstrate our approach on the most challenging direction: going from a low-dimensional tactile representation to a high-dimensional one, namely transferring the tactile signals of a BioTac sensor to DIGIT tactile images. Our approach enables the continued use of valuable datasets and the exchange of data between groups with different setups.
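
The three-stage pipeline could be sketched as follows. This is an illustrative reconstruction, not the authors' code: the plain MLP modules, the 19-dimensional BioTac electrode input, the mesh vertex count, and the 240x320 DIGIT image resolution are all assumptions.

```python
# Illustrative sketch of the three ACROSS stages (not the authors' code).
import torch
import torch.nn as nn

N_VERTS = 1024  # hypothetical vertex count of a sensor deformation mesh

class SignalToMesh(nn.Module):
    """Stage 1: low-dimensional BioTac signals -> BioTac deformation mesh."""
    def __init__(self, n_signals=19):  # 19 electrode values (assumption)
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_signals, 256), nn.ReLU(),
                                 nn.Linear(256, N_VERTS * 3))

    def forward(self, signals):
        return self.net(signals).view(-1, N_VERTS, 3)

class MeshToMesh(nn.Module):
    """Stage 2: BioTac deformation mesh -> DIGIT deformation mesh."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_VERTS * 3, 512), nn.ReLU(),
                                 nn.Linear(512, N_VERTS * 3))

    def forward(self, mesh):
        return self.net(mesh.flatten(1)).view(-1, N_VERTS, 3)

class MeshToImage(nn.Module):
    """Stage 3: DIGIT deformation mesh -> DIGIT tactile image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_VERTS * 3, 512), nn.ReLU(),
                                 nn.Linear(512, 3 * 240 * 320))

    def forward(self, mesh):
        return self.net(mesh.flatten(1)).view(-1, 3, 240, 320)

biotac_signals = torch.randn(1, 19)  # one BioTac reading
digit_image = MeshToImage()(MeshToMesh()(SignalToMesh()(biotac_signals)))
print(digit_image.shape)             # torch.Size([1, 3, 240, 320])
```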

ACROSS: A Deformation-Based Cross-Modal Representation for Robotic Tactile Perception
IEEE International Conference on Robotics and Automation (ICRA). pp. 1-8, Atlanta, GA, USA, May 2025
Zai El Amri, Wadhah; Kuhlmann, Malte; Navarro-Guerrero, Nicolás
doi, url, ©2024 The Authors., PDF, bibtex, key: ZaiElAmri2025DeformationBased, supplementary material,

Transferring Tactile Data Across Sensors
40th Anniversary of the IEEE Conference on Robotics and Automation (ICRA@40). pp. 1540-1542, Rotterdam, The Netherlands, Sep 2024
Zai El Amri, Wadhah; Kuhlmann, Malte; Navarro-Guerrero, Nicolás
doi, url, ©2024 The Authors., PDF, bibtex, key: ZaiElAmri2024Transferring,

Continual Domain Randomization

recorded and edited: 14 Oct 2024

Domain Randomization (DR) is commonly used for sim2real transfer of reinforcement learning (RL) policies in robotics. Most DR approaches require a simulator with a fixed set of tunable parameters from the start of training, all of which are randomized simultaneously to train a robust model for use in the real world. However, the combined randomization of many parameters increases the task difficulty and might result in sub-optimal policies. To address this problem and to provide a more flexible training process, we propose Continual Domain Randomization (CDR) for RL, which combines domain randomization with continual learning to enable sequential training in simulation on a subset of randomization parameters at a time. Starting from a model trained in a non-randomized simulation, where the task is easier to solve, the model is trained on a sequence of randomizations, and continual learning is employed to remember the effects of previous randomizations. Our experiments on robotic reaching and grasping tasks show that a model trained in this fashion learns effectively in simulation and performs robustly on the real robot, matching or outperforming baselines that employ combined randomization or sequential randomization without continual learning. Our code and videos are available at this https URL.
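
A minimal sketch of the CDR training schedule might look as follows. The phase definitions, parameter names, and the EWC-style consolidation step are assumptions; the abstract only specifies the general recipe (sequential randomization plus continual learning).

```python
# Minimal sketch of the CDR schedule (illustrative; the parameter subsets
# and the anchor-based consolidation stand in for the paper's methods).
import random

PHASES = [
    {},                                   # phase 0: non-randomized simulation
    {"object_pose_noise": (0.0, 0.05)},   # phase 1: one randomization subset
    {"light_intensity":   (0.5, 1.5)},    # phase 2: the next subset, etc.
]

def sample_sim_params(phase):
    """Randomize only the parameters belonging to the current phase."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in phase.items()}

def train_phase(policy, phase, anchors, steps=1000):
    for _ in range(steps):
        sim_params = sample_sim_params(phase)
        # RL rollout + update would go here; `anchors` would add a
        # continual-learning penalty (e.g. EWC-like) that pulls the
        # weights towards the solutions of earlier phases.
        pass
    return policy

policy, anchors = {}, []
for phase in PHASES:
    policy = train_phase(policy, phase, anchors)
    anchors.append(dict(policy))  # consolidate after each phase
```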

Continual Domain Randomization
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). pp. 1-8, Abu Dhabi, United Arab Emirates, Oct 2024
Josifovski, Josip; Auddy, Sayantan; Malmir, Mohammadhossein; Piater, Justus; Knoll, Alois; Navarro-Guerrero, Nicolás
doi, url, ©2024 The Authors., PDF, bibtex, key: Josifovski2024Continual, supplementary material,

A Biomimetic Fingerprint for Robotic Tactile Sensing

recorded and edited: 01 Sep 2023

Tactile sensors have been developed since the early 1970s and have improved greatly, but no solution has been widely adopted. Various technologies, such as capacitive, piezoelectric, piezoresistive, optical, and magnetic, are used in haptic sensing. However, most sensors are not mechanically robust enough for many applications and cannot cope well with curved or sizeable surfaces. To address this problem, we present a 3D printed fingerprint pattern that enhances the body-borne vibration signal for dynamic tactile feedback. The 3D printed fingerprint patterns were designed and tested on an RH8D Adult size Robot Hand. The best pattern significantly increased the signal power to over 11 times the baseline. Using the best fingerprint pattern and material, we created a public haptic dataset covering 52 objects of several materials.
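
The reported effect can be illustrated with a simple power comparison on synthetic vibration signals; the signals and amplitudes below are invented, and only the power metric reflects the kind of measurement reported.

```python
# Toy illustration of a vibration power comparison (synthetic signals;
# the ~11x gain is the paper's reported figure, not measured by this code).
import numpy as np

def mean_power(x):
    """Mean power of a zero-mean vibration signal."""
    x = x - x.mean()
    return float(np.mean(x ** 2))

rng = np.random.default_rng(0)
flat = 0.10 * rng.standard_normal(10_000)    # smooth fingertip (hypothetical)
ridged = 0.35 * rng.standard_normal(10_000)  # patterned fingertip (hypothetical)
print(mean_power(ridged) / mean_power(flat))  # power ratio on the order of 11x
```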

A Biomimetic Fingerprint for Robotic Tactile Sensing
International Symposium on Robotics (ISR Europe). pp. 112-118, Stuttgart, Germany, Sep 2023
Juiña Quilachamín, Oscar Alberto; Navarro-Guerrero, Nicolás
doi, url, ©2023 The Authors., PDF, bibtex, key: JuinaQuilachamin2023Fingerprint, supplementary material,

Analysis of Randomization Effects on Sim2Real Transfer in Reinforcement Learning for Robotic Manipulation Tasks

recorded and edited: 23 Oct 2022

Current state-of-the-art approaches for transferring deep-learning models trained in simulation either rely on highly realistic simulations or employ randomization techniques to bridge the reality gap. However, such strategies do not scale well for complex robotic tasks. Highly realistic simulations are computationally expensive and challenging to implement, while randomization techniques become sample-inefficient as the complexity of the task increases. This paper proposes a procedure for training on incremental simulations in a continual learning setup. We develop a simulation platform for the experimental analysis that can serve as a training environment and as a benchmark for continual and reinforcement learning sim2real approaches. The results show that training time for complex tasks can be reduced. Thus, we argue that Sequentially-Randomized Simulations improve sim2real transfer.
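
The two training regimes under comparison can be sketched as schedules; the parameter names and the even stage split below are invented for illustration.

```python
# Sketch contrasting the two randomization regimes analysed in the paper
# (parameter names and stage lengths are illustrative assumptions).
PARAMS = ["object_position", "object_colour", "light_position", "camera_pose"]

def combined_schedule(total_steps):
    """Baseline DR: every parameter is randomized from the first step."""
    return [(PARAMS, total_steps)]

def sequential_schedule(total_steps):
    """Incremental simulations: start non-randomized, add one
    randomization per stage, training continually across stages."""
    per_stage = total_steps // (len(PARAMS) + 1)
    stages = [([], per_stage)]  # stage 0: plain simulation
    for i in range(len(PARAMS)):
        stages.append((PARAMS[: i + 1], per_stage))
    return stages

for active, steps in sequential_schedule(50_000):
    print(f"train {steps} steps with randomized: {active or 'nothing'}")
```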

Analysis of Randomization Effects on Sim2Real Transfer in Reinforcement Learning for Robotic Manipulation Tasks
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). pp. 10193-10200, Kyoto, Japan, Oct 2022
Josifovski, Josip; Malmir, Mohammadhossein; Klarmann, Noah; Zagar, Bare Luka; Navarro-Guerrero, Nicolás; Knoll, Alois
doi, url, ©2022 The Authors., PDF, bibtex, key: Josifovski2022Analysis, supplementary material,

Evaluating Integration Strategies for Visuo-Haptic Object Recognition

recorded and edited: 01 Jun 2018

In computational systems for visuo-haptic object recognition, vision and haptics are often modeled as separate processes. However, this is far from what happens in the human brain, where cross-modal as well as multimodal interactions take place between the two sensory modalities. Generally, three main principles underlie the processing of visual and haptic object-related stimuli in the brain: (1) hierarchical processing, (2) the divergence of processing into substreams for object shape and material perception, and (3) the experience-driven self-organization of the integratory neural circuits. The question arises whether an object recognition system can benefit in terms of performance from adopting these brain-inspired processing principles for integrating visual and haptic inputs. To address this, we compare an integration strategy that incorporates all three principles against the two integration strategies commonly used in the literature, illustrated in the sketch below. We collected data with a NAO robot enhanced with inexpensive contact microphones as tactile sensors. The results of our experiments with everyday objects indicate that (1) contact microphones are a good alternative for capturing tactile information and that (2) organizing the processing of the visual and haptic inputs hierarchically and in two pre-processing streams improves performance. Nevertheless, further research is needed to quantify the role of each identified principle by itself as well as in combination with others.
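
A minimal sketch of the first two principles: hierarchical per-modality processing that diverges into shape and material substreams, each fused across modalities before a shared classifier. The dense layers and dimensions are stand-ins; in particular, this supervised sketch does not capture principle (3), the experience-driven self-organization.

```python
# Illustrative sketch of a brain-inspired integration strategy
# (layer types and sizes are assumptions, not the paper's architecture).
import torch
import torch.nn as nn

def mlp(n_in, n_out):
    return nn.Sequential(nn.Linear(n_in, 128), nn.ReLU(),
                         nn.Linear(128, n_out), nn.ReLU())

class BrainInspiredNet(nn.Module):
    """(1) hierarchical encoders, (2) shape/material substreams fused
    per property across the two modalities, then a shared classifier."""
    def __init__(self, vis_dim=512, hap_dim=128, n_classes=10):
        super().__init__()
        self.vis_shape, self.vis_material = mlp(vis_dim, 64), mlp(vis_dim, 64)
        self.hap_shape, self.hap_material = mlp(hap_dim, 64), mlp(hap_dim, 64)
        self.shape_fuse = mlp(128, 64)     # integrate shape across modalities
        self.material_fuse = mlp(128, 64)  # integrate material across modalities
        self.head = nn.Linear(128, n_classes)

    def forward(self, vis, hap):
        s = self.shape_fuse(torch.cat([self.vis_shape(vis),
                                       self.hap_shape(hap)], -1))
        m = self.material_fuse(torch.cat([self.vis_material(vis),
                                          self.hap_material(hap)], -1))
        return self.head(torch.cat([s, m], -1))

logits = BrainInspiredNet()(torch.randn(1, 512), torch.randn(1, 128))
```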

Evaluating Integration Strategies for Visuo-Haptic Object Recognition
Cognitive Computation. vol. 10, no. 3, pp. 408–425, Jun 2018
Toprak, Sibel; Navarro-Guerrero, Nicolás; Wermter, Stefan
doi, url, ©2017 The Authors., PDF, bibtex, key: Toprak2018Evaluating, supplementary material,

The Impact of Personalisation on Human-Robot Interaction in Learning Scenarios

recorded and edited: 01 Mar 2017

This video presents an interaction scenario realised with the Neuro-Inspired Companion (NICO) robot. NICO engages users in a personalised conversation: the robot continuously tracks the users' faces, remembers them, and interacts with them using natural language. NICO can also learn to perform tasks such as recognising and recalling objects and can thus assist users in their daily chores. The interaction system lets users interact as naturally as possible with the robot, enriching their experience and making it more interesting and engaging. The video presents the different methodologies used to implement the interaction scenario and their interplay. It then shows the NICO robot interacting with a user, using its visual and auditory capabilities to engage the user in conversation.

The Impact of Personalisation on Human-Robot Interaction in Learning Scenarios
International Conference on Human-Agent Interaction (HAI). pp. 171-180, Bielefeld, Germany, Oct 2017
Churamani, Nikhil; Anton, Paul; Brügger, Marc; Fließwasser, Erik; Hummel, Thomas; Mayer, Julius; Mustafa, Waleed; Ng, Hwei Geok; Nguyen, Quan; Soll, Marcus; Springenberg, Sebastian; Griffiths, Sascha; Heinrich, Stefan; Navarro-Guerrero, Nicolás; Strahl, Erik; Twiefel, Johannes; Weber, Cornelius; Wermter, Stefan
doi, url, ©2017 The Authors., PDF, bibtex, key: Churamani2017Impact,

Hey Robot, Why Don’t You Talk to Me?
IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). pp. 728-731, Lisbon, Portugal, Aug 2017
Ng, Hwei Geok; Anton, Paul; Brügger, Marc; Churamani, Nikhil; Fließwasser, Erik; Hummel, Thomas; Mayer, Julius; Mustafa, Waleed; Nguyen, Thi Linh Chi; Nguyen, Quan; Soll, Marcus; Springenberg, Sebastian; Griffiths, Sascha; Heinrich, Stefan; Navarro-Guerrero, Nicolás; Strahl, Erik; Twiefel, Johannes; Weber, Cornelius; Wermter, Stefan
doi, url, PDF, bibtex, key: Ng2017Hey, slides,

A Robotic Home Assistant with Memory Aid Functionality

recorded and edited: 01 May 2016

We present a robotic system that assists humans in their search for misplaced belongings within a natural home-like environment. Our stand-alone system integrates state-of-the-art approaches in a novel manner to achieve seamless and intuitive human-robot interaction. The robot orients its gaze towards the speaker and understands the person's verbal instructions independently of specific grammatical constructions. It determines the positions of relevant objects and navigates collision-free within the environment. In addition, it produces natural language descriptions of the objects' positions using furniture as reference points.
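
The furniture-as-reference idea can be illustrated with a toy nearest-landmark lookup; the room geometry, distance threshold, and phrasing below are all invented.

```python
# Toy sketch of furniture-referenced position descriptions
# (landmark positions, threshold, and templates are made up).
import math

FURNITURE = {"couch": (1.0, 0.5), "table": (3.0, 2.0), "shelf": (0.0, 3.5)}

def describe(obj_name, obj_xy):
    """Describe an object's position relative to the nearest furniture."""
    ref, ref_xy = min(FURNITURE.items(),
                      key=lambda kv: math.dist(kv[1], obj_xy))
    relation = "on" if math.dist(ref_xy, obj_xy) < 0.5 else "next to"
    return f"The {obj_name} is {relation} the {ref}."

print(describe("mug", (0.9, 0.45)))  # -> "The mug is on the couch."
```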

A Robotic Home Assistant with Memory Aid Functionality
KI 2016: Advances in Artificial Intelligence. vol. 9904 of LNCS, pp. 102-115, Klagenfurt, Austria, Sep 2016
Wieser, Iris; Toprak, Sibel; Grenzing, Andreas; Hinz, Tobias; Auddy, Sayantan; Karaoğuz, Ethem Can; Chandran, Abhilash; Remmels, Melanie; Shinawi, Ahmed El; Josifovski, Josip; Vankadara, Leena Chennuru; Wahab, Faiz Ul; Bahnemiri, Alireza M.; Sahu, Debasish; Heinrich, Stefan; Navarro-Guerrero, Nicolás; Strahl, Erik; Twiefel, Johannes; Wermter, Stefan
doi, url, Copyright (©) 2016 Springer International Publishing AG, PDF, bibtex, key: Wieser2016Robotic, supplementary material,

Cognition Inspired Service Robotics

recorded and edited: 01 Sep 2012

We present a novel framework for mobile robot behaviour based on cognitive learning. Our approach builds up a cognitive map (Yan et al., 2012) that learns a sensorimotor representation and the salient visual features of an environment through exploratory navigation. The robot can find the position of a target object by comparing the object's features with the appearance of each location in the map, and it navigates efficiently and robustly through a home-like environment. A vision-based docking model, trained with reinforcement learning, aligns the robot accurately with the object (Navarro-Guerrero et al., 2012). A SOM-based grasping model is executed after the docking phase to grasp the object so that it can be carried to the user. Overall, we demonstrate and test our system in a real-world object-fetching scenario.
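
The overall fetch pipeline could be expressed as the following control flow; each method name is a hypothetical stand-in for one of the learned components cited above.

```python
# High-level control-flow sketch of the fetch scenario (illustrative only;
# every stub stands in for a learned component from the cited papers).
class FetchPipeline:
    def __init__(self, cognitive_map, navigator, docker, grasper):
        self.map, self.nav = cognitive_map, navigator
        self.dock, self.grasp = docker, grasper

    def fetch(self, object_features, user_location):
        # 1. Match object features against location appearances in the map
        goal = self.map.most_similar_location(object_features)
        # 2. Navigate collision-free to the candidate location
        self.nav.go_to(goal)
        # 3. RL-trained vision-based docking aligns the robot to the object
        self.dock.align(object_features)
        # 4. SOM-based grasping picks up the object
        self.grasp.pick()
        # 5. Carry the object back to the user
        self.nav.go_to(user_location)
```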

Real-World Reinforcement Learning for Autonomous Humanoid Robot Docking
Robotics and Autonomous Systems. vol. 60, no. 11, pp. 1400-1407, Nov 2012
Navarro-Guerrero, Nicolás; Weber, Cornelius; Schroeter, Pascal; Wermter, Stefan
doi, url, ©2012 Elsevier B.V. All rights reserved., PDF, bibtex, key: Navarro-Guerrero2012Real, source code,

A Neural Approach for Robot Navigation Based on Cognitive Map Learning
International Joint Conference on Neural Networks (IJCNN). pp. 1146-1153, Brisbane, QLD, Australia, Jun 2012
Yan, Wenjie; Weber, Cornelius; Wermter, Stefan
doi, url, ©2012 IEEE, key: Yan2012Neural,