Reinforcement learning is one of the key frameworks for open-ended learning, and it is the prime technique for decision making, optimal control, and autonomous systems. Reinforcement learning techniques are at the core of systems such as AlphaGo and of the board-game and eSports bots developed by OpenAI and DeepMind. However, what is often forgotten is that current reinforcement learning algorithms are slow and computationally expensive to train, and largely unsuitable for online learning in embodied autonomous and assistive systems. Thus, we investigate how to make these algorithms fast and efficient enough to be deployed in real autonomous intelligent systems and assistive systems.
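To illustrate the training cost alluded to above, here is a minimal sketch (not the authors' method) of tabular Q-learning on a hypothetical 5-state chain task. Even this toy problem takes thousands of environment interactions to learn a reliable policy, which hints at why deep RL at real-world scale is so expensive; all names and parameters below are illustrative assumptions.

```python
# Illustrative sketch only: tabular Q-learning on a tiny 5-state chain MDP.
# The agent starts at state 0 and receives reward 1 on reaching state 4.
import random

N_STATES = 5          # states 0..4; state 4 is terminal
ACTIONS = [-1, +1]    # action 0 = move left, action 1 = move right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # Epsilon-greedy action selection
            if rng.random() < EPS:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda i: Q[s][i])
            s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # Temporal-difference update toward the bootstrapped target
            Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = train()
# Greedy policy: "move right" (index 1) should win in every non-terminal state.
policy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(N_STATES - 1)]
```

Note that the agent needs on the order of thousands of individual state transitions to solve even this trivial task; real-world control problems with continuous, high-dimensional state spaces magnify this cost enormously.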
Perception systems are crucial for applying any learning algorithm to autonomous systems and automation. For robot reaching and grasping, including smart prosthetics, we develop bio-inspired perceptual systems for haptics and vision, which provide more suitable input to the different stages of the autonomous learning model.
Autonomous intelligent and assistive systems are increasingly permeating our society, both on the factory floor and in our daily lives. As these systems become, willingly or unwillingly, part of our lives, making them smarter will be necessary. However, we argue that this will not be sufficient to guarantee effective and safe interaction. Thus, we study how to endow autonomous assistive systems with learning mechanisms to understand at least some of the users' intentions, as well as adaptive mechanisms to display the systems' own intentions intuitively.