We propose a layered API for modular robots in which AI plays a key role, enhancing primitives at different levels and making robots more capable.
A developer-oriented API powered by ROS 2 and Gazebo, built on top of HRIM. We use AI techniques to accelerate and enhance modules. Examples include accelerating sensor inference and building power-consumption models that let us estimate the cost of different tasks and trajectories even before executing them.
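As a minimal sketch of the idea of pre-execution power estimation (the function name, the trajectories, and the simple mechanical-power model `P = τ·ω` are illustrative assumptions, not the API described above):

```python
# Hypothetical sketch: estimate the energy cost of a joint-space
# trajectory before execution, assuming mechanical power P = tau * omega
# and torque/velocity samples taken at a fixed timestep dt.

def estimate_energy(torques, velocities, dt):
    """Integrate |tau_i * omega_i| over all joints and timesteps (joules)."""
    energy = 0.0
    for tau_step, omega_step in zip(torques, velocities):
        for tau, omega in zip(tau_step, omega_step):
            energy += abs(tau * omega) * dt
    return energy

# Two candidate trajectories for a 2-joint arm, sampled at 10 Hz:
# a slow, low-torque motion versus a fast, high-torque one.
slow = estimate_energy(
    torques=[[1.0, 0.5]] * 20, velocities=[[0.2, 0.2]] * 20, dt=0.1)
fast = estimate_energy(
    torques=[[2.0, 1.0]] * 10, velocities=[[0.4, 0.4]] * 10, dt=0.1)
```

Comparing `slow` and `fast` lets a planner pick the cheaper trajectory without ever moving the robot, which is the core of the power-conscious primitives above.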
Aimed at researchers interested in exploring how Deep Learning can empower robots. This layer provides a variety of techniques (mainly for Reinforcement Learning and Supervised Learning) built on top of basic primitives powered by TensorFlow. All these techniques connect with the underlying layer that interoperates with ROS. A roboticist's approach to AI.
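The interaction between an RL technique in this layer and a simulated robot can be sketched with a Gym-style loop. Everything below is a toy stand-in (the `JointTargetEnv` class and the proportional "policy" are illustrative assumptions, not the actual API), meant only to show the reset/step contract the techniques build on:

```python
# Illustrative sketch: a Gym-style environment interface, here a toy
# 1-DoF task of moving a joint toward a target angle. In the real stack
# the environment would be backed by ROS and Gazebo.

class JointTargetEnv:
    """Toy environment: drive a single joint toward a target angle."""
    def __init__(self, target=0.5):
        self.target = target
        self.angle = 0.0

    def reset(self):
        self.angle = 0.0
        return self.angle

    def step(self, action):
        # `action` is a small joint-angle increment.
        self.angle += action
        reward = -abs(self.target - self.angle)  # closer is better
        done = abs(self.target - self.angle) < 0.01
        return self.angle, reward, done

env = JointTargetEnv()
obs = env.reset()
done = False
for _ in range(100):  # naive proportional controller in place of a policy
    action = 0.1 * (env.target - obs)
    obs, reward, done = env.step(action)
    if done:
        break
```

A learned policy would replace the proportional rule, but the reset/step interface stays the same, which is what lets different techniques share one environment.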
User-oriented, this layer aims to provide a simple yet complete set of functions that facilitate the use of robots. We research how AI can enhance traditional path-planning techniques and how robots can learn a given task through imitation.
We propose a novel framework for Deep Reinforcement Learning (DRL) in modular robotics that trains a robot directly from joint states using traditional robotic tools. We use state-of-the-art implementations of the Proximal Policy Optimization (PPO), Trust Region Policy Optimization (TRPO) and Actor-Critic Kronecker-Factored Trust Region (ACKTR) algorithms to learn policies in four different Modular Articulated Robotic Arm (MARA) environments. We support this process with a framework that communicates with tools commonly used in robotics, such as Gazebo and the Robot Operating System 2 (ROS 2). We compare the robustness of the performance of these methods on modular robots through an empirical study in simulation.
This paper presents an upgraded, real-world-application-oriented version of gym-gazebo, a ROS- and Gazebo-based Reinforcement Learning (RL) toolkit that complies with OpenAI Gym. We describe the new ROS 2 based software architecture and summarize the results obtained using Proximal Policy Optimization (PPO). We evaluated MARA environments of varying complexity, reaching accuracies at the millimeter scale. The converged results show the feasibility and usefulness of the gym-gazebo2 toolkit and its potential applicability in industrial use cases with modular robots.
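For readers unfamiliar with PPO, its core is a clipped surrogate objective that limits how far the updated policy can move from the old one. A minimal NumPy sketch (the numeric values are illustrative, not results from the paper):

```python
import numpy as np

# Sketch of PPO's clipped surrogate objective:
#   L^CLIP = E[ min(r * A, clip(r, 1 - eps, 1 + eps) * A) ]
# where r is the new/old policy probability ratio and A the advantage.

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Mean clipped surrogate objective over a batch of samples."""
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    return np.mean(np.minimum(ratio * advantage, clipped * advantage))

ratio = np.array([0.8, 1.0, 1.5])       # new/old policy probability ratios
advantage = np.array([1.0, -0.5, 2.0])  # advantage estimates
objective = ppo_clip_objective(ratio, advantage)
```

The clipping (here `eps = 0.2`, PPO's usual default) is what keeps updates conservative enough for stable training on physical-robot simulations like the MARA environments.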
We present a framework to accelerate robot training through simulation in the cloud that makes use of roboticists' tools, simplifying the development and deployment processes on real robots. We demonstrate that, for simple tasks, this framework accelerates the robot training time by more than 33%, while maintaining similar levels of accuracy and repeatability.
We present the concept of a self-adaptable robot that uses hardware modularity and AI techniques to reduce the effort and time required to build it. We demonstrate, both in simulation and on a real robot, how training, rather than programming, produces behaviors that generalize quickly and remain robust in the presence of noise.
We introduce a training method that allows a robot to learn multiple tasks simultaneously and demonstrate how this technique generalizes to robots with different configurations and tasks.
We offer a novel framework for Deep Reinforcement Learning (DRL) in modular robotics and describe a new technique to transfer these DRL methods to the real robot, aiming to close the simulation-to-reality gap.
We propose an extension of the OpenAI Gym for robotics using ROS and the Gazebo simulator. We also introduce a benchmarking system for robotics that allows different techniques and algorithms to be compared under the same virtual conditions.