Thesis defense of Abir BOUAOUDA

When

13 December 2024
10:00 - 12:30


Thesis title: Reinforcement learning for controlling cable-driven parallel robots.

Reviewers:
Jacques GANGLOFF: Université de Strasbourg – ICube
Abdel-Illah MOUADDIB: Université de Caen – GREYC

Examiners:
Ouiddad LABBANI-IGBIDA: Université de Limoges – XLIM
Laetitia MATIGNON: Université Claude Bernard Lyon 1 – Polytech / LIRIS

Thesis supervisors and co-supervisors:
Dominique MARTINEZ: Aix-Marseille Université – ISM
Mohamed BOUTAYEB: Université de Lorraine – CRAN
Rémi PANNEQUIN: Université de Lorraine – CRAN
François CHARPILLET: INRIA – Nancy

Abstract: In this thesis, we present a novel reinforcement learning-based control strategy designed specifically for cable-driven parallel robots. The development of this controller begins with the derivation of the dynamic model of the robot available in our laboratory. This dynamic model serves as the training platform for the controller and is validated against real data obtained with the existing PID-type controller. Since we focus on the trajectory tracking problem, we devised a methodology for generating training trajectories that comprehensively cover the robot's state space. We then designed a reward function that accounts for the tracking error and other relevant metrics. A key challenge was managing the action space: keeping the cables under tension while operating the end effector. Because the action space is defined by motor speeds, cable tension could not be limited directly; we therefore compute the motor current at each step and verify that it stays within predefined limits. With this setup, we trained the controller using prominent reinforcement learning algorithms for continuous action spaces, namely DDPG, PPO, and SAC, and compared the three algorithms across various performance metrics. Given the long training times, we also introduced an approach that exploits prior knowledge of the robot to speed up training. The controller was tested on the physical robot and achieved precise trajectory tracking. Finally, we compared the three reinforcement learning controllers with a PID-based controller in terms of tracking error, energy efficiency, and robustness. This comparative evaluation highlights the strengths and weaknesses of each approach and demonstrates the effectiveness of the proposed reinforcement learning-based control strategy for cable-driven parallel robots.
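The abstract outlines the main ingredients of the training setup: a simulated dynamic model used as the environment, training trajectories that cover the robot's state space, a reward built around the tracking error, a motor-speed action space with a per-step current check, and training with DDPG, PPO, and SAC. The sketch below shows one way such a setup could be wired together with Gymnasium and stable-baselines3; it is not the thesis code, and the point-mass dynamics, current model, reward weights, and episode settings are placeholder assumptions made for illustration.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class CDPREnv(gym.Env):
    """Toy cable-driven parallel robot environment for trajectory tracking (illustrative only)."""

    def __init__(self, dt=0.01, i_max=5.0):
        super().__init__()
        self.dt = dt              # control period [s] (placeholder value)
        self.i_max = i_max        # motor current limit [A] (placeholder value)
        # Action: one normalised motor-speed command per cable (4 cables assumed).
        self.action_space = spaces.Box(-1.0, 1.0, shape=(4,), dtype=np.float32)
        # Observation: end-effector position, velocity, current and next reference points.
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(8,), dtype=np.float32)

    def _reference(self, t):
        # Stand-in for the training-trajectory generator, which in the thesis is designed
        # to cover the robot's state space; here, a simple circle.
        return np.array([np.cos(0.5 * t), np.sin(0.5 * t)], dtype=np.float32)

    def _obs(self):
        return np.concatenate(
            [self.pos, self.vel, self._reference(self.t), self._reference(self.t + self.dt)]
        ).astype(np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0.0
        self.pos = np.zeros(2, dtype=np.float32)
        self.vel = np.zeros(2, dtype=np.float32)
        return self._obs(), {}

    def step(self, action):
        # Placeholder dynamics; the thesis uses the identified dynamic model of the
        # laboratory robot, validated against data from the existing PID-type controller.
        accel = 2.0 * (action[:2] - action[2:])
        self.vel = self.vel + accel * self.dt
        self.pos = self.pos + self.vel * self.dt
        self.t += self.dt

        # Proxy for the per-step current check: the action space is motor speed, so cable
        # tension cannot be bounded directly; instead an estimated motor current is
        # penalised whenever it leaves the admissible range.
        current = 1.5 * np.abs(action)                        # hypothetical current model
        current_penalty = float(np.sum(np.maximum(current - self.i_max, 0.0)))

        tracking_error = float(np.linalg.norm(self.pos - self._reference(self.t)))
        reward = -tracking_error - 0.1 * current_penalty - 0.01 * float(np.sum(action ** 2))

        terminated = tracking_error > 2.0                     # diverged from the trajectory
        truncated = self.t >= 10.0                            # fixed episode length
        return self._obs(), reward, terminated, truncated, {}


if __name__ == "__main__":
    from stable_baselines3 import SAC                         # DDPG and PPO swap in the same way

    env = CDPREnv()
    model = SAC("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=10_000)
```

In this simplified setting, replacing SAC with the DDPG or PPO classes from the same library is enough to mimic the three-algorithm comparison described in the abstract.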
