Impedance learning-based adaptive force tracking for robots on unknown terrains

Image credit: Li Zheng

Abstract

To address the challenge of robust force tracking for robots in continuous contact with uncertain environments, this article proposes a novel adaptive variable impedance control policy based on deep reinforcement learning (DRL). The policy comprises a neural network feedforward controller and a variable impedance feedback controller. Using the DRL algorithm, the iterative network feedforward controller explores and pre-learns the optimal impedance-tuning policy in simulation scenarios with randomly generated terrain. The converged results are then used as feedforward inputs to the variable impedance feedback controller to improve the robot's force-tracking performance during contact. A simplified dynamic contact model between the robot and the uncertain environment, called the "couch model," which satisfies the Lipschitz continuity condition, is developed to provide boundary conditions for safely transferring capabilities learned in simulation to real robots. Unlike exhaustive methods that rely on the completeness of the learning samples, this article gives theoretical proofs of the stability and convergence of the proposed control policy via Lyapunov's theorem and the contraction mapping principle. The proposed control method is more interpretable and shows higher sample-utilization efficiency and generalization ability in both simulations and experiments.
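To illustrate the force-tracking problem the abstract describes, here is a minimal sketch of an impedance control loop driving contact force toward a desired value. This is not the paper's controller: the 1-DOF spring environment, all gains, and the explicit-Euler discretization are assumptions made purely for illustration.

```python
# Illustrative 1-DOF impedance-based force tracking (assumed setup, not the
# paper's method). The environment is modeled as a one-sided spring with
# stiffness k_e and rest position x_e; the impedance law
#   m*xdd + b*xd + k*(x - x_e) = f_d - f_ext
# pushes the end effector until the contact force matches the desired force.

def simulate(f_d=10.0, k_e=2000.0, x_e=0.0, m=1.0, b=80.0, k=0.0,
             dt=1e-3, steps=5000):
    x, xd = x_e, 0.0              # end-effector position and velocity
    f_hist = []
    for _ in range(steps):
        f_ext = k_e * max(x - x_e, 0.0)   # contact force (spring cannot pull)
        e_f = f_d - f_ext                 # force-tracking error
        xdd = (e_f - b * xd - k * (x - x_e)) / m   # impedance dynamics
        xd += xdd * dt                    # explicit Euler integration
        x += xd * dt
        f_hist.append(f_ext)
    return f_hist

forces = simulate()
```

With k=0, the steady state of the impedance dynamics forces the tracking error to zero, so the measured contact force converges to f_d regardless of the (unknown) environment stiffness k_e, which is the basic mechanism that variable impedance schemes then tune online.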

Publication
IEEE Transactions on Robotics