Imitation Learning for Autonomous Racing

Autonomous racing with scaled race cars has gained increasing attention as an effective approach for developing perception, planning, and control algorithms for safe autonomous driving at the limits of the vehicle's handling. To train agile control policies for autonomous racing, learning-based approaches largely rely on reinforcement learning, albeit with mixed results. In this study, we benchmark a variety of imitation learning policies for racing vehicles, applied either directly or to bootstrap reinforcement learning, both in simulation and in scaled real-world environments. We show that interactive imitation learning techniques outperform traditional imitation learning methods and, thanks to their better sample efficiency, can greatly improve the performance of reinforcement learning policies through bootstrapping. Our benchmarks provide a foundation for future research on autonomous racing using imitation learning and reinforcement learning.
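To illustrate the bootstrapping idea, here is a minimal PyTorch sketch of warm-starting a reinforcement learning actor with a policy pretrained on expert demonstrations. It is not the paper's implementation: the network shape, the observation dimension, and the names `RacingPolicy` and `pretrain_with_imitation` are assumptions made for the example, and the actual interactive data collection (e.g., HG-DAgger) and PPO fine-tuning are only indicated in comments.

```python
import torch
import torch.nn as nn

# Hypothetical policy network: maps a flattened observation (e.g., downsampled
# LiDAR scan plus speed) to [steering, throttle]. The sizes are assumptions.
class RacingPolicy(nn.Module):
    def __init__(self, obs_dim=108, act_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.Tanh(),
            nn.Linear(256, 256), nn.Tanh(),
            nn.Linear(256, act_dim),
        )

    def forward(self, obs):
        return self.net(obs)

def pretrain_with_imitation(policy, demo_batches, epochs=50, lr=1e-3):
    """Behavior-cloning-style regression on (observation, expert action) batches,
    e.g., data gathered with an interactive HG-DAgger-style procedure."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for obs, expert_act in demo_batches:
            opt.zero_grad()
            loss = loss_fn(policy(obs), expert_act)
            loss.backward()
            opt.step()
    return policy

# Bootstrapping: copy the pretrained weights into the PPO actor so that
# on-policy fine-tuning starts from a competent racing policy rather than
# a randomly initialized one.
policy = RacingPolicy()
# demo_batches = [...]            # batches of (obs, expert_action) tensors
# pretrain_with_imitation(policy, demo_batches)
# ppo_actor = RacingPolicy()
# ppo_actor.load_state_dict(policy.state_dict())
# ...continue with standard PPO updates on ppo_actor...
```

The design choice sketched here is simply weight transfer: the imitation-learned network and the PPO actor share an architecture, so the pretrained parameters serve as the initialization for on-policy fine-tuning.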

This paper has been submitted to the 2022 IROS Workshop on Miniature Robot Platforms for Full Scale Autonomous Vehicle Research. The paper can be found here.

Here is a video of the policy trained by combining HG-DAgger and PPO, running in a real-world environment:

Here is an FPV video of the policy:

Here are the slides of my presentation at the 2022 IROS MiniRobot Workshop:

Here is the video recording of my presentation at the 2022 IROS MiniRobot Workshop: