4774 / Steadily Learn to Drive with Virtual Memory
Paper presented at the 11th Asia-Pacific Regional Conference of the ISTVS
https://doi.org/10.56884/ONWH8454
Title: Steadily Learn to Drive with Virtual Memory
Authors: Yuhang Zhang, Yao Mu, Shengbo Li, Yangang Ren, Liye Tang, Yujie Yang, and Chen Chen
Abstract: Reinforcement learning has shown great potential in developing high-level autonomous driving systems. However, for high-dimensional tasks, current RL methods suffer from low data efficiency and oscillation in the training process. This paper proposes an algorithm called Learn to drive with Virtual Memory (LVM) to overcome these problems. LVM compresses high-dimensional information into compact latent states and learns a latent dynamics model to summarize the agent's experience. Various imagined latent trajectories are generated as virtual memory by the latent dynamics model. The policy is learned by propagating gradients through the learned latent model along the imagined latent trajectories, which leads to high data efficiency. Furthermore, a double-critic structure is designed to reduce oscillation during training. The effectiveness of LVM is demonstrated on an image-input autonomous driving task, in which LVM outperforms the existing method in terms of data efficiency, learning stability, and control performance.
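The double-critic structure mentioned in the abstract can be illustrated with a minimal sketch: two independent value estimates are combined by taking their element-wise minimum when forming the learning target, which damps overestimation and training oscillation. All function names, the one-step target form, and the discount value below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def double_critic_target(rewards, next_v1, next_v2, gamma=0.99):
    """Sketch of a double-critic target (illustrative, not LVM's exact rule).

    rewards  : array of immediate rewards r_t
    next_v1  : value estimates of the next state from critic 1
    next_v2  : value estimates of the next state from critic 2
    Taking the minimum of the two critics yields a pessimistic
    target that reduces overestimation-driven oscillation.
    """
    min_next = np.minimum(next_v1, next_v2)  # pessimistic value estimate
    return rewards + gamma * min_next        # one-step bootstrapped target

# Example: reward 1.0, critics disagree on the next-state value
target = double_critic_target(np.array([1.0]),
                              np.array([2.0]),
                              np.array([3.0]))
# The smaller estimate (2.0) is used: 1.0 + 0.99 * 2.0 = 2.98
```

Both critics would then regress toward this shared target, so neither critic's transient overestimates can dominate the policy's learning signal.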
Order the full paper: https://www.istvs.org/proceedings-orders/paper
ISTVS members: receive three papers per year as part of your membership via the ISTVS Member Portal: https://istvs.knack.com/member-portal/