Explainable Deep Reinforcement Learning for UAV autonomous path planning

UAV autonomous navigation

Abstract

Autonomous navigation in unknown environments remains a hard problem for small Unmanned Aerial Vehicles (UAVs). Recently, several neural network-based methods have been proposed to tackle this problem; however, the trained networks are opaque, non-intuitive and difficult for people to understand, which limits their real-world application. In this paper, a novel explainable deep neural network-based path planner is proposed for a quadrotor to fly autonomously in unknown environments. The navigation problem is modelled as a Markov Decision Process (MDP) and the path planner is trained with a Deep Reinforcement Learning (DRL) method in a simulation environment. To gain a better understanding of the trained model, a novel model explanation method based on feature attribution is proposed. Easy-to-interpret textual and visual explanations are generated to allow end-users to understand what triggered a particular behaviour. Moreover, global analyses are provided for experts to evaluate and improve the trained network. Finally, real-world flight tests are conducted to show that the path planner trained in simulation is robust enough to be applied directly in a real environment.
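
The explanation method described in the abstract is built on feature attribution over the planner's inputs. The paper's exact formulation is not reproduced here; the sketch below only illustrates the general idea with a simple gradient-times-input attribution applied to a stand-in policy network. `PolicyNet`, `attribute`, the observation layout, and all sizes are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed names and shapes): gradient-based feature attribution
# for a trained DRL policy, in the spirit of the paper's explanation approach.
import torch
import torch.nn as nn


class PolicyNet(nn.Module):
    """Stand-in policy: maps a flattened observation (e.g. depth readings
    plus UAV state) to discrete action logits."""

    def __init__(self, n_inputs: int = 64, n_actions: int = 5):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_inputs, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)


def attribute(policy: nn.Module, obs: torch.Tensor):
    """Return the greedy action and a per-feature attribution score
    (|gradient x input|) explaining which inputs drove that action."""
    obs = obs.clone().requires_grad_(True)
    logits = policy(obs)
    action = int(torch.argmax(logits))
    logits[action].backward()              # d(chosen logit) / d(input features)
    attribution = (obs.grad * obs).abs()   # simple gradient-times-input saliency
    return action, attribution.detach()


if __name__ == "__main__":
    policy = PolicyNet()
    observation = torch.randn(64)          # placeholder for a real observation
    act, attr = attribute(policy, observation)
    top = torch.topk(attr, k=3).indices.tolist()
    print(f"action={act}, most influential input features={top}")
```

The scores could then be rendered as the kind of textual or visual explanation the abstract mentions, e.g. by highlighting the most influential depth-sensor directions for the chosen manoeuvre.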

Publication
Aerospace Science and Technology
