“Learning Generalized Reactive Policies Using Deep Neural Networks”, 2017-08-24:
We present a new approach to learning for planning, where knowledge acquired while solving a given set of planning problems is used to plan faster in related, but new problem instances. We demonstrate that a deep neural network can be employed to learn and represent a generalized reactive policy (GRP) that maps a problem instance and a state to an action, and that the learned GRPs efficiently solve large classes of challenging problem instances. In contrast to prior efforts in this direction, our approach reduces the dependence of learning on handcrafted domain knowledge or feature selection.
Instead, the GRP is trained from scratch using a set of successful execution traces. The same methodology also enables the system to automatically learn a heuristic function that can be used to guide directed search algorithms, demonstrating the versatility of our learning approach in planning tasks.
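To make the idea concrete, the following is a minimal sketch (not the paper's actual architecture) of a GRP as a neural network with two heads: a policy head that maps a problem-instance encoding and a state encoding to an action, and a heuristic head that estimates cost-to-go for use in directed search. All dimensions, the flat-vector encodings, and the random weights are illustrative assumptions; in the paper the parameters would be trained on successful execution traces.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: here a problem instance and a state are each encoded
# as fixed-length feature vectors; a real GRP would use a richer encoding.
PROBLEM_DIM, STATE_DIM, HIDDEN, N_ACTIONS = 16, 16, 32, 4

# Randomly initialized weights stand in for parameters that would be
# learned from successful execution traces.
W1 = rng.standard_normal((PROBLEM_DIM + STATE_DIM, HIDDEN)) * 0.1
W_pi = rng.standard_normal((HIDDEN, N_ACTIONS)) * 0.1  # policy head
w_h = rng.standard_normal(HIDDEN) * 0.1                # heuristic head


def grp(problem, state):
    """Map a (problem instance, state) pair to an action and a heuristic value."""
    x = np.concatenate([problem, state])
    h = np.tanh(x @ W1)                # shared hidden representation
    action = int(np.argmax(h @ W_pi))  # reactive policy: greedy action choice
    heuristic = float(h @ w_h)         # cost-to-go estimate for directed search
    return action, heuristic


problem = rng.standard_normal(PROBLEM_DIM)
state = rng.standard_normal(STATE_DIM)
action, h_val = grp(problem, state)
print(action, h_val)
```

The key point the sketch illustrates is that one learned representation serves both roles described above: acted on greedily it is a reactive policy, while its scalar head can score states inside a search algorithm.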
We evaluate our approach using an extensive suite of experiments on two challenging planning problem domains and demonstrate that our approach facilitates learning complex decision-making policies and powerful heuristic functions with minimal human input.
Our approach holds promise for improving the efficiency and capability of planning systems in diverse applications. By combining deep learning with automatically learned heuristic functions, it reduces the human effort required to develop planning algorithms.
Supplementary information, including code and videos of our results, is available at https://web.archive.org/web/20201030014217/https://sites.google.com/site/learn2plannips/.