“Thinking Fast and Slow With Deep Learning and Tree Search”, Thomas Anthony, Zheng Tian, David Barber (2017-05-23)⁠:

Sequential decision-making problems, such as structured prediction, robotic control, and game playing, require both planning new policies and generalizing those plans. In this paper, we present Expert Iteration (ExIt), a novel reinforcement learning algorithm which decomposes the problem into separate planning and generalization tasks.

Planning new policies is performed by tree search, while a deep neural network generalizes those plans. Subsequently, tree search is improved by using the neural network policy to guide search, increasing the strength of new plans. In contrast, standard deep Reinforcement Learning algorithms rely on a neural network not only to generalize plans, but to discover them too.
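The alternation described above can be sketched in a few lines. The following is a minimal, self-contained illustration of the expert-iteration loop, not the paper's implementation: the toy game (find a hidden 3-move line), the exhaustive-search expert, and the count-table apprentice are hypothetical stand-ins for Hex, MCTS, and the deep neural network.

```python
import random

# Toy game: play DEPTH binary moves; reward 1 iff the sequence matches
# a hidden optimal line. (Stand-in for Hex in the paper.)
DEPTH, OPTIMAL = 3, (1, 0, 1)

def reward(moves):
    return 1.0 if tuple(moves) == OPTIMAL else 0.0

def apprentice_action(state, apprentice):
    # Apprentice policy: a table of imitation counts per state.
    # (Stand-in for the neural network that generalizes the expert's plans.)
    counts = apprentice.get(state)
    if not counts or all(c == 0 for c in counts.values()):
        return random.choice((0, 1))
    return max(counts, key=counts.get)

def expert_move(state, apprentice):
    # Expert: exhaustive tree search over all completions, with the
    # apprentice's prior deciding which branch to examine first
    # (the policy-guided search of the paper, in miniature).
    def value(moves):
        if len(moves) == DEPTH:
            return reward(moves)
        return max(value(moves + [a]) for a in (0, 1))

    prior = apprentice_action(state, apprentice)
    best_a, best_v = None, -1.0
    for a in sorted((0, 1), key=lambda a: a != prior):  # prior-first ordering
        v = value(list(state) + [a])
        if v > best_v:
            best_a, best_v = a, v
    return best_a

def expert_iteration(n_iters=5, seed=0):
    random.seed(seed)
    apprentice = {}  # state -> {action: imitation count}
    for _ in range(n_iters):
        state = ()
        while len(state) < DEPTH:
            a = expert_move(state, apprentice)  # expert plans a move...
            apprentice.setdefault(state, {0: 0, 1: 0})[a] += 1  # ...apprentice imitates it
            state = state + (a,)
    return apprentice
```

After a few iterations the apprentice table alone reproduces the expert's line of play, which is the point of the decomposition: search discovers the plan, the learned policy generalizes it, and in the next iteration the (now stronger) policy guides the search.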

We show that ExIt outperforms REINFORCE for training a neural network to play the board game Hex, and our final tree search agent, trained tabula rasa, defeats MoHex 1.0, the most recent Olympiad Champion player to be publicly released.