"Conditional Computation in Neural Networks for Faster Models", 2015-11-19:
Deep learning has become the state-of-the-art tool in many applications, but the evaluation and training of deep models can be time-consuming and computationally expensive. The conditional computation approach has been proposed to tackle this problem (Bengio et al., 2013). It operates by selectively activating only parts of the network at a time.
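As a rough illustration, assuming a single fully connected layer whose hidden units are grouped into blocks, conditional computation might look like the sketch below (all names here, such as `gated_layer` and `block_size`, are illustrative rather than from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def gated_layer(x, W, b, policy_W, policy_b, block_size=16):
    """Hidden layer whose units are grouped into blocks; a small sigmoidal
    policy decides, per input, which blocks are worth computing."""
    h = np.maximum(0.0, x @ W + b)            # full layer (dense here for clarity;
                                              # a real implementation would skip
                                              # the blocks the policy turns off)
    logits = x @ policy_W + policy_b          # one logit per block of units
    probs = 1.0 / (1.0 + np.exp(-logits))     # P(block active | input x)
    mask = (rng.random(probs.shape) < probs).astype(h.dtype)
    return h * np.repeat(mask, block_size), mask, probs
```

The speedup in the sparse regime comes from never performing the matrix products for the blocks the policy switches off, rather than from masking after the fact as this dense sketch does.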
In this paper, we use reinforcement learning to optimize conditional computation policies: we cast the problem of learning activation-dependent policies for dropping out blocks of units as a reinforcement learning problem. We propose a learning scheme motivated by computation speed, capturing the idea of wanting parsimonious activations while maintaining prediction accuracy.
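This trade-off can be summarized in a per-example return that rewards correct prediction but charges for every block the policy turns on; the penalty weight `lam` below is an assumed hyperparameter, and the exact form of the cost term is a simplification of whatever the paper's loss uses:

```python
def parsimony_return(log_likelihood, mask, lam=0.1):
    """Return for one example: prediction quality minus a computation
    charge proportional to the fraction of blocks that were activated."""
    computation_cost = mask.mean()    # fraction of active blocks, in [0, 1]
    return log_likelihood - lam * computation_cost
```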
We apply a policy gradient algorithm to learn policies that optimize this loss function, and propose a regularization mechanism that encourages diversification of the dropout policy.
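A minimal sketch of such an update for the Bernoulli block-dropout policy above, using the REINFORCE score-function estimator; the diversity term here (pushing each block's batch-average activation rate toward a target `tau`) is one plausible form of such a regularizer, not necessarily the paper's exact one:

```python
def policy_gradient_step(x, probs, mask, returns, policy_W,
                         lr=0.01, tau=0.5, beta=0.1):
    """One REINFORCE ascent step on a minibatch.
    x: (B, d) inputs; probs, mask: (B, n_blocks); returns: (B,)."""
    # Score function of a Bernoulli w.r.t. its logit: mask - probs.
    grad_logits = (mask - probs) * returns[:, None]
    # Diversity/sparsity regularizer (assumed form): keep the batch-average
    # activation rate of each block near tau, chaining through the sigmoid.
    reg = probs.mean(axis=0) - tau
    grad_logits -= beta * reg[None, :] * probs * (1.0 - probs)
    policy_W += lr * (x.T @ grad_logits) / len(x)
    return policy_W
```

Without some such term, a REINFORCE-trained gating policy can collapse to always activating the same few blocks, which defeats the purpose of input-dependent computation.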
We present encouraging empirical results showing that this approach improves the speed of computation without impacting the quality of the approximation.