MuProp: Unbiased Backpropagation for Stochastic Neural Networks

Abstract:

Deep neural networks are powerful parametric models that can be trained efficiently using the backpropagation algorithm. Stochastic neural networks combine the power of large parametric functions with that of graphical models, which makes it possible to learn very complex distributions. However, as backpropagation is not directly applicable to stochastic networks that include discrete sampling operations within their computational graph, training such networks remains difficult. We present MuProp, an unbiased gradient estimator for stochastic networks, designed to make this task easier. MuProp improves on the likelihood-ratio estimator by reducing its variance using a control variate based on the first-order Taylor expansion of a mean-field network. Crucially, unlike prior attempts at using backpropagation for training stochastic networks, the resulting estimator is unbiased and well behaved. Our experiments on structured output prediction and discrete latent variable modeling demonstrate that MuProp yields consistently good performance across a range of difficult tasks.
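As a pointer for readers, the estimator can be sketched as follows for a single stochastic layer (our notation, simplified; not the paper's exact equations): let $z \sim p(z \mid \theta)$ be the discrete sample, $f(z)$ the loss, $\bar f$ the deterministic mean-field network, and $\bar\mu$ its forward pass, so that $\mathbb{E}[z] = \bar\mu$. MuProp subtracts the first-order Taylor expansion of $\bar f$ around $\bar\mu$ from the likelihood-ratio term as a control variate, and adds its known expectation back analytically:

    $$
    \hat g \;=\; \big( f(z) - \bar f(\bar\mu) - \bar f'(\bar\mu)^\top (z - \bar\mu) \big)\,
    \nabla_\theta \log p(z \mid \theta)
    \;+\; \bar f'(\bar\mu)^\top \nabla_\theta \bar\mu .
    $$

Because the control variate's expectation is known in closed form, the second term compensates exactly and $\hat g$ stays unbiased; that second, backpropagation-like term through the mean-field pass is what gives the method its name.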

Full text: http://arxiv.org/pdf/1511.05176v2.pdf

Learning to Compose Neural Networks for Question Answering

Abstract:

We describe a question answering model that applies to both images and structured knowledge bases. The model uses natural language strings to automatically assemble neural networks from a collection of composable modules. Parameters for these modules are learned jointly with network-assembly parameters via reinforcement learning, with only (world, question, answer) triples as supervision. Our approach, which we term a dynamic neural module network, achieves state-of-the-art results on benchmark datasets in both visual and structured domains.
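To make the training setup concrete, here is a minimal, hypothetical sketch (ours, not the authors' code; the world, the find/describe modules, and the candidate layouts are all invented for illustration): a layout policy scores candidate module assemblies for a question, one layout is sampled and executed against the world, and the policy is updated with REINFORCE using only answer correctness as reward.

    import math
    import random

    random.seed(0)

    # Toy "world": (object, color) facts standing in for an image or knowledge base.
    WORLD = [("cat", "black"), ("dog", "brown"), ("car", "red")]

    # Composable modules (placeholder stand-ins for the paper's learned modules).
    def find(word):
        # Attention over world items that mention `word`.
        return [1.0 if word in fact else 0.0 for fact in WORLD]

    def describe(attention):
        # Read out the attribute of the most-attended item.
        best = max(range(len(WORLD)), key=lambda i: attention[i])
        return WORLD[best][1]

    # Candidate layouts for "what color is the cat?"; each is a composed network.
    LAYOUTS = {
        "describe(find[cat])": lambda: describe(find("cat")),
        "describe(find[dog])": lambda: describe(find("dog")),
    }

    scores = {name: 0.0 for name in LAYOUTS}  # layout-policy logits

    def layout_distribution():
        z = {n: math.exp(s) for n, s in scores.items()}
        total = sum(z.values())
        return {n: v / total for n, v in z.items()}

    for step in range(300):
        probs = layout_distribution()
        name = random.choices(list(probs), weights=probs.values())[0]
        reward = 1.0 if LAYOUTS[name]() == "black" else 0.0  # answer-only supervision
        # REINFORCE on the softmax logits: grad of log pi(name) = onehot(name) - probs.
        for n in scores:
            scores[n] += 0.1 * reward * ((1.0 if n == name else 0.0) - probs[n])

    print(max(scores, key=scores.get))  # converges to "describe(find[cat])"

The point of the toy is the supervision signal: nothing tells the model which layout is right, only whether the executed network produced the correct answer, which is exactly what makes reinforcement learning necessary for the assembly step.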

Full text: http://arxiv.org/pdf/1601.01705.pdf

Mastering the game of Go with deep neural networks and tree search

Abstract:

The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
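Schematically, the published search combines the two networks as follows (our notation, simplified from the paper): actions inside the tree are selected by adding an exploration bonus shaped by the policy prior to the accumulated value estimate,

    $$
    a_t = \arg\max_a \big( Q(s_t, a) + u(s_t, a) \big),
    \qquad u(s, a) \propto \frac{P(s, a)}{1 + N(s, a)},
    $$

where $P(s,a)$ is the policy network's prior probability, $N(s,a)$ the visit count, and $Q(s,a)$ the mean evaluation of the subtree. Leaf positions $s_L$ are scored by mixing the value network's prediction with the outcome $z_L$ of a fast rollout, $V(s_L) = (1 - \lambda)\, v_\theta(s_L) + \lambda\, z_L$, so the prior narrows the search while the value network and rollouts replace exhaustive lookahead.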

Full text: http://www.nature.com/nature/journal/v529/n7587/full/nature16961.html