“Theories of Error Back-Propagation in the Brain”, 2019-03 ():
The error back-propagation algorithm can be approximated in networks of neurons in which plasticity depends only on the activity of presynaptic and postsynaptic neurons.
These biologically plausible deep learning models include both feedforward and feedback connections, allowing the errors made by the network to propagate through the layers.
The learning rules in different biologically plausible models can be implemented with different forms of spike-timing-dependent plasticity.
The dynamics and plasticity of the models can be described within a common framework of energy minimisation.
This review article summarises recently proposed theories on how neural circuits in the brain could approximate the error back-propagation algorithm used by artificial neural networks.
Computational models implementing these theories achieve learning as efficient as artificial neural networks, but they use simple synaptic plasticity rules based on the activity of presynaptic and postsynaptic neurons. The models share common features, such as including both feedforward and feedback connections that allow information about error to propagate throughout the network. Furthermore, they incorporate experimental evidence on neural connectivity, responses, and plasticity.
These models provide insight into how brain networks might be organised such that modifying synaptic weights at multiple levels of the cortical hierarchy leads to improved task performance.
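The two ideas highlighted above, error-carrying feedback connections and weight updates driven only by local pre/postsynaptic quantities, can be illustrated with a toy predictive-coding-style network. Everything below (layer sizes, learning rate, relaxation schedule, the linear activations) is an illustrative assumption for this sketch, not the exact formulation of any model covered by the review:

```python
import numpy as np

# Toy sketch: a linear two-weight-matrix network trained by minimising an
# "energy" (sum of squared layer-wise prediction errors), as in
# predictive-coding accounts of back-propagation. All constants here are
# illustrative choices, not values from the review.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))

def step(x, y, W1, W2, lr=0.05, n_relax=50, dt=0.1):
    """One training step: relax the hidden activity to minimise the energy
    E = 0.5*||y - W2 h||^2 + 0.5*||h - W1 x||^2, then update each weight
    matrix from purely local quantities (presynaptic activity x
    postsynaptic error)."""
    h = W1 @ x                               # initial feedforward guess
    for _ in range(n_relax):
        e_out = y - W2 @ h                   # error at the output layer
        e_hid = h - W1 @ x                   # error at the hidden layer
        h = h + dt * (W2.T @ e_out - e_hid)  # feedback carries error down
    # Local plasticity: each update uses only the error at the
    # postsynaptic layer and the activity of the presynaptic layer.
    W2 = W2 + lr * np.outer(e_out, h)
    W1 = W1 + lr * np.outer(e_hid, x)
    return W1, W2

def loss(x, y, W1, W2):
    """Squared error of the plain feedforward pass."""
    return float(np.sum((y - W2 @ (W1 @ x)) ** 2))

x = rng.normal(size=n_in)
y = np.array([1.0, -1.0])
before = loss(x, y, W1, W2)
for _ in range(20):
    W1, W2 = step(x, y, W1, W2)
after = loss(x, y, W1, W2)
print(before, "->", after)
```

After relaxation, the updates approximate gradient descent on the energy, so the feedforward loss falls over training steps; no weight update ever references activity outside its own pre- and postsynaptic neurons, which is the biological-plausibility constraint the review emphasises.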
[Keywords: deep learning, neural networks, predictive coding, synaptic plasticity]