“On the Optimization of a Synaptic Learning Rule”, Samy Bengio, Yoshua Bengio, Jocelyn Cloutier, Jan Gecsei (1997):

This paper presents a new approach to neural modeling based on the idea of using an automated method to optimize the parameters of a synaptic learning rule.

The synaptic modification rule is considered as a parametric function. This function has local inputs and is the same across many neurons. We can use standard optimization methods [such as gradient descent, genetic algorithms, and simulated annealing] to select appropriate parameters for a given type of task. We also present a theoretical analysis that permits studying the generalization properties of such parametric learning rules. By generalization, we mean the ability of the learning rule to learn to solve new tasks.
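To make the setup concrete, here is a minimal sketch of this "learning to learn" loop; it is an assumption-laden illustration, not the paper's actual procedure. The synaptic update is a small parametric function of local signals, an inner loop trains a network on a sampled task with that rule, and an outer loop hill-climbs the rule parameters `theta`. Hill climbing stands in for the gradient descent, genetic algorithms, or simulated annealing mentioned above, and every function name and the specific form of the rule below are hypothetical:

```python
# Hypothetical sketch of optimizing a parametric synaptic learning rule.
import numpy as np

rng = np.random.default_rng(0)

def apply_rule(w, x, y, t, theta):
    # Parametric local rule (assumed form): each term uses only signals
    # available at the synapse -- presynaptic x, postsynaptic y, the current
    # weight w, and (t - y) standing in for a modulatory teaching signal.
    pre, post, err = x, y, t - y
    dw = (theta[0] * np.outer(err, pre)      # error-modulated Hebbian term
          + theta[1] * np.outer(post, pre)   # plain Hebbian term
          + theta[2] * w)                    # weight-decay term
    return w + dw

def inner_train(theta, task, epochs=20):
    # Train one small network on one task with the candidate rule, then
    # report accuracy: this is the "learning" the rule must produce.
    X, T = task
    w = rng.normal(0, 0.1, (T.shape[1], X.shape[1]))
    for _ in range(epochs):
        for x, t in zip(X, T):
            y = 1.0 / (1.0 + np.exp(-w @ x))
            w = apply_rule(w, x, y, t, theta)
    Y = 1.0 / (1.0 + np.exp(-X @ w.T))
    return np.mean((Y > 0.5) == (T > 0.5))

def make_task():
    # A random linearly separable classification task (two inputs plus bias).
    X = np.hstack([rng.uniform(-1, 1, (40, 2)), np.ones((40, 1))])
    plane = rng.normal(size=3)
    return X, (X @ plane > 0).astype(float).reshape(-1, 1)

# Outer loop: hill-climb the rule parameters over freshly sampled tasks.
theta = rng.normal(0, 0.1, 3)
best = np.mean([inner_train(theta, make_task()) for _ in range(5)])
for step in range(200):
    cand = theta + rng.normal(0, 0.05, 3)
    score = np.mean([inner_train(cand, make_task()) for _ in range(5)])
    if score > best:
        theta, best = cand, score
print("rule parameters:", theta, "average task accuracy:", best)
```

Averaging the fitness over several freshly sampled tasks is what pushes the optimized rule toward the generalization property discussed above: parameters that only fit one particular task score poorly on the next sample.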

Experiments were performed on three types of problems: a biologically inspired circuit (for conditioning in Aplysia), Boolean functions (linearly separable as well as non-linearly separable), and classification tasks.

The neural network architecture as well as the form and initial parameter values of the synaptic learning function can be designed using a priori knowledge.

…Because the domain of possible learning algorithms is large, we propose to constrain it by allowing in Equation 1 only already-known, biologically plausible synaptic mechanisms. Hence, we consider only local variables, such as presynaptic activity, postsynaptic potential, synaptic strength, the activity of a facilitatory neuron, and the concentration of a diffusely acting neuromodulator. Figure 1 shows the interaction between those elements. Constraining the learning rule to be biologically plausible should not be seen as an artificial constraint but rather as a way to restrict the search space so that it remains consistent with solutions that we believe to be used in the brain. This constraint might ease the search for new learning rules (Figure 2).
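As a rough illustration of such a constrained rule (a sketch under assumptions: the variable names and the linear-in-parameters form below are ours, not the paper's Equation 1), each synapse's update would read only the local quantities listed above:

```python
# Hedged sketch: a local rule restricted to biologically plausible inputs.
def local_update(w_ij, pre_j, post_i, fac, mod, theta):
    """Return the weight change computed from local signals only.

    w_ij   -- current synaptic strength
    pre_j  -- presynaptic activity
    post_i -- postsynaptic potential
    fac    -- activity of a facilitatory neuron
    mod    -- concentration of a diffusely acting neuromodulator
    theta  -- learnable parameters of the rule, shared by all synapses
    """
    return (theta[0] * pre_j * post_i        # Hebbian coincidence term
            + theta[1] * fac * pre_j         # facilitation-gated term
            + theta[2] * mod * post_i        # neuromodulator-gated term
            + theta[3] * w_ij)               # decay toward zero
```

Because the same `theta` is shared by every synapse, the search is over a handful of rule parameters rather than over individual weights, which is what makes the constrained space tractable to optimize.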