
Plasticity and learning



  1. Plasticity and learning Dayan and Abbott Chapter 8

  2. Introduction • Learning occurs through synaptic plasticity • Hebb (1949): if neuron A often contributes to the firing of neuron B, then the synapse from A to B should be strengthened • Stimulus-response learning (Pavlov) • Converse: if neuron A does not contribute to the firing of B, the synapse is weakened • Observed in hippocampus, neocortex, and cerebellum

  3. LTP and LTD at Schaffer collateral inputs to the CA1 region of a rat hippocampal slice. High-frequency stimulation yields LTP; low-frequency stimulation yields LTD. NB: no stimulation yields no LTD

  4. Function of learning • Unsupervised learning (ch 10) • Feature selection, receptive fields, density estimation • Supervised learning (ch 7) • Input-output mapping, feedback as teacher signal • Reinforcement learning (ch 9) • Feedback in terms of reward, similar to control theory • Hebbian learning (ch 8) • Biologically plausible + normalization • Covariance rule for (un)supervised learning • Ocular dominance, maps

  5. Rate model with fast time scale • Neural activity is described as a continuous rate, not a spike train • v is the output firing rate, u is the vector of input rates, w is the vector of synaptic weights • If tau_r is small with respect to the learning time scale, the output follows the input instantaneously: v = w . u
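A minimal sketch of this rate model (not from the book; names and sizes are illustrative), assuming NumPy, with the output given directly by v = w . u:

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=5)   # synaptic weights onto the output neuron
    u = rng.normal(size=5)   # presynaptic firing rates

    # With tau_r much smaller than the learning time scale, the output
    # follows the input instantaneously as a linear response.
    v = w @ u
    print("output rate v =", v)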

  6. Basic Hebb rule • v and u are functions of time, which makes the dynamics hard to solve • Alternative: assume v and u are drawn from a distribution p(v, u) that is time-independent • Using v = w . u and averaging over inputs we get tau_w dw/dt = Q w, with Q = <u u^T> the input correlation matrix

  7. Basic Hebb rule • The Hebb rule is unstable, because the weight norm always increases • The continuous differential equation can be simulated using an Euler scheme (see the sketch below)
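A hedged sketch of an Euler simulation of the basic Hebb rule tau_w dw/dt = v u with v = w . u (constants are illustrative); the growing weight norm shows the instability:

    import numpy as np

    rng = np.random.default_rng(1)
    tau_w, dt = 100.0, 1.0
    w = rng.normal(scale=0.1, size=3)

    for step in range(1000):
        u = rng.normal(size=3)       # presynaptic input sample
        v = w @ u                    # linear output neuron
        w += (dt / tau_w) * v * u    # Hebbian update (Euler step)
        if step % 250 == 0:
            print(step, np.linalg.norm(w))   # the norm keeps growing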

  8. Covariance rule • The basic Hebb rule describes only LTP, since u and v are positive • LTD occurs when pre-synaptic activity co-occurs with low post-synaptic activity • This is captured by replacing v with (v - theta_v), or alternatively u with (u - theta_u), where the thresholds are set to the mean rates
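A sketch of the covariance rule in the form tau_w dw/dt = v (u - theta_u), using a running estimate of the mean input as the threshold (all constants are illustrative):

    import numpy as np

    rng = np.random.default_rng(2)
    tau_w, dt = 100.0, 1.0
    w = rng.normal(scale=0.1, size=3)
    u_mean = np.zeros(3)

    for step in range(1000):
        u = rng.normal(loc=1.0, size=3)        # inputs with a nonzero mean rate
        u_mean += 0.01 * (u - u_mean)          # slow running estimate of <u>
        v = w @ u
        w += (dt / tau_w) * v * (u - u_mean)   # LTD when u is below its average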

  9. Covariance rule • When the thresholds equal the mean rates, either rule produces, after averaging, tau_w dw/dt = C w, with C the input covariance matrix • The covariance rule is unstable

  10. BCM rule • Bienenstock, Cooper, Munro (1982): requires both pre- and post-synaptic activity for learning • For a fixed threshold, the BCM rule is also unstable

  11. BCM rule • The BCM rule can be made stable by threshold dynamics • tau_theta is smaller than tau_w, so the threshold adapts faster than the weights • The BCM rule implements competition between synapses • Strengthening one synapse increases the threshold, which makes strengthening of other synapses more difficult (see the sketch below) • Such competition can also be implemented by normalization
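A sketch of the BCM rule with a sliding threshold, tau_w dw/dt = v u (v - theta) and tau_theta dtheta/dt = v^2 - theta, with tau_theta smaller than tau_w (constants illustrative):

    import numpy as np

    rng = np.random.default_rng(3)
    tau_w, tau_theta, dt = 500.0, 50.0, 1.0
    w = rng.uniform(0.0, 0.1, size=3)
    theta = 0.0

    for step in range(2000):
        u = rng.uniform(0.0, 1.0, size=3)           # non-negative input rates
        v = w @ u
        w += (dt / tau_w) * v * u * (v - theta)     # LTP above threshold, LTD below
        theta += (dt / tau_theta) * (v**2 - theta)  # sliding modification threshold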

  12. Synaptic normalization • Limit the sum of the weights or the sum of squared weights • Impose this constraint rigidly, or dynamically • Two examples: • Rigid scheme for the sum-of-weights constraint (subtractive normalization) • Dynamic scheme for the sum of squared weights (multiplicative normalization)

  13. Subtractive normalization • Subtractive normalization ensures that the sum of the weights does not change • It is not clear how to implement this rule biophysically (it is non-local) • We must add the constraint that weights are non-negative
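A sketch of Hebbian learning with subtractive normalization, tau_w dw/dt = v u - v (n . u) n / N_u with n = (1, ..., 1), which keeps the sum of the weights fixed up to the clipping at zero (constants illustrative):

    import numpy as np

    rng = np.random.default_rng(4)
    N_u, tau_w, dt = 4, 100.0, 1.0
    w = rng.uniform(0.0, 1.0, size=N_u)
    n = np.ones(N_u)

    for step in range(1000):
        u = rng.uniform(0.0, 1.0, size=N_u)
        v = w @ u
        dw = v * u - v * (n @ u) * n / N_u          # subtract the mean Hebbian drive
        w = np.maximum(w + (dt / tau_w) * dw, 0.0)  # keep the weights non-negative

    print("sum of weights:", w.sum())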

  14. Multiplicative normalization • Oja rule (1982) • The rule implements the sum-of-squared-weights constraint dynamically
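A sketch of the Oja rule, tau_w dw/dt = v u - alpha v^2 w, which drives |w|^2 towards 1/alpha (alpha = 1 here; all constants illustrative):

    import numpy as np

    rng = np.random.default_rng(5)
    tau_w, dt, alpha = 100.0, 1.0, 1.0
    w = rng.normal(scale=0.1, size=3)

    for step in range(5000):
        u = rng.normal(size=3)
        v = w @ u
        w += (dt / tau_w) * (v * u - alpha * v**2 * w)  # Hebbian term + dynamic decay

    print("squared norm of w:", w @ w)   # approaches 1/alpha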

  15. Unsupervised learning • Adapting the network to a set of tasks • Neural selectivity, receptive fields • Cortical maps • The process depends partly on neural activity and partly on activity-independent mechanisms (axon growth) • Ocular dominance • Adult neurons favor one eye over the other (layer 4 input from LGN) • Neurons are clustered in bands or stripes

  16. Single post-synaptic neuron • We analyze Eq. 8.5, the averaged Hebb rule tau_w dw/dt = Q w

  17. Single post-synaptic neuron • Solution in terms of the eigenvalues of Q • The eigenvalues are positive, so the solution explodes • Asymptotically w is dominated by e1, the principal eigendirection • The neuron projects its input onto this direction: v is proportional to e1 . u
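A sketch of this analysis (illustrative input statistics): the averaged Hebb rule tau_w dw/dt = Q w asymptotically aligns w with the principal eigenvector e1 of the input correlation matrix Q:

    import numpy as np

    rng = np.random.default_rng(6)
    U = rng.normal(size=(10000, 3)) @ np.diag([2.0, 1.0, 0.5])  # correlated inputs
    Q = U.T @ U / len(U)                                        # correlation matrix

    lam, E = np.linalg.eigh(Q)
    e1 = E[:, np.argmax(lam)]                                   # principal eigendirection

    # Integrate the averaged Hebb rule and compare the final direction with e1.
    w, tau_w, dt = rng.normal(scale=0.1, size=3), 100.0, 1.0
    for _ in range(2000):
        w += (dt / tau_w) * Q @ w
    print("|cos angle(w, e1)| =", abs(w @ e1) / np.linalg.norm(w))  # close to 1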

  18. Single post-synaptic neuron • Example with two weights • The weights grow indefinitely, one positive and one negative; the choice depends on the initial conditions • Limiting the weights to [0, 1] yields different solutions depending on the initial value

  19. Single post-synaptic neuron • Subtractive normalization, averaging over inputs • Analysis in terms of eigenvectors • In ocular dominance e1 = n / sqrt(N_u), with n = (1, ..., 1) • For w in the direction of e1 the right-hand side is zero, i.e. this component of w is unaltered • In the other eigendirections the normalizing term is zero • w is asymptotically dominated by the second eigenvector

  20. Hebbian development of ocular dominance • Unnormalized Hebbian growth follows e1, which strengthens both eyes together • Subtractive normalization may solve this: since e1 is proportional to n, its growth is cancelled and the weight difference grows along e2 = (1, -1), producing ocular dominance (see the sketch below)
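A sketch of the two-eye model with subtractive normalization and weights limited to [0, 1]; the same-eye and between-eye correlations q_s and q_d are illustrative numbers:

    import numpy as np

    q_s, q_d = 1.0, 0.4                       # same-eye and between-eye correlations
    Q = np.array([[q_s, q_d], [q_d, q_s]])    # two inputs, one per eye
    n = np.ones(2)

    w = np.array([0.50, 0.52])                # nearly symmetric initial weights
    tau_w, dt = 100.0, 1.0
    for _ in range(5000):
        dw = Q @ w - (n @ Q @ w) / 2 * n      # Hebbian drive, subtractively normalized
        w = np.clip(w + (dt / tau_w) * dw, 0.0, 1.0)

    print("final weights:", w)                # one eye wins, the other is pushed to zero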

  21. Single post-synaptic neuron • Using the Oja rule • Each eigenvector of Q is a fixed-point solution (exercise) • One can show that only the principal eigenvector is stable

  22. Single post-synaptic neuron • A: behavior of unnormalized Hebbian learning • Multiplicative normalization (Oja rule) gives w proportional to e1; this is similar to PCA • B: shifting the mean of u may yield a different solution • C: covariance-based learning corrects for the mean (see the sketch below) • Saturation constraints may alter this conclusion
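A sketch of the point made in B and C (illustrative input statistics): with a nonzero input mean, the principal eigenvector of the correlation matrix Q tilts towards the mean, while the covariance matrix C recovers the mean-corrected direction:

    import numpy as np

    rng = np.random.default_rng(7)
    u = rng.multivariate_normal(mean=[3.0, 0.0],
                                cov=[[1.0, 0.9], [0.9, 1.0]], size=20000)

    Q = u.T @ u / len(u)    # correlation-based learning matrix
    C = np.cov(u.T)         # covariance-based learning matrix

    for name, M in [("Q", Q), ("C", C)]:
        lam, E = np.linalg.eigh(M)
        print(name, "principal direction:", E[:, np.argmax(lam)])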

  23. Hebbian development of ocular dominance • Model a layer 4 cell with input from two LGN cells, each associated with a different eye

  24. Hebbian development of orientation selectivity • Spectral analysis is also applicable to non-linear systems • The dominant eigenvector is uniform; non-uniform receptive fields result from sub-dominant eigenvectors • Cortical receptive fields are built from LGN input: ON-center (white) and OFF-center (black) cells excite the cortical neuron

  25. Multiple postsynaptic neurons

  26. Hebbian development of ocular dominance stripes • A: model in which right- and left-eye inputs drive an array of cortical neurons • B: ocular dominance maps. Top: light and dark areas show ocular dominance bands in cat primary visual cortex. Bottom: model of 512 neurons with Hebbian learning

  27. Hebbian development of ocular dominance stripes • Use Eq. 8.31 with W = (w+, w-), an N x 2 matrix (see book) • Under subtractive normalization the sum w+ + w- does not change; the dynamics act on the difference w+ - w- • The ocular dominance pattern is given by the largest eigenvector of K

  28. Hebbian development of ocular dominance stripes • Suppose K is translation invariant • Periodic boundary conditions simulate a patch of cortex while ignoring boundary effects • The eigenvectors are sine and cosine functions of cortical position; the eigenvalues are the Fourier components of K • The solution of the learning equation is spatially periodic (viz. fig 8.7), i.e. ocular dominance stripes
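A sketch of this spectral argument, using an illustrative difference-of-Gaussians interaction K on a ring of cortical positions; the eigenvalues are obtained as the Fourier transform of K, and the dominant mode sets the stripe period:

    import numpy as np

    N = 512
    x = np.minimum(np.arange(N), N - np.arange(N))   # circular distances between neurons
    k = np.exp(-x**2 / (2 * 5.0**2)) - 0.5 * np.exp(-x**2 / (2 * 15.0**2))

    lam = np.real(np.fft.fft(k))                     # eigenvalues = Fourier components of K
    k_star = np.argmax(lam[1:N // 2]) + 1            # dominant nonzero spatial frequency
    print("stripe period (in neurons):", N / k_star)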

  29. Feature based models • Multi-dimensional input (retinal location, ocular dominance, orientation preference, ...) • Replace input neurons by input features; W_ab is the selectivity of neuron a to feature b • Feature u1 is the location on the retina, in coordinates • Feature u2 is ocularity (how much the stimulus favors the left eye over the right), a single number • The couplings to neuron a describe its preferred stimulus • The activity of output a is determined by how well the stimulus matches this preferred stimulus

  30. Feature based models • The output is a soft-max over the feature match • Combined with lateral averaging (see the sketch below) • Self-organizing map (SOM) • Elastic net
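A sketch of a feature-based (SOM-like) activation, assuming NumPy; each cortical unit a has a preferred feature vector W[a], the output is a soft-max over the match to the stimulus, and a lateral kernel averages over neighbours (all constants illustrative):

    import numpy as np

    rng = np.random.default_rng(8)
    N, D = 50, 3                          # cortical units, feature dimensions
    W = rng.normal(size=(N, D))           # preferred stimuli (e.g. retinal x, y, ocularity)
    u = rng.normal(size=D)                # current stimulus features
    beta = 4.0

    match = -np.sum((W - u)**2, axis=1)            # similarity of stimulus to each unit
    v = np.exp(beta * (match - match.max()))
    v /= v.sum()                                   # soft-max competition

    pos = np.arange(N)
    lateral = np.exp(-(pos[:, None] - pos[None, :])**2 / (2 * 3.0**2))
    v = lateral @ v                                # lateral averaging across neighbours
    print("most active unit:", np.argmax(v))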

  31. Feature based models • Optical imaging shows ocular dominance and orientation selectivity in macaque primary visual cortex • Dark lines are ocular dominance boundaries; light lines are iso-orientation contours • Note the pinwheel singularities and linear zones

  32. Feature based models • Elastic net output • SOM, competitive Hebbian rules can produce similar output

  33. Anti-Hebbian modification • Another way to make different outputs specialize is by adaptive anti-Hebbian modification • Consider the Oja rule applied to several outputs: each output a will become identical • Anti-Hebbian modification is observed at synapses from parallel fibers to Purkinje cells in the cerebellum • Combining Hebbian and anti-Hebbian terms yields different eigenvectors as outputs (see the sketch below)
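As a concrete (swapped-in) example of this idea, the generalized Hebbian rule of Sanger (1989) adds an anti-Hebbian-like correction in which each output subtracts what earlier outputs already explain, so the outputs converge to different eigenvectors (constants illustrative):

    import numpy as np

    rng = np.random.default_rng(12)
    D, K, eta = 3, 2, 0.002
    W = rng.normal(scale=0.1, size=(K, D))   # feedforward weights, one row per output

    for _ in range(50000):
        u = rng.normal(size=D) * np.array([2.0, 1.0, 0.5])   # anisotropic inputs
        v = W @ u
        # Hebbian term minus the anti-Hebbian-like correction from earlier outputs.
        W += eta * (np.outer(v, u) - np.tril(np.outer(v, v)) @ W)

    print(W)   # rows approximate the first two eigenvectors of <u u^T>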

  34. Timing based rules • Left: in vitro cortical slice. Right: in vivo Xenopus tadpoles • LTP occurs when the pre-synaptic spike precedes the post-synaptic spike • LTD occurs when the pre-synaptic spike follows the post-synaptic spike

  35. Timing based rules • Simulating spike-timing plasticity in detail requires spiking neurons • An approximate description uses firing rates and a temporal window H(t) • H(t) is positive for positive t (pre before post) and negative for negative t (post before pre)
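A sketch of this rate-based approximation: the weight change integrates pre- and post-synaptic rates against a window H, positive when pre leads post (LTP) and negative when post leads pre (LTD); window shapes and rates are illustrative:

    import numpy as np

    dt, tau_plus, tau_minus = 1.0, 20.0, 20.0
    lags = np.arange(1, 100)
    H_plus = np.exp(-lags * dt / tau_plus)      # H(tau) > 0: pre before post -> LTP
    H_minus = -np.exp(-lags * dt / tau_minus)   # H(-tau) < 0: post before pre -> LTD

    rng = np.random.default_rng(9)
    T = 5000
    u = rng.uniform(size=T)                     # presynaptic rate
    v = np.roll(u, 5)                           # postsynaptic rate lags the input

    dw = 0.0
    for t in range(len(lags), T):
        dw += dt * (v[t] * np.sum(H_plus * u[t - lags])      # pre before post
                    + u[t] * np.sum(H_minus * v[t - lags]))  # post before pre
    print("net weight change:", dw)             # positive: pre tends to lead post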

  36. Timing based plasticity and prediction • Consider an array of neurons labeled by a, with receptive fields f_a(s) (dashed and solid curves) • A timing-based learning rule is applied while the stimulus s moves from left to right

  37. Timing based plasticity and prediction • If a is to the left of b, then the link from a to b is strengthened and the link from b to a is weakened; the receptive field of neuron a is asymmetrically deformed (A, solid bold line) • Prediction: the next presentation of s(t) will activate a earlier • This agrees with the shift of the place-field mean when rats run around a track (B)

  38. Supervised Hebbian learning • Hebbian learning with the desired output as teacher, plus weight decay: tau_w dw/dt = <v u> - alpha w • The asymptotic solution is w = <v u> / alpha
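A sketch of supervised Hebbian learning with weight decay, tau_w dw/dt = <v u> - alpha w, where v is the teacher output paired with input u; the weights relax to w = <v u> / alpha (data and constants illustrative):

    import numpy as np

    rng = np.random.default_rng(10)
    U = rng.normal(size=(500, 3))                  # input patterns
    v_target = U @ np.array([1.0, -0.5, 0.2])      # teacher outputs for each pattern

    alpha, tau_w, dt = 1.0, 100.0, 1.0
    vu = (v_target[:, None] * U).mean(axis=0)      # <v u> over the training set
    w = np.zeros(3)
    for _ in range(2000):
        w += (dt / tau_w) * (vu - alpha * w)       # decay balances the Hebbian drive

    print("w:", w)
    print("asymptote <v u> / alpha:", vu / alpha)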

  39. Classification and the Perceptron • If the output values are +/- 1 the model implements a classifier, called the perceptron: v = +1 if w . u - gamma >= 0, and v = -1 otherwise • The weight vector defines a separating hyperplane w . u = gamma • The perceptron can solve problems that are 'linearly separable'
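A sketch of the perceptron on a linearly separable toy problem, assuming NumPy; the data, labels, and learning rate are illustrative, and training uses the classic update-on-mistake rule:

    import numpy as np

    rng = np.random.default_rng(11)
    U = rng.normal(size=(200, 2))
    labels = np.where(U @ np.array([1.0, 2.0]) > 0.5, 1, -1)   # separable targets

    w, gamma, eta = np.zeros(2), 0.0, 0.1
    for _ in range(50):                              # a few passes over the data
        for u, target in zip(U, labels):
            out = 1 if w @ u - gamma > 0 else -1     # output on the +/- 1 scale
            if out != target:                        # update only on mistakes
                w += eta * target * u
                gamma -= eta * target

    errors = np.sum(np.sign(U @ w - gamma) != labels)
    print("training errors:", errors)                # zero once a separating plane is found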
