
PSY105 Neural Networks 4/5




Presentation Transcript


  1. PSY105 Neural Networks 4/5 4. “Traces in time” Assignment note: you don't need to read the full book to answer the first half of the question. You should be able to answer it based on chapter 1 and the lecture notes.

  2. Lecture 1 recap • We can describe patterns at one level of description that emerge due to rules followed at a lower level of description. • Neural network modellers hope that we can understand behaviour by creating models of networks of artificial neurons.

  3. Lecture 2 recap • Simple model neurons • Transmit a signal of 0 or 1 (or any value in between) • Receive information from other neurons • Weight this information • Can be used to perform any computation
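
A minimal sketch (not from the lecture) of the kind of simple model neuron described on slide 3: inputs are weighted, summed, and squashed so the output lies between 0 and 1. The function name and the logistic squashing choice are illustrative assumptions.

    import math

    def model_neuron(inputs, weights):
        """Toy model neuron: weight each input, sum, and squash the result into the range 0-1."""
        net = sum(x * w for x, w in zip(inputs, weights))   # weighted sum of incoming signals
        return 1.0 / (1.0 + math.exp(-net))                 # logistic squash keeps the output between 0 and 1

    # Example: two input neurons, one with an excitatory and one with an inhibitory weight
    print(model_neuron([1.0, 0.5], [2.0, -1.0]))            # ~0.82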

  4. Lecture 3 recap • Classical conditioning is a simple form of learning which can be understood as an increase in the weight (‘associative strength’) between two stimuli (one of which is associated with an ‘unconditioned response’)

  5. Nota Bene • Our discussion of classical conditioning has involved • A behaviour: learning to associate a response with a stimulus • A mechanism: neurons which transmit signals • These are related by…a rule or algorithm

  6. Learning Rules “When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.” Hebb, D.O. (1949), The organization of behavior, New York: Wiley

  7. Operationalising the Hebb Rule • Turn… “When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.” • …into a simple equation which is a rule for changing weights according to inputs and outputs

  8. A Hebb Rule • Δweight = activity A × activity B × learning rate constant • In words: increase the weight in proportion to the activity of neuron A multiplied by the activity of neuron B
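
The equation on slide 8 translates directly into a one-line weight update. A minimal sketch, assuming activities between 0 and 1 and an arbitrary learning-rate constant of 0.1; the function name is illustrative.

    def hebb_update(weight, activity_a, activity_b, learning_rate=0.1):
        """Hebb rule: grow the weight in proportion to the product of the two activities."""
        delta = activity_a * activity_b * learning_rate   # Δweight = activity A × activity B × learning rate
        return weight + delta

    w = 0.0
    w = hebb_update(w, 1.0, 1.0)   # both units active together -> w becomes 0.1
    w = hebb_update(w, 1.0, 0.0)   # unit B silent -> no change, w stays 0.1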

  9. [Figure: a stimulus plotted against time, labelled Stimulus Off / Stimulus On]

  10. Implications of this rule [Diagram: Stimulus 1 feeds a unit labelled CS1 and Stimulus 2 feeds a unit labelled UCS, with a connection marked ‘?’ between them]

  11. Implications of this rule [Diagram: as above]

  12. Implications of this rule [Diagram: as above, with the CS1-UCS connection now labelled ‘weight’]

  13. Implications of this rule [Diagram: as above, with CS1's output labelled Activity A and UCS's output labelled Activity B]

  14. Implications of this rule [Diagram: as above, with the weight change computed as Activity A × Activity B × 0.1]
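
As slides 10-14 suggest, applying this update while the CS and UCS are presented together makes the CS1-UCS weight grow trial by trial. A minimal worked example, assuming both activities are 1 and the learning rate of 0.1 shown on slide 14:

    weight, learning_rate = 0.0, 0.1
    for trial in range(5):
        # CS1 and UCS presented together: Activity A = Activity B = 1 on each pairing
        weight += 1.0 * 1.0 * learning_rate
        print(f"pairing {trial + 1}: weight = {weight:.1f}")   # 0.1, 0.2, ... 0.5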

  15. The most successful model of classical conditioning is the Rescorla-Wagner model • Accounts for the effects of combinations of stimuli in learning S-S links • Based on the discrepancy between what is expected to happen and what actually happens • But… it deals only with discrete trials, i.e. it has no model of time. Rescorla, R. A. & Wagner, A. R. (1972). A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In A. H. Black & W. F. Prokasy (Eds.), Classical Conditioning II: Current Research and Theory (pp. 64-99). New York: Appleton-Century-Crofts. Rescorla, R. (2008). Rescorla-Wagner model. Scholarpedia, 3(3):2237.
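
For comparison, a minimal sketch of the trial-by-trial Rescorla-Wagner update described in the references above: every CS present on a trial is adjusted by the same prediction error, the gap between the strength the UCS supports (λ) and the summed expectation from all CSs present. The parameter names (α for CS salience, β for UCS learning rate) follow the standard presentation; the function itself is illustrative.

    def rescorla_wagner_trial(strengths, present, lam, alpha=0.5, beta=0.5):
        """One discrete trial: update every CS that is present by the shared prediction error."""
        expected = sum(v for v, p in zip(strengths, present) if p)   # total prediction from the CSs present
        error = lam - expected                                       # discrepancy between what happens and what was expected
        return [v + alpha * beta * error if p else v                 # absent CSs are left unchanged
                for v, p in zip(strengths, present)]

    # Two CSs conditioned in compound with the UCS (lam = 1.0): they share the available associative strength
    V = [0.0, 0.0]
    for _ in range(10):
        V = rescorla_wagner_trial(V, present=[True, True], lam=1.0)
    print(V)   # each approaches 0.5, their sum approaching lam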

  16. The problem of continuous time [Figure: Stimulus 1 and Stimulus 2 plotted against time]

  17. The problem of continuous time [Figure: as above]

  18. The problem of continuous time [Figure: as above]

  19. The problem of continuous time [Figure: Activity A present for Stimulus 1 while Activity B = 0]

  20. The problem of continuous time [Figure: as above]

  21. The problem of continuous time [Figure: Activity B present for Stimulus 2 while Activity A = 0]
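
The point of slides 16-21 can be checked directly: if the two stimuli never overlap in time, the product Activity A × Activity B is zero at every moment, so the Hebb rule produces no learning. A minimal sketch with illustrative 0/1 activity series:

    # Activity of each unit at successive time steps (1 = stimulus on, 0 = off)
    activity_a = [1, 1, 0, 0, 0, 0]   # Stimulus 1 occurs early
    activity_b = [0, 0, 0, 1, 1, 0]   # Stimulus 2 occurs later, never at the same time

    weight = 0.0
    for a, b in zip(activity_a, activity_b):
        weight += a * b * 0.1          # the product is always zero, so the weight never changes
    print(weight)                      # 0.0 -> no association is learned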

  22. We need to add something to our model to deal with a learning mechanism that is always “on”

  23. Traces [Figure: Stimulus 1 plotted against time]

  24. Traces [Figure: as above]

  25. Traces [Figure: Stimulus 1 and Stimulus 2 plotted against time]

  26. Traces [Figure: as above, with Activity A and Activity B marked for the two stimuli]

  27. Traces [Figure: Stimulus 1 and Stimulus 2 plotted against time]
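
One way to read slides 23-27 in code, as a sketch under the assumption that each unit's trace decays exponentially after stimulus offset (the decay constant 0.7 is arbitrary): feeding decaying traces, rather than the raw stimuli, into the Hebb rule lets two stimuli that are separated in time still overlap and produce learning.

    def traces(stimulus, decay=0.7):
        """Turn an on/off stimulus series into a trace that decays gradually after stimulus offset."""
        trace, out = 0.0, []
        for s in stimulus:
            trace = max(s, trace * decay)   # jump up while the stimulus is on, decay afterwards
            out.append(trace)
        return out

    trace_a = traces([1, 1, 0, 0, 0, 0])    # Stimulus 1 early; its trace lingers
    trace_b = traces([0, 0, 0, 1, 1, 0])    # Stimulus 2 later

    weight = 0.0
    for a, b in zip(trace_a, trace_b):
        weight += a * b * 0.1               # the lingering trace of Stimulus 1 now overlaps Stimulus 2
    print(round(weight, 3))                 # non-zero, unlike the no-trace case above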

  28. Consequences of this implementation • Size of CS stimulus • Duration of CS stimulus • Size of UCS stimulus • Duration of UCS stimulus • Separation in time of CS and UCS • The order in which the CS and UCS occur • (cf. Rescorla-Wagner discrete time model)
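
As one illustration of the "separation in time" factor in this list, the trace scheme sketched above gives a smaller total weight change as the gap between CS and UCS grows. A minimal check (the stimulus lengths, decay constant, and gaps are arbitrary choices):

    def traces(stimulus, decay=0.7):
        # Same decaying-trace scheme as the sketch above
        trace, out = 0.0, []
        for s in stimulus:
            trace = max(s, trace * decay)
            out.append(trace)
        return out

    def total_learning(gap, length=12, learning_rate=0.1):
        """Total weight change from a 2-step CS followed, 'gap' steps later, by a 2-step UCS."""
        cs  = [1, 1] + [0] * (length - 2)
        ucs = [0] * (2 + gap) + [1, 1] + [0] * (length - 4 - gap)
        return sum(a * b * learning_rate for a, b in zip(traces(cs), traces(ucs)))

    for gap in (0, 2, 4):
        print(gap, round(total_learning(gap), 3))   # the weight change shrinks as the gap grows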
