
From Biological to Artificial Neural Networks



Presentation Transcript


  1. From Biological to Artificial Neural Networks. NPCDS/MITACS Spring School, Montreal, May 23-27, 2006. Helmut Kroger, Laval University

  2. Outline: 1. From biology to artificial NNs 2. Perceptron – supervised learning 3. Hopfield model – associative memory 4. Kohonen map – self-organization

  3. References: Hertz J., Krogh A., and Palmer R.G., Introduction to the Theory of Neural Computation. Duda R.O., Hart P.E., and Stork D.G., Pattern Classification, John Wiley, 2001. Haykin S. (1999), Neural Networks, Prentice Hall International. Bishop C. (1995), Neural Networks for Pattern Recognition, Oxford: Clarendon Press.

  4. Ripley B.D., Pattern Recognition and Neural Networks, Cambridge University Press, Jan 1996. Mueller B. and Reinhardt J., Neural Networks: An Introduction, Springer-Verlag, Berlin, 1991. McCulloch W.S. and Pitts W. (1943), "A logical calculus of the ideas immanent in nervous activity", Bulletin of Mathematical Biophysics, 5, 115-137.

  5. Use of NNs: neural networks are used for applications (character recognition, optimization, data mining, …) and in science (neuroscience, physics, mathematics, computer science, …).

  6. Biological neural networks. Nerve cells are called neurons. Many different types exist. Neurons are extremely complex. • Approx. 10^11 neurons in the brain. • Each neuron has about 10^3 connections.

  7. Neural communication. Neurons transport information via electrical action potentials. At the synapse the transmission is mediated by chemical messengers (neurotransmitters).

  8. How two neurons communicate

  9. Biology of neurons: • Single neurons are highly complex electrochemical devices. • Many forms of inter-neuron communication are now known, acting over many different spatial and temporal scales: local, gaseous, volume signalling, etc.

  10. The neuron is a computer. [Figure: inputs arrive at the dendrites, are integrated at the axon hillock, and the output leaves along the axon.]

  11. From the biology of the brain to information processing. • (1) Information is distributed. • (2) The brain works in a deterministic mode as well as in a stochastic mode (generation of nerve signals, synaptic transmission). • (3) Associative memory: retrieval not by checking bit by bit, but by gradual reproduction of the whole.

  12. (4) Architecture of brain: - optimizes information transmission. - stable against errors - physical constraints: oxygen, blood, energy, cooling. - Small World Architecture. • (5) 1/f frequency scaling in EEG: fractals, self-similarity. Dynamical origin: Model of self-organized criticality (sand pile analogue). Neural avalanches – do they transmit information? Does the brain work at some critical point? • (6) How is neural connectivity generated? Nerve growth: random process, guided by chemical markers, enhanced by stochastic resonance.

  13. Layered structure of neocortex

  14. Organization of neo-cortex

  15. Biological parameters of neocortex

  16. Artificial Neural Networks: a network of interacting units, an attempt to mimic the brain. • Unit elements are artificial neurons (linear or nonlinear input-output units). • Communication is encoded by weights, a measure of how strongly neurons affect each other. • Architectures can be feed-forward, feedback or recurrent.

  17. Firing of a group of neurons: biology vs. model. [Figure: inputs x_1 … x_n with synaptic strengths (weights) w_1 … w_n.]

  18. 1. Multi-layer feedforward network

  19. Biological motivation for the multi-layer feed-forward network: modeled after the visual cortex, which is multi-layered (6 layers) and dominantly feedforward. There is strong lateral inhibition: neurons in the same layer don't talk to each other. High connectivity between neurons (10^4 connections per neuron) provides the basis for massively parallel computing and high redundancy against errors.

  20. The principal building blocks: • Input vector x_k • Weight matrix w_ik • Activation function f • Output vector y_i • Dynamical update rule: y_i = f(Σ_k w_ik x_k) • Goal: find weights and an activation function such that for a given input the output is close to the desired output (training supervised by a teacher).
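As a minimal sketch of these building blocks, a single feed-forward layer can be written as follows; the array sizes, the logistic activation and all variable names are illustrative assumptions, not taken from the slides:

```python
import numpy as np

def layer_output(x, W, b, f):
    """Compute y_i = f(sum_k w_ik * x_k + b_i) for one feed-forward layer."""
    return f(W @ x + b)

# Illustrative example: 3 inputs, 2 outputs, logistic activation
rng = np.random.default_rng(0)
x = np.array([0.5, -1.0, 2.0])         # input vector x_k
W = rng.normal(size=(2, 3))            # weight matrix w_ik
b = np.zeros(2)                        # bias terms
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
print(layer_output(x, W, b, sigmoid))  # output vector y_i
```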

  21. Examples of activation functions
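The slide itself is a figure; the sketch below implements three activation functions commonly used in this setting (threshold step, logistic sigmoid, tanh), assuming NumPy is available:

```python
import numpy as np

def heaviside(a):
    """Step (threshold) activation: -1 below threshold 0, +1 at or above."""
    return np.where(a >= 0, 1.0, -1.0)

def sigmoid(a):
    """Logistic sigmoid: smooth, bounded in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-a))

def tanh(a):
    """Hyperbolic tangent: smooth, bounded in (-1, 1)."""
    return np.tanh(a)

a = np.linspace(-3, 3, 7)
print(heaviside(a), sigmoid(a), tanh(a), sep="\n")
```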

  22. The Perceptron (1962): a simple feed-forward network with 1 input layer and 1 output layer. • The simplest possible artificial neural network able to do a calculation consists of two inputs and one output. It can be used to classify patterns or to perform basic logical operations such as AND, OR and NOT. • Example, logical AND: with weights 1 and 1 on the two inputs and a threshold of 1.5, the output is Θ(x + y - 1.5).

Truth table for logical AND:
x  y  x & y
0  0  0
0  1  0
1  0  0
1  1  1
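A minimal sketch of the AND unit described above, with both weights set to 1 and the threshold at 1.5:

```python
def and_perceptron(x, y):
    """Perceptron for logical AND: fires only when x + y exceeds the 1.5 threshold."""
    weighted_sum = 1.0 * x + 1.0 * y - 1.5   # weights (1, 1), threshold 1.5
    return 1 if weighted_sum >= 0 else 0

for x in (0, 1):
    for y in (0, 1):
        print(x, y, and_perceptron(x, y))    # reproduces the AND truth table
```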

  23. Perceptron learning algorithm. Rule: if the sum of the weighted inputs exceeds a threshold, output 1, else output -1 (output = 1 if Σ_i input_i * weight_i > threshold, and -1 if Σ_i input_i * weight_i < threshold). Training any unit consists of adjusting the weight vector and threshold so that the desired classification is performed. Algorithm (inspired by Hebb's rule): for each training vector x evaluate the output y and take the difference between the desired output t and the current output y; update w_i = w_i + Δw_i with Δw_i = η (t - y) x_i. Repeat until y = t for all training vectors.
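A sketch of this training loop; the learning rate η, the number of epochs and the training data below are illustrative choices:

```python
import numpy as np

def train_perceptron(X, t, eta=0.1, epochs=100):
    """Perceptron learning rule: w_i <- w_i + eta * (t - y) * x_i (bias folded into w)."""
    X = np.hstack([X, np.ones((len(X), 1))])      # constant input of 1 for the bias weight
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        errors = 0
        for x, target in zip(X, t):
            y = 1 if w @ x >= 0 else -1           # thresholded output
            if y != target:
                w += eta * (target - y) * x       # adjust weights toward the target
                errors += 1
        if errors == 0:                           # repeat until y = t on every pattern
            break
    return w

# Illustrative linearly separable data: logical AND with +/-1 targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
t = np.array([-1, -1, -1, 1])
print(train_perceptron(X, t))
```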

  24. Interpretation of weights. The Heaviside function has its threshold at a = 0, so the decision boundary is given by a = w·x + w_0 = w_0 + w_1 x_1 + w_2 x_2 = 0, thus x_2 = -(w_0 + w_1 x_1)/w_2.
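The boundary can be read off directly from the weights; a small sketch, reusing the AND unit's weights as an illustrative example:

```python
import numpy as np

def decision_boundary_x2(x1, w0, w1, w2):
    """Solve w0 + w1*x1 + w2*x2 = 0 for x2, i.e. x2 = -(w0 + w1*x1) / w2."""
    return -(w0 + w1 * x1) / w2

# Illustrative weights: the AND unit from the earlier slide (w0 = -1.5, w1 = w2 = 1)
x1 = np.linspace(0, 2, 5)
print(decision_boundary_x2(x1, w0=-1.5, w1=1.0, w2=1.0))
```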

  25. Look-up table: OR function, XOR function

  26. Linear discriminant function: separation of 2 classes. A linear discriminant function is a mapping which partitions feature space using a linear function (a straight line or hyperplane). In D = 2 dimensions the decision boundary is a straight line. This gives a simple form of classifier: separate two classes using a straight line in feature space.

  27. Separation of K classes. • The perceptron can be used to discriminate between K classes by having K output nodes, with the weight to output j from input k denoted w_jk and y_j = g(Σ_k w_jk x_k + w_j0). • x is in class C_j if y_j(x) >= y_k for all k. • The resulting decision boundaries divide the feature space into convex decision regions.
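A sketch of this K-class rule; the weight values are illustrative, and g is taken as the identity, which does not change the argmax decision:

```python
import numpy as np

def classify(x, W, w0):
    """Assign x to class C_j with the largest output y_j = W_j . x + w_j0."""
    y = W @ x + w0            # one linear output per class
    return int(np.argmax(y))  # x is in class C_j if y_j >= y_k for all k

# Illustrative 3-class discriminant in a 2-D feature space
W = np.array([[ 1.0,  0.0],
              [-1.0,  1.0],
              [ 0.0, -1.0]])
w0 = np.array([0.0, 0.5, -0.5])
print(classify(np.array([2.0, 0.5]), W, w0))   # -> index of the winning class
```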

  28. Network training via gradient descent. A set of training data from known classes is used in conjunction with an error function E(w) (e.g. the squared difference between target t and response y) which must be minimized. Then: w_new = w_old - η ∇E(w), where ∇E(w) is a vector representing the gradient and η is the learning rate (small, positive). 1. Move downhill in the direction -∇E(w) (steepest descent, since ∇E(w) is the direction of steepest increase). 2. Termination is controlled by a stopping criterion (e.g. a tolerance on the change in E(w)).
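A sketch of the update w_new = w_old - η ∇E(w) for a squared-error function; the synthetic data, learning rate and step count are illustrative:

```python
import numpy as np

def gradient_descent(X, t, eta=0.5, steps=1000):
    """Minimize E(w) = 0.5 * mean((t - X w)^2) by steepest descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        y = X @ w                           # current responses
        grad = -X.T @ (t - y) / len(t)      # gradient of the mean squared error
        w = w - eta * grad                  # move downhill: w_new = w_old - eta * grad E(w)
    return w

# Illustrative data: targets generated by a known linear rule plus noise
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
t = X @ np.array([2.0, -1.0]) + 0.1 * rng.normal(size=50)
print(gradient_descent(X, t))   # should be close to [2.0, -1.0]
```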

  29. Moving along the error-function landscape. • Equivalent to walking up and down hills. • Problem: when to stop? • Local minima: there may be multiple local minima. Note: for the single-layer perceptron, E(w) has only a single global minimum, so this is no problem! • Gradient descent goes to the closest local minimum. • General solution: random restarts from multiple places in weight space, or simulated annealing.

  30. Training a multi-layer NN via the back-propagation algorithm

  31. Back propagation cont’d
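Since slides 30-31 present the algorithm graphically, here is a compact sketch of back-propagation for one hidden sigmoid layer trained with squared error; the network size, learning rate and XOR task are illustrative choices, not taken from the slides:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def add_bias(A):
    """Append a constant input of 1 so the bias is just another weight."""
    return np.hstack([A, np.ones((A.shape[0], 1))])

def train_mlp(X, T, n_hidden=3, eta=0.5, epochs=20000, seed=0):
    """Back-propagation for a network with one hidden sigmoid layer (squared error)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1] + 1, n_hidden))  # input (+bias) -> hidden
    W2 = rng.normal(scale=0.5, size=(n_hidden + 1, T.shape[1]))  # hidden (+bias) -> output
    Xb = add_bias(X)
    for _ in range(epochs):
        # Forward pass
        H = sigmoid(Xb @ W1)
        Hb = add_bias(H)
        Y = sigmoid(Hb @ W2)
        # Backward pass: propagate the output error back through the layers
        delta_out = (Y - T) * Y * (1 - Y)                    # error term at the outputs
        delta_hid = (delta_out @ W2[:-1].T) * H * (1 - H)    # error term at the hidden units
        # Gradient-descent weight updates
        W2 -= eta * Hb.T @ delta_out
        W1 -= eta * Xb.T @ delta_hid
    return W1, W2

# Illustrative task: XOR, which a single-layer perceptron cannot represent
# (convergence can depend on the random seed and number of epochs)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([[0], [1], [1], [0]], float)
W1, W2 = train_mlp(X, T)
print(np.round(sigmoid(add_bias(sigmoid(add_bias(X) @ W1)) @ W2), 2))
```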

  32. Perceptron as classifier. For d-dimensional data the perceptron consists of d weights, a bias and a thresholding activation function. For 2-D data we have a = w_0 + w_1 x_1 + w_2 x_2 and y = g(a) ∈ {-1, +1}; the output is the class decision. 1. Form the weighted sum of the inputs. 2. Pass it through the Heaviside function: g(a) = -1 if a < 0, g(a) = +1 if a >= 0. View the bias as another weight from an input which is constantly on. If we group the weights as a vector w, the net output y is given by y = g(w · x + w_0).

  33. Failure of the Perceptron. [Figure: successful and unsuccessful footballers and academics plotted against few vs. many hours in the gym per week.] Despite the simplicity of the relationship (Academic = Successful XOR Gym), a perceptron is not able to discriminate between the footballers and the academics: XOR cannot be represented by a single threshold node. This failure caused the majority of researchers to walk away.

  34. Classification: decision trees. [Figure: a decision tree that first splits on colour (dark vs. light), then on the number of nuclei (1 vs. 2), then on the number of tails (1 vs. 2), with leaves labelled healthy or cancerous.]

  35. Example of classification with neural networks: "Which factors determine if a cell is cancerous?" Features such as colour = dark, # nuclei = 1, …, # tails = 2 are mapped to the classes healthy or cancerous.

  36. Problem: Over-fitting

  37. Feedforward NN: letter recognition from writing on a PC screen. Letter scanner – transformation into ASCII code.

  38. SWN (small-world network) topology: a layered network of 40 neurons (5 neurons per layer, 8 layers). The NN was trained with 40 patterns over 50 different runs. The regular network almost fails to learn; with a few short-cuts the network learns well. The SWN architecture is better than both the regular and the random architecture.

  39. 2. Hopfield NN: model of associative memory

  40. Example: restoring corrupted memory patterns. [Figure: the original letter T, a version with 20% of the pixels corrupted, and a version with half of the pixels corrupted.] Use in search engines (e.g. AltaVista): search from incomplete or corrupted items.

  41. Approximate solution of a combinatorial optimization problem: the Travelling Salesman Problem

  42. Associative memory problem: store a set of patterns ξ^μ (μ = 1, …, p) so that, when presented with a pattern ζ, the network finds the stored pattern ξ^μ closest to ζ.

  43. Hopfield Model. • Dynamical variables: neuron i (i = 1, …, N) is represented by a variable S_i = ±1. • At any time t the network is determined by the state vector {S_i(t), i = 1, …, N}. • Dynamical update rule: S_i(t+1) = sgn(Σ_j w_ij S_j(t)). • The system evolves from a given state to some stable network state (attractor state, fixed point).

  44. Hopfield nets. • Every node is connected to every other node. • Weights are symmetric (w_ij = w_ji) and w_ii = 0. • The flow of information is not unidirectional. • The state of the system is given by the node outputs.

  45. Energy function of the Hopfield net, E = -½ Σ_ij w_ij S_i S_j: a multidimensional landscape whose local minima are the stored patterns (attractors).

  46. Training of Hopfield nets: inspired by the biological Hebb rule (unsupervised). • Present the components of the patterns to be stored at the outputs of the corresponding nodes of the net. • If two nodes have the same value, make a small positive increment to the internode weight; if they have opposite values, make a small negative decrement to the internode weight.
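A compact sketch of Hebbian storage and recall for such a network; the pattern count, network size and corruption level are illustrative choices:

```python
import numpy as np

def store(patterns):
    """Hebb rule: w_ij proportional to the correlation of components i and j over the patterns."""
    N = patterns.shape[1]
    W = patterns.T @ patterns / N
    np.fill_diagonal(W, 0.0)          # w_ii = 0; W is symmetric by construction
    return W

def recall(W, state, sweeps=10, seed=0):
    """A few sweeps of asynchronous updates S_i <- sign(sum_j w_ij S_j)."""
    rng = np.random.default_rng(seed)
    S = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(S)):
            S[i] = 1 if W[i] @ S >= 0 else -1
    return S

# Illustrative run: store two random +/-1 patterns, corrupt one by flipping 20% of its bits
rng = np.random.default_rng(42)
patterns = rng.choice([-1, 1], size=(2, 100))
W = store(patterns)
corrupted = patterns[0].copy()
corrupted[rng.choice(100, size=20, replace=False)] *= -1
restored = recall(W, corrupted)
print("bits recovered:", int(np.sum(restored == patterns[0])), "/ 100")
```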

  47. Distance of NN state from 1st training pattern

  48. Phase diagram of the attractor network. Load: α = (number of patterns) / (number of connections per node).

  49. Character recognition. For a 256 x 256 character we have 65,536 pixels. One input for each pixel is not efficient. Reasons: • Poor generalisation: the data set would have to be vast to properly constrain all the parameters. • It takes a long time to train. • Answer: use averages of N x N blocks of pixels (dimensionality reduction) – each average could be a feature.
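A sketch of the suggested block-averaging reduction; the block size n = 8 is an illustrative choice:

```python
import numpy as np

def block_average(image, n):
    """Reduce an (H, W) image by averaging non-overlapping n x n blocks of pixels."""
    H, W = image.shape
    return image.reshape(H // n, n, W // n, n).mean(axis=(1, 3))

# Illustrative: a 256 x 256 character image reduced to 32 x 32 = 1024 features
image = np.random.default_rng(0).random((256, 256))
features = block_average(image, n=8)
print(image.size, "->", features.size)   # 65536 pixels -> 1024 averaged features
```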

  50. Restoration of memory: regular network
