
Neural Networks


Presentation Transcript


  1. Neural Networks Dr. Peter Phillips

  2. The Human Brain (Recap of week 1)

  3. A Classic Artificial Neuron (Recap cont.) [Diagram: inputs X1, X2, X3 feed through weights W1, W2, W3 into the sum Sj, which passes through the activation function f(Sj) to produce the output]
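The recap diagram can be sketched in a few lines of code. This is a minimal illustration, assuming hypothetical input and weight values and a tanh activation (the slide does not specify which activation f is used):

```python
import numpy as np

def neuron(x, w, f=np.tanh):
    """Classic artificial neuron: weighted sum S_j passed through activation f."""
    s_j = np.dot(w, x)            # S_j = W1*X1 + W2*X2 + W3*X3
    return f(s_j)

# Hypothetical example values for X1..X3 and W1..W3
x = np.array([0.5, -1.0, 0.25])
w = np.array([0.4, 0.3, -0.2])
output = neuron(x, w)
```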

  4. Unsupervised Learning • Today’s lecture will consider the Self-Organising Map (SOM) and unsupervised learning • Recall that supervised learning matches inputs to outputs • Unsupervised learning classifies the data into classes

  5. The Biological Basis for Unsupervised Neural Networks • Major sensory and motor systems are ‘topographically mapped’ in the brain • Vision: retinotopic map • Hearing: tonotopic map • Touch: somatotopic map

  6. Kohonen Self-Organising Maps • The most famous unsupervised learning network is the Kohonen network • A neural-network algorithm using unsupervised competitive learning • Primarily used for the organisation and visualisation of complex data [Photo: Teuvo Kohonen]

  7. Understanding the Data Set • A good understanding of the data set is essential to use a SOM – or any network for that matter • A ‘distance measure’ and/or suitable rescaling must be defined to allow meaningful comparison • The data must be of good quality and must be representative of the application area
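One common way to do the "suitable rescaling" the slide calls for is min–max normalisation, so that features measured on large scales do not dominate the distance measure. A minimal sketch (the slide does not prescribe a specific rescaling method):

```python
import numpy as np

def min_max_rescale(data):
    """Rescale each column (feature) to [0, 1] so that large-scale
    features do not dominate the distance calculation."""
    mins = data.min(axis=0)
    spans = data.max(axis=0) - mins
    spans[spans == 0] = 1.0   # constant features: leave at 0 rather than divide by 0
    return (data - mins) / spans
```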

  8. SOM - Architecture [Diagram: a set of input signals x1, x2, x3, ..., xn is connected to every neuron j in a 2-D lattice via weighted synapses wj1, wj2, wj3, ..., wjn]
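The architecture maps naturally onto a three-dimensional array: one weight per neuron per input signal. A minimal sketch, using a hypothetical 10×10 lattice and 3 inputs (the slide does not fix the lattice size):

```python
import numpy as np

# Hypothetical 10x10 lattice: every neuron j holds one weight w_jn per
# input signal x_n, so the weight array has shape (rows, cols, n_inputs).
rows, cols, n_inputs = 10, 10, 3
rng = np.random.default_rng(seed=42)
weights = rng.random((rows, cols, n_inputs))   # small random initial weights
```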

  9. Finding a Winner (2): Euclidean distance • The Euclidean distance between two vectors a = (a1, a2, …, an) and b = (b1, b2, …, bn) is calculated as: d(a, b) = √((a1 − b1)² + (a2 − b2)² + … + (an − bn)²) i.e. Pythagoras’ Theorem generalised to n dimensions • Other distance measures could be used, e.g. the Manhattan distance
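Both distance measures, and the winner search they support, can be sketched directly. A minimal illustration (the function names and brute-force search are mine, not from the slides):

```python
import numpy as np

def euclidean(a, b):
    return np.sqrt(np.sum((a - b) ** 2))   # Pythagoras in n dimensions

def manhattan(a, b):
    return np.sum(np.abs(a - b))           # sum of absolute differences

def find_winner(weights, x, dist=euclidean):
    """Return the (row, col) of the node whose weight vector is closest to x."""
    rows, cols, _ = weights.shape
    d = np.array([[dist(weights[r, c], x) for c in range(cols)]
                  for r in range(rows)])
    return np.unravel_index(np.argmin(d), d.shape)
```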

  10. SOM Parameters • The learning rate and the neighbourhood function determine how much the weights of each node are adjusted
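The standard SOM weight update combines exactly these two parameters: every node moves toward the input, with the learning rate setting the overall step size and the neighbourhood function scaling it down with lattice distance from the winner. A minimal sketch, assuming a Gaussian neighbourhood (one common choice; the slides do not name a specific function):

```python
import numpy as np

def update_weights(weights, x, winner, lr, radius):
    """Move every node's weights toward input x. The learning rate lr sets
    the overall step; a Gaussian neighbourhood scales it down with lattice
    distance from the winning node."""
    rows, cols, _ = weights.shape
    for r in range(rows):
        for c in range(cols):
            d2 = (r - winner[0]) ** 2 + (c - winner[1]) ** 2
            h = np.exp(-d2 / (2.0 * radius ** 2))   # degree of neighbourhood
            weights[r, c] += lr * h * (x - weights[r, c])
```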

  11. Neighbourhood function [Plots: the degree of neighbourhood falls off with distance from the winner, and the fall-off steepens as time progresses]
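The behaviour in those plots can be captured in a single function: the degree of neighbourhood decreases with distance from the winner, and the effective radius shrinks as training time grows. A minimal sketch; sigma0 and tau are hypothetical illustration values, not from the slides:

```python
import math

def neighbourhood(dist, t, sigma0=3.0, tau=20.0):
    """Degree of neighbourhood: falls off with distance from the winner,
    and the effective radius sigma shrinks as training time t grows.
    sigma0 (initial radius) and tau (decay constant) are assumed values."""
    sigma = sigma0 * math.exp(-t / tau)
    return math.exp(-dist ** 2 / (2.0 * sigma ** 2))
```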

  12. Data For Tutorial Work • Data collected from a UHT plant at Leatherhead • Consists of 300 cases: 150 are used for training and 150 for testing • Data collected with the plant running in its normal state, during cleaning of the exchangers, and with a fault present

  13. Tutorial 2 (UHT Plant Data)

  14. My settings • First run: 70 epochs, learning rate decayed from 0.6 to 0.1, neighbourhood kept at 1 • Second run: 50 epochs, learning rate constant at 0.1, neighbourhood shrunk from 1 to 0
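The two-run schedule above can be sketched as a single learning-rate function of epoch. This is a hypothetical reconstruction assuming a linear decay in the first run (the slide gives only the endpoints, not the decay shape):

```python
def learning_rate(epoch):
    """Schedule mirroring the slide's settings: linear decay from 0.6 to 0.1
    across the first 70 epochs (run 1), then constant 0.1 for the next
    50 epochs (run 2). The linear shape of the decay is an assumption."""
    if epoch < 70:
        return 0.6 - (0.6 - 0.1) * epoch / 69.0
    return 0.1
```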

  15. Trajan Classification

  16. SOM – New Data • A trained SOM can be used to classify new input data • The input data is assigned to the node with the ‘best’ or ‘closest’ weights • Previous knowledge of other data samples assigned to the same node enables inferences to be made about the ‘new’ input sample
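Classifying new data then reduces to finding the best-matching node and reading off the class label that earlier training samples gave it. A minimal sketch; the `node_labels` calibration array is my assumption about how "previous knowledge" would be stored:

```python
import numpy as np

def classify(weights, node_labels, x):
    """Assign a new sample x to the class label of its best-matching node.
    node_labels is a (rows, cols) array of labels attached to each node
    from previously seen training samples (hypothetical calibration step)."""
    rows, cols, _ = weights.shape
    flat = weights.reshape(rows * cols, -1)
    bmu = np.argmin(np.sum((flat - x) ** 2, axis=1))  # best-matching unit
    return node_labels[np.unravel_index(bmu, (rows, cols))]
```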
