
-Artificial Neural Network- Chapter 9 Self-Organizing Map (SOM)





Presentation Transcript


  1. -Artificial Neural Network- Chapter 9 Self-Organizing Map (SOM). Prof. 李麗華, Department of Information Management, Chaoyang University of Technology (朝陽科技大學 資訊管理系)

  2. Introduction • Proposed by Kohonen in the early 1980s. • SOM is an unsupervised, two-layer network that can organize a topological map from a random starting point. SOM is also called Kohonen's self-organizing feature map. The resulting map shows the natural relationships among the patterns that are given to the network. • It is well suited to clustering analysis.

  3. Network Structure [Figure: input nodes X1, X2 fully connected through weights Wijk to a 2-dim grid of output nodes Yjk] • One input layer. • One competitive layer, which is usually a 2-dim grid. • Input layer: f(x) = x. • Output layer: the competitive layer, with a topological map relationship among the nodes. • Weights: randomly assigned.
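As a concrete reference, here is a minimal NumPy sketch of this structure; the 3×3 grid and 2-dim input follow the example in slides 9-10, and the names (W, coords, grid) are mine:

```python
import numpy as np

n_inputs = 2            # input layer: X1, X2
grid = (3, 3)           # competitive layer: a 2-dim topological grid

rng = np.random.default_rng(0)
# one n_inputs-dimensional weight vector Wijk per output node (j, k),
# randomly assigned as the slide specifies
W = rng.uniform(-1.0, 1.0, size=grid + (n_inputs,))

# coordinate value N(x, y) of every output node, used later to measure
# the neighborhood distance r_jk to the winning node
coords = np.indices(grid).transpose(1, 2, 0)   # coords[j, k] == (j, k)
```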

  4. Concept of Neighborhood • Center: the winning node C is the center. • Neighborhood factor (鄰近係數): RFj = f(rj, R) = e^(-rj/R), where rj is the distance from output node Nj to C and R is the radius of the neighborhood. • e^(-rj/R) → 1 when rj = 0 • e^(-rj/R) ≈ 0.368 when rj = R • e^(-rj/R) → 0 when rj = ∞ • The longer the distance, the smaller the neighborhood factor. • R factor adjustment: Rn = R-rate × Rn-1, with R-rate < 1.0.
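These formulas translate directly into code; a small sketch (the function name rf and the decay rate 0.9 are illustrative choices, not from the slides):

```python
import numpy as np

def rf(r, R):
    """Neighborhood factor RF = e^(-r/R)."""
    return np.exp(-r / R)

R = 2.0
print(rf(0.0, R))       # 1.0    : at the center (r = 0)
print(rf(R, R))         # ~0.368 : at the radius (r = R)
print(rf(100.0, R))     # ~0     : far away (r -> infinity)

# shrink the radius each round: R_n = R_rate * R_{n-1}, R_rate < 1.0
R_rate = 0.9
R = R_rate * R
```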

  5. Learning 1. Set up the network. 2. Randomly assign the weights W. 3. Set the coordinate value of each output-layer node N(x, y). 4. Input a training vector X. 5. Compute the winning node. 6. Update the weights ΔW with the R factor. 7. Decay the rates: ηn = η-rate × ηn-1, Rn = R-rate × Rn-1. 8. Repeat steps 4-7 until convergence.
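Putting these steps together, a compact training loop might look like the following sketch; the fixed epoch count stands in for a real convergence test, and the decay rates, seed, and function name are my own choices:

```python
import numpy as np

def train_som(X, grid=(3, 3), R=2.0, eta=1.0,
              R_rate=0.9, eta_rate=0.9, epochs=50):
    rng = np.random.default_rng(0)
    W = rng.uniform(-1.0, 1.0, size=grid + (X.shape[1],))    # steps 1-2
    jj, kk = np.indices(grid)                                # step 3: N(x, y)
    for _ in range(epochs):                                  # step 8
        for x in X:                                          # step 4
            net = ((x - W) ** 2).sum(axis=-1)                # step 5: winner =
            j, k = np.unravel_index(net.argmin(), net.shape) # closest node
            r = np.sqrt((jj - j) ** 2 + (kk - k) ** 2)       # step 6: dW scaled
            W += eta * (x - W) * np.exp(-r / R)[..., None]   # by RF = e^(-r/R)
        eta *= eta_rate                                      # step 7: decay
        R *= R_rate
    return W

# e.g. W = train_som(np.array([[-0.9, -0.8], [0.9, 0.8]]))
```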

  6. Reuse the network 1. Set up the network. 2. Read in the stored weight matrix W. 3. Set the coordinate value of each output-layer node N(x, y). 4. Read an input vector. 5. Compute the winning node. 6. Output the clustering result Y.
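The recall pass needs only the stored weight matrix; a sketch, assuming W was produced by a training run like the one above:

```python
import numpy as np

def recall(W, x):
    """Return the clustering result Y as the grid coordinate
    (j*, k*) of the winning node for input vector x."""
    net = ((np.asarray(x) - W) ** 2).sum(axis=-1)
    return np.unravel_index(net.argmin(), net.shape)

# e.g., with W saved from a training run:
# cluster = recall(W, [-0.9, -0.8])
```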

  7. Computation Process 1. Set up the network: X1~Xn (input vector), Njk (output nodes). 2. Compute the winning node (j*, k*): the node whose weight vector is closest to the input, i.e., the one minimizing netjk = Σi (Xi - Wijk)². 3. Yjk = 1 if j = j* and k = k*, and 0 otherwise.

  8. Computation Process (cont.) 4. Update the weights: ΔWijk = η(Xi - Wijk)·RFjk, with RFjk = e^(-rjk/R), then Wijk = Wijk + ΔWijk. 5. Repeat steps 1-4 for all input vectors. 6. Decay the rates: ηn = η-rate·ηn-1, Rn = R-rate·Rn-1. 7. Repeat until convergence.
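Steps 2-4 can also be written as a standalone single-input update, including the indicator output Yjk from slide 7; a sketch with my own naming:

```python
import numpy as np

def som_step(W, x, eta, R):
    """One pass of steps 2-4 for a single input vector x."""
    net = ((x - W) ** 2).sum(axis=-1)                  # step 2: net_jk
    j_star, k_star = np.unravel_index(net.argmin(), net.shape)
    Y = np.zeros(net.shape)                            # step 3: Y_jk = 1 only
    Y[j_star, k_star] = 1.0                            # at the winning node
    jj, kk = np.indices(net.shape)
    r = np.sqrt((jj - j_star) ** 2 + (kk - k_star) ** 2)
    W = W + eta * (x - W) * np.exp(-r / R)[..., None]  # step 4: W += dW
    return W, Y
```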

  9. Example [Figure: ten sample points, numbered 1-10, plotted in the 2-dim input space [-1, 1] × [-1, 1]] • Consider a 2-dim clustering problem. • The vector space includes 5 different clusters, and each cluster has 2 sample points.

  10. Solution [Figure: a 3×3 output grid, nodes Wi00 through Wi22, over the input square [-1, 1] × [-1, 1]] Set up a 2×9 network (2 input nodes, 9 output nodes on a 3×3 grid). Randomly assign the weights. Let R = 2.0 and η = 1.0. Apply the first pattern [-0.9, -0.8]: • net00 = (-0.9+0.2)² + (-0.8+0.8)² = 0.49 • net01 = (-0.9-0.2)² + (-0.8+0.4)² = 1.37 • net02 = (-0.9+0.3)² + (-0.8-0.6)² = 2.32 • net10 = (-0.9+0.4)² + (-0.8-0.6)² = 2.21 • net11 = (-0.9+0.3)² + (-0.8-0.2)² = 1.36 • net12 = (-0.9+0.6)² + (-0.8+0.2)² = 0.45 • net20 = (-0.9+0.7)² + (-0.8-0.2)² = 1.04 • net21 = (-0.9-0.8)² + (-0.8+0.6)² = 2.93 • net22 = (-0.9+0.8)² + (-0.8+0.6)² = 0.05 【MIN】→ winning node
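These nine values are easy to verify: the initial weights below are read back from the slide's own expanded terms (e.g., net00 = (-0.9+0.2)² + (-0.8+0.8)² implies node (0,0) has weights (-0.2, -0.8)), so the snippet reproduces the table and the winner:

```python
import numpy as np

# initial weights W[j, k] = (W1jk, W2jk), read off the slide's expansions
W = np.array([[[-0.2, -0.8], [ 0.2, -0.4], [-0.3,  0.6]],
              [[-0.4,  0.6], [-0.3,  0.2], [-0.6, -0.2]],
              [[-0.7,  0.2], [ 0.8, -0.6], [-0.8, -0.6]]])

x = np.array([-0.9, -0.8])            # first training pattern
net = ((x - W) ** 2).sum(axis=-1)     # squared distance to every node
print(net.round(2))                   # 0.49 1.37 2.32 / 2.21 1.36 0.45 / ...
print(np.unravel_index(net.argmin(), net.shape))   # (2, 2): net22 = 0.05 wins
```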

  11. Solution (cont.) [Figure: the 3×3 weight grid, with the W00 weight shifted by -0.17 toward the input] Update the weights around the winning node j* = 2, k* = 2: • r22 = 0, so RF22 = e^0 = 1 (the winner receives the full update). • r00 = distance from node (0,0) to (2,2) = 2√2 ≈ 2.83, so RF00 = e^(-2.83/2.0) ≈ 0.243. • r01 = √5 ≈ 2.24, so RF01 = e^(-2.24/2.0) ≈ 0.327, and similarly for the remaining nodes. • ΔW00 = η·(X1 - W00)·RF00 = 1.0 × (-0.9+0.2) × 0.243 ≈ -0.17.
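The neighborhood factors and the sample weight change check out numerically, using the Euclidean grid distance (which matches the slide's RF00 = e^(-2√2/2) ≈ 0.243):

```python
import numpy as np

R, eta = 2.0, 1.0
# Euclidean grid distance from node (0,0) to the winner (2,2)
r00 = np.sqrt((0 - 2) ** 2 + (0 - 2) ** 2)      # 2*sqrt(2) ~= 2.83
RF00 = np.exp(-r00 / R)                          # ~0.243
RF22 = np.exp(-0.0 / R)                          # winner: r22 = 0 -> RF22 = 1

# update of the X1 component of node (0,0): dW = eta * (X1 - W) * RF
dW00 = eta * (-0.9 - (-0.2)) * RF00              # 1.0 * (-0.7) * 0.243
print(round(RF00, 3), round(float(dW00), 2))     # 0.243 -0.17
```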
