
AMCS/CS229: Machine Learning

Clustering 2. Xiangliang Zhang, King Abdullah University of Science and Technology. Topics: partitioning methods and the EM algorithm, hierarchical methods, density-based methods, clustering quality evaluation, and how to decide the number of clusters.


Presentation Transcript


  1. Clustering 2
  AMCS/CS229: Machine Learning
  Xiangliang Zhang, King Abdullah University of Science and Technology

  2. Cluster Analysis
  • Partitioning Methods + EM algorithm
  • Hierarchical Methods
  • Density-Based Methods
  • Clustering quality evaluation
  • How to decide the number of clusters?
  • Summary

  3. The Quality of Clustering
  • For supervised classification we have a variety of measures to evaluate how good our model is: accuracy, precision, recall.
  • For cluster analysis, the analogous question is: how do we evaluate the "goodness" of the resulting clusters?
  • But "clusters are in the eye of the beholder"! Then why do we want to evaluate them?
    • To avoid finding patterns in noise
    • To compare clustering algorithms
    • To compare two sets of clusters
    • To compare two clusters

  4. Measures of Cluster Validity
  Numerical measures that are applied to judge various aspects of cluster validity are classified into two types:
  • External Index: used to measure the extent to which cluster labels match externally supplied class labels. Examples: purity, Normalized Mutual Information (NMI).
  • Internal Index: used to measure the goodness of a clustering structure without respect to external information. Examples: Sum of Squared Error (SSE), cophenetic correlation coefficient, silhouette.
  Reference: http://nlp.stanford.edu/IR-book/html/htmledition/evaluation-of-clustering-1.html

  5. Cluster Validity: External Index
  • The class labels are externally supplied (q classes).
  • Purity of each cluster Cr of size nr: the fraction of points in Cr that belong to its dominant class.
  • Purity of the entire clustering: the size-weighted average of the per-cluster purities (see the formulas below).
  • Larger purity values indicate better clustering solutions.
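
The purity formulas appeared as images on the original slide; a standard reconstruction, with the notation n_r^i (assumed here) for the number of points of class i inside cluster C_r and n the total number of points:

```latex
% Purity of a single cluster C_r of size n_r
% (n_r^i = number of points of class i assigned to C_r -- notation assumed)
\[
P(C_r) = \frac{1}{n_r} \max_i \, n_r^i
\]
% Purity of the entire clustering of n points into k clusters
\[
\mathrm{Purity} = \sum_{r=1}^{k} \frac{n_r}{n}\, P(C_r)
                = \frac{1}{n} \sum_{r=1}^{k} \max_i \, n_r^i
\]
```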

  6. Cluster Validity: External Index
  • Purity: worked example (shown as a figure on the original slide, not reproduced in this transcript; the code sketch below performs the same computation).
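
A minimal Python sketch of the purity computation (the labels below are made-up toy values, not the slide's example):

```python
import numpy as np

def purity(class_labels, cluster_labels):
    """Purity of a clustering against externally supplied class labels:
    for each cluster, count its majority class, sum these counts, and
    divide by the total number of points."""
    class_labels = np.asarray(class_labels)
    cluster_labels = np.asarray(cluster_labels)
    majority_total = 0
    for c in np.unique(cluster_labels):
        classes_in_c = class_labels[cluster_labels == c]
        majority_total += np.bincount(classes_in_c).max()  # size of the dominant class
    return majority_total / len(class_labels)

# toy class labels vs. cluster assignments (hypothetical)
print(purity([0, 0, 1, 1, 2, 2], [0, 0, 0, 1, 1, 1]))  # -> 0.666...
```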

  7. Cluster Validity: External Index
  • The class labels are externally supplied (q classes).
  • NMI (Normalized Mutual Information), where I is the mutual information and H is the entropy (see the definition below).
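
The definition itself appeared as an image; the standard form used in the Stanford IR book cited on slide 4, writing the clustering as Ω = {ω_1, ..., ω_k} and the classes as C = {c_1, ..., c_q}, is:

```latex
\[
\mathrm{NMI}(\Omega, C) = \frac{I(\Omega; C)}{\bigl[H(\Omega) + H(C)\bigr]/2}
\]
% with mutual information and entropy
\[
I(\Omega; C) = \sum_{k}\sum_{j} P(\omega_k \cap c_j)
   \log \frac{P(\omega_k \cap c_j)}{P(\omega_k)\,P(c_j)},
\qquad
H(\Omega) = -\sum_{k} P(\omega_k) \log P(\omega_k)
\]
```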

  8. Cluster Validity: External Index
  • NMI (Normalized Mutual Information): larger NMI values indicate better clustering solutions.
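
If scikit-learn is available, NMI can be computed directly; a minimal sketch with made-up labels (in recent scikit-learn versions the default normalization is the arithmetic mean of the two entropies, matching the definition above):

```python
from sklearn.metrics import normalized_mutual_info_score

# toy class labels vs. cluster assignments (hypothetical values)
classes  = [0, 0, 1, 1, 2, 2]
clusters = [0, 0, 0, 1, 1, 1]

# normalizes I(classes; clusters) by [H(classes) + H(clusters)] / 2
print(normalized_mutual_info_score(classes, clusters))
```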

  9. Internal Measures: SSE
  • Internal Index: used to measure the goodness of a clustering structure without respect to external information.
  • SSE is good for comparing two clustering results (e.g., via the average SSE).
  • SSE curves w.r.t. various K can also be used to estimate the number of clusters.
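
The SSE formula was shown as an image; the usual definition, with m_r the centroid (mean) of cluster C_r, is:

```latex
\[
\mathrm{SSE} = \sum_{r=1}^{K} \sum_{x \in C_r} \lVert x - m_r \rVert^2
\]
```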

  10. Internal Measures: Cophenetic Correlation Coefficient
  • Cophenetic correlation coefficient: a measure of how faithfully a dendrogram preserves the pairwise distances between the original data points.
  • Can be used to compare two hierarchical clusterings of the data.
  • Compute the correlation coefficient between the original pairwise distances (Dist) and the cophenetic distances (CP).
  • (Figure on the slide: a small example dendrogram over points A-F with merge heights 0.5, 0.71, 1.00, 1.41, 2.50.)
  • Matlab function: cophenet
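
A small Python sketch of the same computation using SciPy rather than Matlab (random toy data is assumed in place of the slide's example):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
X = rng.random((20, 2))                 # toy 2-D data; use your own data matrix

Z = linkage(X, method='single')         # hierarchical clustering (the dendrogram)
c, coph_dists = cophenet(Z, pdist(X))   # correlation between original pairwise
                                        # distances and cophenetic distances
print(c)   # closer to 1 => the dendrogram preserves the distances more faithfully
```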

  11. Cluster Analysis
  • Partitioning Methods + EM algorithm
  • Hierarchical Methods
  • Density-Based Methods
  • Clustering quality evaluation
  • How to decide the number of clusters?
  • Summary

  12. Internal Measures: Cohesion and Separation
  • Cluster cohesion measures how closely related the objects in a cluster are, e.g., the SSE or the sum of the weights of all links within a cluster.
  • Cluster separation measures how distinct or well-separated a cluster is from other clusters, e.g., the sum of the weights of links between nodes in the cluster and nodes outside the cluster.
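
In the prototype-based (centroid) view, these two notions are commonly written as follows; this formulation is an assumption on top of the slide's verbal definitions, with m_i the centroid of cluster C_i and m the overall mean of the data:

```latex
% cohesion: within-cluster sum of squares (this is the SSE)
\[
\mathrm{WSS} = \sum_{i} \sum_{x \in C_i} (x - m_i)^2
\]
% separation: between-cluster sum of squares
% (|C_i| = number of points in cluster C_i)
\[
\mathrm{BSS} = \sum_{i} |C_i| \, (m - m_i)^2
\]
```

For squared Euclidean distance, WSS + BSS equals the total sum of squares of the data, so tighter cohesion automatically means larger separation.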

  13. Internal Measures: Silhouette Coefficient
  • The silhouette coefficient combines ideas of both cohesion and separation.
  • For an individual point i:
    • a = average distance of i to the points in its own cluster
    • b = minimum, over the other clusters, of the average distance of i to the points in that cluster
  • The silhouette coefficient of the point is then given by the formula below.
  • Typically between 0 and 1; the closer to 1, the better.
  • Can also calculate the average silhouette width for a cluster or for a whole clustering.
  • Matlab function: silhouette
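
The formula shown on the slide is the standard definition:

```latex
\[
s(i) = \frac{b(i) - a(i)}{\max\{\, a(i),\, b(i) \,\}}
\]
```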

  14. Determine Number of Clusters by Silhouette Coefficient
  • Compare different clusterings by their average silhouette values:
    • K=3: mean(silh) = 0.526
    • K=4: mean(silh) = 0.640
    • K=5: mean(silh) = 0.527
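
The mean(silh) values above come from a Matlab example whose data set is not part of this transcript; a rough scikit-learn analogue on synthetic data (the blob data and parameter values here are assumptions) would be:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# synthetic data standing in for the slide's (unavailable) data set
X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

for k in (3, 4, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, silhouette_score(X, labels))   # pick the K with the largest mean silhouette
```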

  15. Determine the Number of Clusters
  • Select the number K of clusters as the one maximizing the average silhouette value of all points.
  • Optimize an objective criterion, e.g., the gap statistic of the decrease of SSE w.r.t. K (a simple SSE-vs-K sketch is given below).
  • Model-based methods: optimize a global criterion (e.g., the maximum likelihood of the data).
  • Use clustering methods that do not need K to be set, e.g., DBSCAN.
  • Prior knowledge ...
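
As a rough illustration of inspecting the decrease of SSE with K (the "elbow" heuristic, which the gap statistic refines by comparing against a null reference distribution), here is a scikit-learn sketch on assumed synthetic data; it is not an implementation of the gap statistic itself:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# synthetic data; in practice use your own data matrix
X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

# SSE (within-cluster sum of squares, exposed as KMeans.inertia_) vs. K;
# look for the "elbow" where the decrease flattens out
for k in range(1, 10):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k, km.inertia_)
```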

  16. Cluster Analysis
  • Partitioning Methods + EM algorithm
  • Hierarchical Methods
  • Density-Based Methods
  • Clustering quality evaluation
  • How to decide the number of clusters?
  • Summary

  17. Clustering vs. Classification

  18. Problems and Challenges
  Considerable progress has been made in scalable clustering methods:
  • Partitioning: k-means, k-medoids, CLARANS
  • Hierarchical: BIRCH, ROCK, CHAMELEON
  • Density-based: DBSCAN, OPTICS, DenClue
  • Grid-based: STING, WaveCluster, CLIQUE
  • Model-based: EM, SOM
  • Spectral clustering
  • Affinity Propagation
  • Frequent pattern-based: bi-clustering, pCluster
  Current clustering techniques do not address all the requirements adequately; this is still an active area of research.

  19. Cluster Analysis: Open Issues in Clustering
  • Clustering quality evaluation
  • How to decide the number of clusters?

  20. What you should know
  • What is clustering?
  • How does k-means work?
  • What is the difference between k-means and k-medoids?
  • What is the EM algorithm? How does it work?
  • What is the relationship between k-means and EM?
  • How to define inter-cluster similarity in hierarchical clustering? What options do you have?
  • How does DBSCAN work?

  21. What you should know
  • What are the advantages and disadvantages of DBSCAN?
  • How to evaluate the clustering results?
  • How do you usually decide the number of clusters?
  • What are the main differences between clustering and classification?
