
Segmentation



Presentation Transcript


  1. Segmentation Course web page: vision.cis.udel.edu/~cv May 5, 2003  Lecture 30

  2. Announcements • Read Forsyth & Ponce Chapters 14.5 and 15.1 on graph-theoretic clustering and the Hough transform, respectively, for Wednesday • HW 5 due Friday • Class is at 4 pm on Friday due to Honors Day activities

  3. Outline • Clustering basics • Methods • k-means clustering • Graph-theoretic clustering

  4. Basic Approaches to Clustering • Unknown number of clusters • Agglomerative clustering • Start with as many clusters as tokens and selectively merge • Divisive clustering • Start with one cluster for all tokens and selectively split • Known number of clusters • Selectively change cluster memberships of tokens • Merging/splitting/rearranging stops when threshold on token similarity is reached • Within cluster: As similar as possible • Between clusters: As dissimilar as possible
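As a concrete illustration of the agglomerative case (not from the original slides), here is a minimal sketch using SciPy's hierarchical clustering: every token starts as its own cluster, the closest pair is merged repeatedly, and merging stops once inter-cluster distance crosses a threshold. The data and the threshold value are arbitrary choices for demonstration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy tokens: two loose groups in a 2-D feature space.
rng = np.random.default_rng(0)
tokens = np.vstack([rng.normal(0.0, 0.5, (10, 2)),
                    rng.normal(4.0, 0.5, (10, 2))])

# Agglomerative: start with one cluster per token and repeatedly
# merge the closest pair ('single' linkage = nearest points).
Z = linkage(tokens, method='single')

# Stop merging once inter-cluster distance exceeds a threshold,
# i.e., the similarity-based stopping test described on the slide.
labels = fcluster(Z, t=1.0, criterion='distance')
print(labels)  # two clusters for this well-separated data
```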

  5. Feature Space • Every token is identified by a set of salient visual characteristics called features (akin to gestalt grouping factors). For example: • Position • Color • Texture • Optical flow vector • Size, orientation (if token is larger than a pixel) • The choice of features and how they are quantified implies a feature space in which each token is represented by a point • Token similarity is thus measured by distance between points (aka “feature vectors”) in feature space
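A small sketch of one such feature space, assuming per-pixel tokens: each pixel becomes a 5-D point combining position and color, so similarity reduces to distance between these vectors. The helper name `pixel_features` is illustrative, not from the lecture.

```python
import numpy as np

def pixel_features(image):
    """Stack (row, col, R, G, B) into one feature vector per pixel.

    image: an H x W x 3 float array with channels in [0, 1].
    Each pixel becomes a point in a 5-D feature space, so token
    similarity is just distance between these vectors.
    """
    h, w, _ = image.shape
    rows, cols = np.mgrid[0:h, 0:w]
    return np.column_stack([rows.ravel(), cols.ravel(),
                            image.reshape(-1, 3)])

# A tiny random "image" yields h*w five-dimensional feature vectors.
feats = pixel_features(np.random.rand(4, 6, 3))
print(feats.shape)  # (24, 5)
```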

  6. k-means Clustering • Initialization: Given k categories and N points (in feature space), pick k points randomly; these are the initial cluster centers (means) μ1, …, μk. Repeat the following: • Assign all N points to clusters by nearest μi (make sure no cluster is empty) • Recompute the mean μi of each cluster Ci from its member points • If no mean has changed by more than some Δ, stop • Effectively carries out gradient descent on the within-cluster squared error Φ(μ1, …, μk) = Σi Σx∈Ci ‖x − μi‖²
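The loop on the slide maps directly to code. Below is a NumPy sketch of that procedure; `kmeans` and its `delta` tolerance are illustrative names, and re-seeding empty clusters from a random point is one common fix among several.

```python
import numpy as np

def kmeans(points, k, delta=1e-4, rng=None):
    """k-means as on the slide: random initial means, nearest-mean
    assignment, mean recomputation, stop when no mean moves > delta."""
    rng = np.random.default_rng(rng)
    means = points[rng.choice(len(points), size=k, replace=False)]
    while True:
        # Assign every point to its nearest mean (Euclidean distance).
        dists = np.linalg.norm(points[:, None, :] - means[None], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each mean; re-seed empty clusters from a random point.
        new_means = np.array([
            points[labels == i].mean(axis=0) if np.any(labels == i)
            else points[rng.integers(len(points))]
            for i in range(k)])
        # Stop when no mean has moved by more than delta.
        if np.linalg.norm(new_means - means, axis=1).max() <= delta:
            return labels, new_means
        means = new_means

# Example: cluster 200 random 2-D points into 3 groups.
labels, means = kmeans(np.random.rand(200, 2), k=3, rng=0)
```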

  7. Example: 3-means Clustering (figure from Duda et al.; convergence in 3 steps)

  8. Example: k-means Clustering (figure from Forsyth & Ponce; 4 of 11 clusters, using color alone)

  9. Clustering vs. Connected Components • Connected components: • Agglomerative • Feature space is position (adjacency) • Tokens are pixels that have passed a similarity test with respect to an absolute standard (e.g., brightness, redness, motion magnitude) • k-means: • Known number of clusters (but can be estimated adaptively) • Feature space is general • Tokens are general; the similarity test guiding cluster membership is relative, i.e., only between the points themselves (connected components example courtesy of HIPR)
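To make the contrast concrete, here is a small sketch of connected components under an absolute similarity standard, using SciPy's `ndimage.label`; the 0.5 brightness threshold is an arbitrary illustration, not a value from the lecture.

```python
import numpy as np
from scipy import ndimage

# Absolute-standard similarity test: a fixed brightness threshold.
image = np.random.rand(8, 8)
mask = image > 0.5

# Connected components group adjacent passing pixels; the "feature
# space" is position only (default 4-connectivity in 2-D).
labels, n = ndimage.label(mask)
print(n, "components")
```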

  10. Example: k-means Clustering (figure from Forsyth & Ponce; 4 of 20 clusters, using color and position)

  11. k-means Clustering: Axis Scaling • Features of different types may have different scales • E.g., pixel position on a 640 x 480 image vs. RGB color in the range [0, 1] for each channel • Problem: Features with bigger scales dominate the Euclidean metric • Solutions: Manually weight features or use the Mahalanobis distance in the objective function • Mahalanobis requires enough points in each cluster to estimate its covariance matrix reliably
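One cheap remedy, sketched below under the slide's 640 x 480 vs. [0, 1] example, is per-axis standardization to zero mean and unit variance; `standardize` is an illustrative name, and a full Mahalanobis distance would additionally account for correlations via each cluster's covariance.

```python
import numpy as np

def standardize(feats):
    """Rescale each feature axis to zero mean and unit variance so
    no axis dominates the Euclidean metric (a cheap stand-in for
    the full Mahalanobis distance)."""
    mu = feats.mean(axis=0)
    sigma = feats.std(axis=0)
    return (feats - mu) / np.where(sigma > 0, sigma, 1.0)

# Pixel positions on a 640 x 480 image next to RGB values in [0, 1]:
rng = np.random.default_rng(0)
feats = np.column_stack([rng.random(100) * 640,
                         rng.random(100) * 480,
                         rng.random((100, 3))])
print(standardize(feats).std(axis=0))  # every axis now has std ~1.0
```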

  12. Example: k-means Clustering for Tracking (figure from B. Heisele et al.)
