
776 Computer Vision

Jan-Michael Frahm, Spring 2012


Presentation Transcript


  1. 776 Computer Vision Jan-Michael Frahm Spring 2012

  2. Image alignment Image from http://graphics.cs.cmu.edu/courses/15-463/2010_fall/

  3. A look into the past http://blog.flickr.net/en/2010/01/27/a-look-into-the-past/

  4. A look into the past http://komen-dant.livejournal.com/345684.html Leningrad during the blockade

  5. Bing streetside images http://www.bing.com/community/blogs/maps/archive/2010/01/12/new-bing-maps-application-streetside-photos.aspx

  6. Image alignment: Applications • Panorama stitching • Recognition of object instances

  7. Image alignment: Challenges • Small degree of overlap • Intensity changes • Occlusion, clutter

  8. Image alignment • Two families of approaches: • Direct (pixel-based) alignment • Search for alignment where most pixels agree • Feature-based alignment • Search for alignment where extracted features agree • Can be verified using pixel-based alignment

  9. Alignment as fitting • Previous lectures: fitting a model to features in one image • Find the model M that minimizes ∑i residual(xi, M)

  10. Alignment as fitting • Previous lectures: fitting a model to features in one image: find the model M that minimizes ∑i residual(xi, M) • Alignment: fitting a model to a transformation between pairs of features (matches) xi ↔ xi' in two images: find the transformation T that minimizes ∑i residual(T(xi), xi')

  11. 2D transformation models • Similarity (translation, scale, rotation) • Affine • Projective (homography)

  12. Let’s start with affine transformations • Simple fitting procedure (linear least squares) • Approximates viewpoint changes for roughly planar objects and roughly orthographic cameras • Can be used to initialize fitting for more complex models

  13. Fitting an affine transformation • Assume we know the correspondences; how do we get the transformation?

  14. Fitting an affine transformation • Linear system with six unknowns • Each match (xi, yi) ↔ (xi', yi') gives two linearly independent equations: xi' = a·xi + b·yi + c and yi' = d·xi + e·yi + f • Need at least three matches to solve for the six transformation parameters
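The two equations per match stack into a 2N×6 linear system that ordinary least squares solves directly. A minimal NumPy sketch of this procedure (the function name `fit_affine` is illustrative, not from the slides):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transformation mapping src points to dst points.

    src, dst: (N, 2) arrays of corresponding points, N >= 3.
    Returns the 2x3 parameter matrix [[a, b, c], [d, e, f]].
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    n = src.shape[0]
    # Build the 2N x 6 design matrix: each match contributes two rows,
    # one for x' = a*x + b*y + c and one for y' = d*x + e*y + f.
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src
    M[0::2, 2] = 1.0
    M[1::2, 3:5] = src
    M[1::2, 5] = 1.0
    b = dst.reshape(-1)  # interleaved (x1', y1', x2', y2', ...)
    params, *_ = np.linalg.lstsq(M, b, rcond=None)
    return params.reshape(2, 3)
```

With exactly three non-collinear matches the system is square and the fit is exact; with more matches the least-squares solution averages out noise.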

  15. Fitting a plane projective transformation • Homography: plane projective transformation (transformation taking a quad to another arbitrary quad)

  16. Homography • The transformation between two views of a planar surface • The transformation between images from two cameras that share the same center

  17. Application: Panorama stitching Source: Hartley & Zisserman

  18. Fitting a homography • Recall: homogeneous coordinates • Converting to homogeneous image coordinates: (x, y) → (x, y, 1) • Converting from homogeneous image coordinates: (x, y, w) → (x/w, y/w)

  19. Fitting a homography • Recall: homogeneous coordinates, converting to (x, y) → (x, y, 1) and from (x, y, w) → (x/w, y/w) • Equation for homography: λ·x' = H·x, where x = (x, y, 1)^T and x' = (x', y', 1)^T are homogeneous image coordinates and H is a 3×3 matrix defined up to scale

  20. Fitting a homography • Equation for homography: λ·x' = H·x, or equivalently x' × H·x = 0 • The cross product gives 3 equations, only 2 linearly independent

  21. Direct linear transform • H has 8 degrees of freedom (9 parameters, but scale is arbitrary) • One match gives us two linearly independent equations • Four matches needed for a minimal solution (null space of 8x9 matrix) • More than four: homogeneous least squares
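The direct linear transform described above can be sketched in a few lines of NumPy: stack two equations per match into an 8×9 (or taller) matrix A and take the null-space vector as the right singular vector with the smallest singular value, which also handles the over-determined homogeneous least-squares case. The name `fit_homography` is illustrative:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: fit a 3x3 homography H mapping src to dst.

    src, dst: (N, 2) arrays of correspondences, N >= 4 (no 3 collinear).
    Solves A h = 0 via SVD (homogeneous least squares for N > 4).
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Two linearly independent equations per match from x' x Hx = 0.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.array(rows)
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)   # null-space vector = last row of V^T
    return H / H[2, 2]         # fix the arbitrary scale (assumes H33 != 0)
```

Normalizing by H[2, 2] is one common convention for removing the scale ambiguity; another is to normalize H to unit Frobenius norm.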

  22. Robust feature-based alignment • So far, we’ve assumed that we are given a set of “ground-truth” correspondences between the two images we want to align • What if we don’t know the correspondences?

  23. Robust feature-based alignment • So far, we’ve assumed that we are given a set of “ground-truth” correspondences between the two images we want to align • What if we don’t know the correspondences?

  24. Robust feature-based alignment

  25. Robust feature-based alignment • Extract features

  26. Robust feature-based alignment • Extract features • Compute putative matches

  27. Robust feature-based alignment • Extract features • Compute putative matches • Loop: • Hypothesize transformation T

  28. Robust feature-based alignment • Extract features • Compute putative matches • Loop: • Hypothesize transformation T • Verify transformation (search for other matches consistent with T)

  29. Robust feature-based alignment • Extract features • Compute putative matches • Loop: • Hypothesize transformation T • Verify transformation (search for other matches consistent with T)

  30. Generating putative correspondences

  31. Generating putative correspondences • Need to compare feature descriptors of the local patches surrounding interest points

  32. Feature descriptors • Pipeline: detect regions → normalize regions → compute appearance descriptors • Recall: covariant detectors => invariant descriptors

  33. Feature descriptors • Simplest descriptor: vector of raw intensity values • How to compare two such vectors? • Sum of squared differences (SSD) • Not invariant to intensity change • Normalized correlation • Invariant to affine intensity change
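The two comparison measures named above can be sketched as follows; note how subtracting the mean and dividing by the norm makes the correlation score invariant to an affine intensity change I → s·I + t, while raw SSD is not:

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two descriptor vectors.

    Sensitive to any additive or multiplicative intensity change.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.sum((a - b) ** 2))

def normalized_correlation(a, b):
    """Normalized correlation: subtract means, divide by norms.

    Invariant to affine intensity changes I -> s*I + t (s > 0);
    returns a score in [-1, 1], with 1 for a perfect match.
    """
    a = np.asarray(a, dtype=float) - np.mean(a)
    b = np.asarray(b, dtype=float) - np.mean(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

For example, a patch and a brightened, contrast-stretched copy of it score 1.0 under normalized correlation but a large value under SSD.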

  34. Disadvantage of intensity vectors as descriptors • Small deformations can affect the matching score a lot

  35. Feature descriptors: SIFT • Descriptor computation: • Divide patch into 4x4 sub-patches • Compute histogram of gradient orientations (8 reference angles) inside each sub-patch • Resulting descriptor: 4x4x8 = 128 dimensions David G. Lowe, "Distinctive image features from scale-invariant keypoints," IJCV 60(2), pp. 91-110, 2004.

  36. Feature descriptors: SIFT • Descriptor computation: • Divide patch into 4x4 sub-patches • Compute histogram of gradient orientations (8 reference angles) inside each sub-patch • Resulting descriptor: 4x4x8 = 128 dimensions • Advantage over raw vectors of pixel values • Gradients less sensitive to illumination change • Pooling of gradients over the sub-patches achieves robustness to small shifts, but still preserves some spatial information David G. Lowe, "Distinctive image features from scale-invariant keypoints," IJCV 60(2), pp. 91-110, 2004.
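A toy version of the descriptor computation described above, assuming a 16x16 grayscale patch. Real SIFT additionally applies Gaussian weighting of gradient magnitudes, trilinear interpolation between bins, and normalization with clipping, all omitted here:

```python
import numpy as np

def sift_like_descriptor(patch):
    """Toy SIFT-style descriptor for a 16x16 grayscale patch (a sketch).

    Divides the patch into 4x4 sub-patches and accumulates a
    magnitude-weighted 8-bin histogram of gradient orientations in each,
    giving a 4*4*8 = 128-dimensional, unit-normalized descriptor.
    """
    patch = np.asarray(patch, dtype=float)
    gy, gx = np.gradient(patch)                       # image gradients
    mag = np.hypot(gx, gy)                            # gradient magnitude
    ori = np.mod(np.arctan2(gy, gx), 2 * np.pi)       # orientation in [0, 2pi)
    bins = np.minimum((ori / (2 * np.pi) * 8).astype(int), 7)
    desc = np.zeros((4, 4, 8))
    for i in range(16):
        for j in range(16):
            # Each pixel votes into the histogram of its 4x4 sub-patch.
            desc[i // 4, j // 4, bins[i, j]] += mag[i, j]
    desc = desc.ravel()
    return desc / (np.linalg.norm(desc) + 1e-12)      # unit-normalize
```

The pooling over sub-patches is what gives the robustness to small shifts mentioned above: a pixel can move anywhere within its 4x4 cell without changing the descriptor.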

  37. Feature matching • Generating putative matches: for each patch in one image, find a short list of patches in the other image that could match it based solely on appearance

  38. Feature space outlier rejection • How can we tell which putative matches are more reliable? • Heuristic: compare the distance to the nearest neighbor with the distance to the second nearest neighbor • The ratio of closest to second-closest distance will be high for features that are not distinctive • A threshold of 0.8 provides good separation David G. Lowe, "Distinctive image features from scale-invariant keypoints," IJCV 60(2), pp. 91-110, 2004.
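The ratio heuristic above can be sketched with a brute-force nearest-neighbor search (the name `ratio_test_matches` is illustrative; real systems replace the linear scan with approximate nearest-neighbor structures):

```python
import numpy as np

def ratio_test_matches(desc1, desc2, ratio=0.8):
    """Lowe's ratio test: keep a putative match only if its nearest
    neighbor is sufficiently closer than the second nearest neighbor.

    desc1: (N, D) and desc2: (M, D) descriptor arrays, M >= 2.
    Returns a list of (i, j) index pairs that pass the test.
    """
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)  # distance to every candidate
        order = np.argsort(dists)
        nn, nn2 = order[0], order[1]
        if dists[nn] < ratio * dists[nn2]:         # distinctive enough?
            matches.append((i, int(nn)))
    return matches
```

A feature lying in a repetitive texture has several near-identical candidates, so its ratio is near 1 and it is rejected; a distinctive feature has a clear winner and passes.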

  39. Reading • David G. Lowe, "Distinctive image features from scale-invariant keypoints," IJCV 60(2), pp. 91-110, 2004.

  40. RANSAC • The set of putative matches contains a very high percentage of outliers • RANSAC loop: • Randomly select a seed group of matches • Compute transformation from seed group • Find inliers to this transformation • If the number of inliers is sufficiently large, re-compute least-squares estimate of transformation on all of the inliers • Keep the transformation with the largest number of inliers
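The RANSAC loop above, specialized to the translation example on the following slides: one match fully determines a translation, so the seed group is a single match, and the final least-squares refit over the inliers reduces to their mean offset. A minimal sketch, with illustrative names:

```python
import numpy as np

def ransac_translation(src, dst, n_iters=100, thresh=3.0, rng=None):
    """RANSAC for a pure translation model (a minimal sketch).

    src, dst: (N, 2) arrays of putative matches, possibly mostly outliers.
    Returns (t, inlier_mask) for the translation with the most inliers.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                             # hypothesize from one seed match
        resid = np.linalg.norm(dst - (src + t), axis=1)  # verify against all matches
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers                       # keep the best model so far
    # Re-compute the least-squares estimate on all inliers of the best model.
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers
```

For richer models the structure is identical; only the seed size changes (3 matches for affine, 4 for a homography) and the refit uses the corresponding least-squares solver.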

  41. RANSAC example: Translation Putative matches

  42. RANSAC example: Translation Select one match, count inliers

  43. RANSAC example: Translation Select one match, count inliers

  44. RANSAC example: Translation Select translation with the most inliers

  45. Scalability: Alignment to large databases • What if we need to align a test image with thousands or millions of images in a model database? • Efficient putative match generation • Approximate descriptor similarity search, inverted indices

  46. Scalability: Alignment to large databases • What if we need to align a test image with thousands or millions of images in a model database? • Efficient putative match generation • Fast nearest neighbor search via a vocabulary tree with inverted index D. Nistér and H. Stewénius, Scalable Recognition with a Vocabulary Tree, CVPR 2006

  47. Descriptor space Slide credit: D. Nistér

  48. Hierarchical partitioning of descriptor space (vocabulary tree) Slide credit: D. Nistér

  49. Vocabulary tree/inverted index Slide credit: D. Nistér

  50. Populating the vocabulary tree/inverted index with model images Slide credit: D. Nistér
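The hierarchical partitioning behind the vocabulary tree can be sketched with recursive k-means. This is a toy illustration of the idea, not the Nistér-Stewénius implementation (which clusters millions of SIFT descriptors, typically with branch factors around 10 and depth around 6); all names here are illustrative:

```python
import numpy as np

def build_vocab_tree(descriptors, branch=3, depth=2, rng=None):
    """Hierarchical k-means partitioning of descriptor space (toy sketch).

    Returns a nested dict {'centers': (branch, D) array, 'children': list},
    where each child is a subtree or None at the leaves.
    """
    rng = np.random.default_rng(rng)

    def kmeans(X, k, iters=20):
        # Standard Lloyd iterations with random initial centers.
        centers = X[rng.choice(len(X), size=k, replace=False)].copy()
        for _ in range(iters):
            dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            labels = np.argmin(dists, axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
        return centers, labels

    def build(X, level):
        if level == depth or len(X) < branch:
            return None  # leaf
        centers, labels = kmeans(X, branch)
        children = [build(X[labels == j], level + 1) for j in range(branch)]
        return {'centers': centers, 'children': children}

    return build(np.asarray(descriptors, dtype=float), 0)

def quantize(tree, d):
    """Descend the tree toward the nearest center at each level; the path
    of branch indices identifies the leaf ('visual word') reached."""
    path = []
    node = tree
    while node is not None:
        j = int(np.argmin(np.linalg.norm(node['centers'] - d, axis=1)))
        path.append(j)
        node = node['children'][j]
    return tuple(path)
```

Populating the index then amounts to quantizing every descriptor of every model image and recording, per leaf, which images contributed to it; at query time only images sharing leaves with the test image need to be scored.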
