Feature Extraction and Matching / Feature Tracking
Sudipta N Sinha, Sep 19, 2006

Presentation Transcript


  1. Feature Extraction and Matching / Feature Tracking. Sudipta N Sinha, Sep 19, 2006

  2. Outline • Feature Extraction and Matching (for Larger Motion) • What are features? • Tasks • Detection: finding the feature locations • Representation: computing a compact descriptor • Matching: finding distances in feature space. • Algorithms: • Harris Corner Detector, SIFT. • More complex (wide-baseline correspondence) • Tracking (for Small Motion) • Track geometric primitives (points, lines, patches, objects, …) from frame to frame in video. • High temporal coherence. • Typically required in a real-time system.

  3. Matching comes up in all kinds of problems in computer vision • Panoramas, mosaics • Structure from Motion (F, T, …) • Object recognition • More: detecting objects in clutter, motion segmentation, image-based retrieval, video mining (see papers in References)

  4. The Correspondence Problem and Invariance • Invariance: features need to be detected repeatedly at the same locations, and the computed descriptors must be similar, in spite of the following types of changes observed in two images of the same scene.

  5. Point Features (Interest Points) Goal: • To detect the same point in each image independently Challenges: • Need repeatability in the presence of scale, rotation, affine distortion, and illumination change • Not all pixels are good candidates. • Texture-less regions, edges. • Effect of noise on feature extraction. • Examples: • Harris Corner Detector, SIFT

  6. Harris Corner Detector Idea: • Detect a patch which looks locally unique. • Shifting the patch in any direction will give a large change in intensity. Texture-less region: no change in any direction. Edge: no change along one direction. Corner: large changes in all directions.

  7. A symmetric matrix represents an ellipse. The matrix is symmetric positive semi-definite.
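
The 2x2 matrix M used by the Harris detector is the standard second-moment (auto-correlation) matrix built from the image gradients I_x, I_y over a window W with weights w:

    M = \sum_{(x,y) \in W} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}

Its level sets are ellipses whose axes are the eigenvectors of M, which is what the ellipse picture on the slide illustrates.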

  8. Harris Corner Detector Eigenvalue analysis of the 2x2 matrix M:
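
Both eigenvalues large means a corner, one large and one small means an edge, and both small means a texture-less region. Rather than computing eigenvalues explicitly, the Harris detector uses the response R = det(M) - k (trace M)^2, with k typically around 0.04 to 0.06. A minimal sketch in Python/NumPy (illustrative function name; grayscale input and a Gaussian weighting window assumed):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def harris_response(img, sigma=1.5, k=0.05):
        # Image gradients (axis 0 = rows/y, axis 1 = cols/x).
        Iy, Ix = np.gradient(img.astype(float))
        # Entries of the second-moment matrix M, smoothed over a Gaussian window.
        Sxx = gaussian_filter(Ix * Ix, sigma)
        Syy = gaussian_filter(Iy * Iy, sigma)
        Sxy = gaussian_filter(Ix * Iy, sigma)
        # Harris measure R = det(M) - k * trace(M)^2; R is large at corners.
        det_M = Sxx * Syy - Sxy * Sxy
        trace_M = Sxx + Syy
        return det_M - k * trace_M ** 2

Corners are then the local maxima of R above a threshold (non-maximum suppression).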

  9. Corners: Feature Descriptors and Matching. • Simple Descriptor: convert a patch of n x n pixels centered at that pixel into a vector. • Matching: SAD, SSD, ZMNCC • Invariance: • Translation? Yes • Rotation? No, but the image patch could be re-sampled using the eigenvector pair as the local coordinate frame. • Scale and Affine? No • Brightness Change? Yes, normalize image intensity (ZMNCC) • Each feature point becomes a point in a high-dimensional feature space.
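
A quick sketch of two of the patch-matching scores mentioned above (illustrative helper names; patches are equal-size NumPy arrays):

    import numpy as np

    def ssd(p, q):
        # Sum of squared differences; smaller means more similar.
        return np.sum((p.astype(float) - q.astype(float)) ** 2)

    def zmncc(p, q):
        # Zero-mean normalized cross-correlation in [-1, 1]; subtracting the
        # mean and normalizing gives invariance to gain and offset changes.
        p = p.astype(float) - p.mean()
        q = q.astype(float) - q.mean()
        denom = np.sqrt(np.sum(p * p) * np.sum(q * q))
        return np.sum(p * q) / denom if denom > 0 else 0.0

SAD is the same as SSD with the square replaced by an absolute value.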

  10. Point Features: SIFT First: Scale Invariant Feature Detection, Later: SIFT descriptors (rotational invariance)

  11. The SIFT Algorithm (Lowe IJCV’04) Create Scale Space Stack: • Intensity • Gradient • DoG Images from SIFT Tutorial [Thomas F. El-Maraghi, May 2004]
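
A minimal single-octave sketch of the Gaussian scale space and DoG stack (SIFT proper repeats this over several octaves by downsampling; the function name and parameter values here are illustrative):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def dog_stack(img, num_scales=5, sigma0=1.6, k=2 ** 0.5):
        # Gaussian scale space: blur with geometrically increasing sigma.
        gaussians = [gaussian_filter(img.astype(float), sigma0 * k ** i)
                     for i in range(num_scales)]
        # Difference-of-Gaussians: subtract adjacent levels of the stack.
        dogs = [gaussians[i + 1] - gaussians[i] for i in range(num_scales - 1)]
        return gaussians, dogs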

  12. The SIFT Algorithm • Find local extrema of the DoG in scale space. • Remove low-contrast points and points on edges. Images from SIFT Tutorial [Thomas F. El-Maraghi, May 2004]
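
The edge test in Lowe IJCV’04 compares the principal curvatures of the DoG via its 2x2 Hessian: a point is rejected as edge-like when tr(H)^2 / det(H) >= (r + 1)^2 / r, with r = 10. A sketch, assuming a DoG image indexed [row, col]:

    def is_on_edge(dog, y, x, r=10.0):
        # 2x2 Hessian of the DoG at (y, x) via finite differences.
        dxx = dog[y, x + 1] + dog[y, x - 1] - 2 * dog[y, x]
        dyy = dog[y + 1, x] + dog[y - 1, x] - 2 * dog[y, x]
        dxy = (dog[y + 1, x + 1] - dog[y + 1, x - 1]
               - dog[y - 1, x + 1] + dog[y - 1, x - 1]) / 4.0
        tr, det = dxx + dyy, dxx * dyy - dxy * dxy
        # Edge-like points have one large and one small principal curvature.
        return det <= 0 or tr * tr / det >= (r + 1) ** 2 / r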

  13. The SIFT Algorithm • Descriptor represents Local Patch Appearance. • Oriented Histograms built from Weighted Gradients. Images from Lowe IJCV’04
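
A sketch of the basic building block: a gradient-orientation histogram weighted by gradient magnitude. The full SIFT descriptor rotates the patch to its dominant orientation, applies a Gaussian weight, and concatenates such histograms over a 4x4 grid of cells; the function name here is illustrative.

    import numpy as np

    def orientation_histogram(patch, num_bins=8):
        # Gradients of the patch (axis 0 = rows/y, axis 1 = cols/x).
        gy, gx = np.gradient(patch.astype(float))
        mag = np.hypot(gx, gy)
        ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
        # Quantize orientations into bins and accumulate magnitude-weighted votes.
        bins = np.minimum((ang / (2 * np.pi) * num_bins).astype(int), num_bins - 1)
        return np.bincount(bins.ravel(), weights=mag.ravel(), minlength=num_bins)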

  14. SIFT: Results

  15. Wide Baseline Matching: Elliptical and Parallelogram Features (Tuytelaars, Van Gool et al., IJCV 2004) Anchor point: traditional corners

  16. Wide Baseline Matching: Elliptical and Parallelogram Features (Tuytelaars, Van Gool et al., IJCV 2004) Anchor point: local intensity maxima

  17. Tracking Corners – The KLT Algorithm Main Idea: Assuming brightness constancy, try to find the new positions of some ‘salient’ image points in the second image (where the motion is small) Steps: • Detect salient points to track (in the current frame) • Track those features in the next frame. This could be done by searching (template matching), BUT the KLT algorithm does it analytically, hence it is faster!

  18. KLT equations: Assumption – Brightness Constancy. Find a displacement d such that the error given by the following equation is minimized (over a tracking window W)
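
The error being minimized is the standard SSD tracking error: with I the first image, J the second, w a weighting function, and W the tracking window,

    \epsilon(d) = \iint_W \left[ J(x + d) - I(x) \right]^2 w(x) \, dx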

  20. KLT equations: A symmetric form was later proposed by Tomasi, as follows. To estimate d, differentiate with respect to d.
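
The symmetric dissimilarity (the form treated in Birchfield’s notes, acknowledged below) splits the displacement evenly between the two images:

    \epsilon(d) = \iint_W \left[ J\!\left(x + \tfrac{d}{2}\right) - I\!\left(x - \tfrac{d}{2}\right) \right]^2 w(x) \, dx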

  21. KLT equations: Substituting the Taylor series expansion for J(.) and I(.), setting the derivative to zero at the minimum, and rearranging, we get a linear system of equations for d.
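
One consistent way to write the resulting 2x2 linear system, taking g as the average of the two image gradients (a sketch; some write-ups fold the 1/2 into the definitions of Z and e, which only changes a constant factor):

    Z d = e, \quad Z = \iint_W g(x)\, g(x)^T\, w(x)\, dx, \quad
    e = \iint_W \left[ I(x) - J(x) \right] g(x)\, w(x)\, dx, \quad
    g(x) = \tfrac{1}{2}\left( \nabla I(x) + \nabla J(x) \right)

Note that Z has the same form as the second-moment matrix M from the Harris slides, which is why the "both eigenvalues large" criterion also identifies good features to track.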

  22. KLT equations
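
A minimal NumPy sketch of a single KLT update for one feature, under the assumptions above (uniform weights, integer patch coordinates, window fully inside both images; the helper name is hypothetical, and a real tracker iterates this step, interpolates J at subpixel positions, and checks convergence):

    import numpy as np

    def klt_step(I, J, y, x, half=7):
        # Extract the tracking window centered at (y, x) from both frames.
        win = np.s_[y - half:y + half + 1, x - half:x + half + 1]
        Ip, Jp = I[win].astype(float), J[win].astype(float)
        # g = average of the two image gradients (axis 0 = y, axis 1 = x).
        gy, gx = np.gradient((Ip + Jp) / 2.0)
        gvec = np.stack([gx.ravel(), gy.ravel()])   # shape (2, N)
        Z = gvec @ gvec.T                           # 2x2 structure matrix
        e = gvec @ (Ip - Jp).ravel()                # right-hand side
        dx, dy = np.linalg.solve(Z, e)              # solve Z d = e
        return dx, dy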

  23. Multiscale and Iterative KLT • Build an image pyramid • Coarse-to-fine tracking increases the effective spatial range within which features can be tracked. • View-dependent effects: if the surface patch is small, then large perspective distortions can be approximated by an affine transformation. • Brightness change = gain + offset (2 more parameters). • Affine KLT: invariance to illumination.
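
For illustration, the same multiscale tracking pipeline using OpenCV's pyramidal Lucas-Kanade implementation (not the original KLT code; filenames and parameter values are placeholders):

    import cv2

    # Two consecutive grayscale frames from a video (placeholder filenames).
    prev_gray = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
    next_gray = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

    # Detect salient (Shi-Tomasi) corners in the current frame.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=7)

    # Pyramidal, coarse-to-fine Lucas-Kanade tracking into the next frame;
    # maxLevel sets the number of pyramid levels.
    new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None,
                                                    winSize=(21, 21), maxLevel=3)

    tracked = new_pts[status.ravel() == 1]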

  24. Acknowledgments Slides/figures were taken from: • SIFT MATLAB tutorial [Thomas F. El-Maraghi, May 2004] • Lecture notes by Bill Freeman • Lecture notes on tracking: UWA Computer Science, CITS 4240 • David Lowe’s SIFT papers • Stan Birchfield’s article on the symmetric version of the KLT equations

  25. References and Papers • Stan Birchfield. KLT: An Implementation of the Kanade-Lucas-Tomasi Feature Tracker. • Bruce D. Lucas and Takeo Kanade. An Iterative Image Registration Technique with an Application to Stereo Vision. International Joint Conference on Artificial Intelligence, pages 674–679, 1981. • David Lowe. Distinctive image features from scale-invariant keypoints. Int. Journal of Computer Vision, 60(2):91–110, 2004. • J. Matas, O. Chum, M. Urban, and T. Pajdla. Robust wide baseline stereo from maximally stable extremal regions. In Proc. British Machine Vision Conference, volume 1, pages 384–393, Sep 2002. • K. Mikolajczyk and C. Schmid. Scale and affine invariant interest point detectors. Int. Journal of Computer Vision, 1(60):63–86, 2004. • T. Tuytelaars and L. Van Gool. Matching widely separated views based on affine invariant regions. Int. Journal of Computer Vision, 1(59):61–85, 2004. • K. Mikolajczyk, T. Tuytelaars, C. Schmid, A. Zisserman, J. Matas, F. Schaffalitzky, T. Kadir, and L. Van Gool. A comparison of affine region detectors. Technical report, accepted to IJCV, 2005. • KLT source code: http://www.ces.clemson.edu/~stb/klt/ • SIFT MATLAB code: see link at http://robots.stanford.edu/cs223b04/project9.html
