
Approximate Correspondences in High Dimensions
Kristen Grauman 1,2 and Trevor Darrell 1







1 CSAIL, Massachusetts Institute of Technology
2 Department of Computer Sciences, University of Texas at Austin

Problem

The correspondence between sets of local feature vectors is often a good measure of similarity, but computing the optimal matching is computationally expensive, and the accuracy of existing matching approximations declines linearly with the feature dimension.

Our approach

Approximate the optimal partial matching cost in linear time: use multi-resolution histograms to count the matches that are possible within a discrete set of distances, with no explicit search for matches. A data-dependent pyramid structure allows more gradual distance ranges than uniform bins. For each cell (i, j) (the j-th bin at the i-th pyramid level), the method records the number of matches in the bin, the number of new matches first formed there, and the diameter of the cell.

Results

The VG pyramid's matching scores are consistently highly correlated with the optimal matching, even for high-dimensional features (ETH-80 image data, SIFT features, k = 10, L = 5, results from 10 runs).

Recognition and timing (Caltech-4 data set, Harris- and MSER-detected SIFT features):

Pyramid matching method                       Vocabulary-guided bins   Uniform bins
Time/match (s) (d=128 / d=10)                 6.1e-4 / 6.2e-4          1.5e-3 / 5.7e-4
Mean recognition rate/class (d=128 / d=10)    99.0 / 97.7              64.9 / 96.5

Complexity

• The VG pyramid structure is built and stored once per feature corpus.
• Histograms are stored sparsely, with entries only for nonempty bins.
• Inserting a point set into the histograms adds preprocessing time (set of features → histogram pyramid).
• Match time itself remains linear in the number of features.
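The counting scheme above — histograms at several resolutions, with new matches at each level read off from intersection differences — can be illustrated with a minimal sketch. This is a simplified uniform-bin version for nonnegative integer features, with hypothetical function and parameter names, not the authors' implementation:

```python
import numpy as np

def pyramid_match_uniform(X, Y, L=5):
    """Approximate partial-match similarity between two point sets.

    Uniform-bin sketch of the pyramid match: level-i histograms use
    bins of side 2**i; new matches at level i are counted by the
    difference in histogram intersections across levels and weighted
    by 1/2**i, since coarser bins imply larger inter-feature
    distances.  Assumes 1-D features in [0, 2**L].
    """
    prev_inter = 0.0
    score = 0.0
    for i in range(L + 1):
        side = 2 ** i
        edges = np.arange(0, 2 ** L + side, side)
        hx, _ = np.histogram(X, bins=edges)
        hy, _ = np.histogram(Y, bins=edges)
        inter = np.minimum(hx, hy).sum()          # matches possible at this level
        score += (inter - prev_inter) / (2 ** i)  # count only the new matches
        prev_inter = inter
    return score

# Identical sets match entirely at the finest level (weight 1):
print(pyramid_match_uniform([1, 5, 9], [1, 5, 9]))  # → 3.0
```

Note that no pairwise feature comparisons occur: similarity comes entirely from histogram intersections, which is what makes the match linear in the number of features.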
The Vocabulary-Guided Pyramid Match

Uniformly shaped bins result in decreased matching accuracy for high-dimensional features, so we tune the pyramid partitions to the feature distribution:

• Form a multi-resolution decomposition of the feature space to efficiently count "implicit" matches without directly comparing features.
• Exploit structure in the feature space when placing partitions, in order to fully leverage their grouping power: run hierarchical k-means over a corpus of features and record the diameters of the resulting irregularly shaped cells.

Properties: approximate partial matching, linear-time match, a Mercer kernel, and accuracy maintained for feature dimensions > 100. Explicit correspondence fields are also more accurate and faster to compute.

The Pyramid Match [Grauman and Darrell, ICCV 2005]

The number of new matches at level i is counted by the difference in histogram intersections across levels: the matches first formed in bin (i, j) are those counted there minus the matches already counted in bin (i, j)'s children. The pyramid match cost weights each new match according to bin size, since the diameter of cell (i, j) bounds the distance between the features matched there; the vocabulary-guided (VG) pyramid match cost is computed in time linear in the number of features. Weighting options: weight by the bin diameter, or use an input-specific upper bound, which admits a Mercer kernel. The match yields improved object recognition when used as a kernel in an SVM.

Future work

• Learning weights on pyramid bins
• Beyond geometric vocabularies
• Sub-linear time PM hashing (ongoing)
• Distortion bounds for the VG-PM?
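The vocabulary-guided structure above — hierarchical k-means over a feature corpus, recording each cell's diameter for later use as a distance bound — can be sketched as follows. This is an illustrative implementation with simple Lloyd's iterations and hypothetical names, under the assumption that twice the maximum distance to a cell's centroid is an adequate diameter bound; it is not the authors' code:

```python
import numpy as np

def build_vg_pyramid(corpus, k=3, L=3, seed=0):
    """Build a vocabulary-guided pyramid by hierarchical k-means.

    Returns a tree of nodes {center, diameter, children}: each level
    splits a cell's points into (up to) k clusters, and the recorded
    cell diameters later bound the distance between features matched
    in that cell.
    """
    rng = np.random.default_rng(seed)

    def kmeans(pts, k, iters=20):
        # Plain Lloyd's algorithm with random initial centers.
        centers = pts[rng.choice(len(pts), size=k, replace=False)]
        for _ in range(iters):
            d = np.linalg.norm(pts[:, None] - centers[None], axis=2)
            labels = d.argmin(axis=1)
            for c in range(len(centers)):
                if np.any(labels == c):
                    centers[c] = pts[labels == c].mean(axis=0)
        return labels

    def grow(pts, level):
        center = pts.mean(axis=0)
        # Twice the max distance to the centroid upper-bounds any
        # pairwise distance in the cell (triangle inequality).
        diameter = 2 * np.linalg.norm(pts - center, axis=1).max()
        node = {"center": center, "diameter": diameter, "children": []}
        if level < L and len(pts) > k:
            labels = kmeans(pts, k)
            node["children"] = [grow(pts[labels == c], level + 1)
                                for c in range(k) if np.any(labels == c)]
        return node

    return grow(np.asarray(corpus, dtype=float), 0)
```

Because the partitions follow the corpus distribution, cell diameters shrink where features are dense, which is what gives the VG pyramid its more gradual distance ranges in high dimensions.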
