Real-Time Template Tracking
Stefan Holzer, Computer Aided Medical Procedures (CAMP), Technische Universität München, Germany
Real-Time Template Tracking: Motivation
• Object detection is comparably slow: detect the object once, then track it
• Robotic applications often require many steps, so the less time we spend on object detection/tracking,
  • the more time we can spend on other things, or
  • the faster the whole task finishes
Real-Time Template Tracking: Overview
• Feature-based Tracking
• Intensity-based Tracking
  • Analytic Approaches: LK, IC, ESM, …
  • Learning-based Approaches: JD, ALPs, …
Intensity-based Template Tracking: Goal
Find the parameters $\mathbf{p}$ of a warping function $\mathbf{w}(\mathbf{x}; \mathbf{p})$ such that
$$I(\mathbf{w}(\mathbf{x}; \mathbf{p})) = T(\mathbf{x})$$
for all template points $\mathbf{x}$, where $I$ is the current image and $T$ the template.
Intensity-based Template Tracking: Goal
Reformulate the goal using a prediction $\hat{\mathbf{p}}$ as approximation of $\mathbf{p}$:
• Find the parameter increment $\Delta\mathbf{p}$ such that
$$I(\mathbf{w}(\mathbf{x}; \hat{\mathbf{p}} + \Delta\mathbf{p})) = T(\mathbf{x})$$
• by minimizing
$$\sum_{\mathbf{x}} \left[ I(\mathbf{w}(\mathbf{x}; \hat{\mathbf{p}} + \Delta\mathbf{p})) - T(\mathbf{x}) \right]^2$$
This is a non-linear minimization problem.
Intensity-based Template Tracking: Lucas-Kanade
Uses the Gauss-Newton method for minimization:
• Applies a first-order Taylor series approximation
• Minimizes iteratively

B. D. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision, 1981.
Lucas-Kanade: Approximation by Linearization
First-order Taylor series approximation:
$$I(\mathbf{w}(\mathbf{x}; \hat{\mathbf{p}} + \Delta\mathbf{p})) \approx I(\mathbf{w}(\mathbf{x}; \hat{\mathbf{p}})) + \mathbf{J}\,\Delta\mathbf{p}$$
where the Jacobian matrix is
$$\mathbf{J} = \nabla I \, \frac{\partial \mathbf{w}}{\partial \mathbf{p}}$$
i.e. the Jacobian of the current image (computed from the gradient images) times the Jacobian of the warp.
Lucas-Kanade: Iterative Minimization
The following steps are iteratively applied:
• Minimize the sum of squared differences
$$\sum_{\mathbf{x}} \left[ I(\mathbf{w}(\mathbf{x}; \hat{\mathbf{p}})) + \mathbf{J}\,\Delta\mathbf{p} - T(\mathbf{x}) \right]^2$$
where the parameter increment has a closed-form solution (Gauss-Newton):
$$\Delta\mathbf{p} = (\mathbf{J}^\top \mathbf{J})^{-1} \mathbf{J}^\top \mathbf{e}, \qquad \mathbf{e} = \big[\, T(\mathbf{x}) - I(\mathbf{w}(\mathbf{x}; \hat{\mathbf{p}})) \,\big]_{\mathbf{x}}$$
• Update the parameter approximation: $\hat{\mathbf{p}} \leftarrow \hat{\mathbf{p}} + \Delta\mathbf{p}$
This is repeated until convergence is reached.
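The iteration above can be sketched in a few lines of NumPy for the simplest case, a pure-translation warp $\mathbf{w}(\mathbf{x};\mathbf{p}) = \mathbf{x} + \mathbf{p}$, for which the Jacobian of the warp is the identity. This is an illustrative sketch (all function names and the synthetic demo are my own, not the authors' code):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def lucas_kanade_translation(I, T, p, n_iters=50):
    """Gauss-Newton Lucas-Kanade for a translation warp w(x; p) = x + p."""
    T = T.astype(float)
    h, w = T.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    for _ in range(n_iters):
        # Warp: sample the current image at the translated template grid.
        Iw = map_coordinates(I.astype(float), [ys + p[1], xs + p[0]], order=1)
        # Jacobian of the current (warped) image; warp Jacobian is identity.
        gy, gx = np.gradient(Iw)
        J = np.stack([gx.ravel(), gy.ravel()], axis=1)   # N x 2
        e = (T - Iw).ravel()                             # residual T - I(w(x; p))
        # Closed-form Gauss-Newton increment: dp = (J^T J)^{-1} J^T e.
        dp, *_ = np.linalg.lstsq(J, e, rcond=None)
        p = p + dp                                       # additive update
        if np.linalg.norm(dp) < 1e-3:
            break
    return p

# Synthetic demo: a smooth blob, template cut out with a (3, 2) pixel shift.
yy, xx = np.mgrid[0:120, 0:120]
I = np.exp(-((xx - 60.0) ** 2 + (yy - 55.0) ** 2) / (2 * 15.0 ** 2))
T = I[2:82, 3:83]                      # T(x) = I(x + (3, 2))
p_est = lucas_kanade_translation(I, T, np.zeros(2))
```

For a full homography warp the Jacobian of the warp is no longer the identity and must be chained in as on the previous slide.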
Lucas-Kanade: Illustration
At each iteration:
• Warp the current frame towards the template image
• Compute $\Delta\mathbf{p}$
• Update $\hat{\mathbf{p}}$
[Figure: template image and current frame]
Lucas-Kanade: Improvements
• Inverse Compositional (IC): reduces time per iteration
• Efficient Second-Order Minimization (ESM): improves convergence
• Approach of Jurie & Dhome (JD): reduces time per iteration and improves convergence
• Adaptive Linear Predictors (ALPs): learn and adapt the template online
Inverse Compositional: Overview
Differences to the Lucas-Kanade algorithm
• Reformulation of the goal: minimize
$$\sum_{\mathbf{x}} \left[ T(\mathbf{w}(\mathbf{x}; \Delta\mathbf{p})) - I(\mathbf{w}(\mathbf{x}; \hat{\mathbf{p}})) \right]^2$$
with the Jacobian
$$\mathbf{J} = \nabla T \, \frac{\partial \mathbf{w}}{\partial \mathbf{p}}$$
i.e. the Jacobian of the template image times the Jacobian of the warp.
Inverse Compositional: Overview
Differences to the Lucas-Kanade algorithm
• Since the Jacobian is computed on the template, $\mathbf{J}$ and $(\mathbf{J}^\top\mathbf{J})^{-1}\mathbf{J}^\top$ can be precomputed; only the error vector needs to be computed at each iteration
• The parameter update changes to an inverse composition: $\mathbf{w}(\mathbf{x}; \hat{\mathbf{p}}) \leftarrow \mathbf{w}(\mathbf{x}; \hat{\mathbf{p}}) \circ \mathbf{w}(\mathbf{x}; \Delta\mathbf{p})^{-1}$

S. Baker and I. Matthews. Equivalence and efficiency of image alignment algorithms, 2001.
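The IC savings are easy to see in code: for a translation warp the inverse composition reduces to subtracting the increment, and the whole Gauss-Newton system matrix is built once outside the loop. A minimal sketch (illustrative names, not the authors' implementation):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def ic_translation(I, T, p, n_iters=50):
    """Inverse Compositional alignment for a translation warp w(x; p) = x + p."""
    T = T.astype(float)
    h, w = T.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Precomputed once: gradients of the *template* and the pseudo-inverse
    # of the Jacobian. This is what makes each IC iteration cheap.
    gy, gx = np.gradient(T)
    J = np.stack([gx.ravel(), gy.ravel()], axis=1)          # N x 2
    J_pinv = np.linalg.inv(J.T @ J) @ J.T                   # (J^T J)^{-1} J^T
    for _ in range(n_iters):
        Iw = map_coordinates(I.astype(float), [ys + p[1], xs + p[0]], order=1)
        dp = J_pinv @ (Iw - T).ravel()                      # residual I(w) - T
        p = p - dp          # inverse-compositional update (translation case)
        if np.linalg.norm(dp) < 1e-3:
            break
    return p

# Synthetic demo, same setup as for Lucas-Kanade: blob shifted by (3, 2).
yy, xx = np.mgrid[0:120, 0:120]
I = np.exp(-((xx - 60.0) ** 2 + (yy - 55.0) ** 2) / (2 * 15.0 ** 2))
T = I[2:82, 3:83]
p_est = ic_translation(I, T, np.zeros(2))
```

For richer warps (affine, homography) the update is a true warp composition, but the precomputation argument is unchanged.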
Efficient Second-Order Minimization: Short Overview
• Uses a second-order Taylor approximation of the cost function:
  • fewer iterations needed to converge
  • larger area of convergence
  • avoids local minima close to the global one
• Jacobian needs to be computed at each iteration

S. Benhimane and E. Malis. Real-time image-based tracking of planes using efficient second-order minimization, 2004.
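The core ESM trick is that the second-order step can be obtained without Hessians by averaging the template Jacobian and the current-image Jacobian. The sketch below shows that idea for a translation warp with a simple additive update; the original ESM works on homographies with a compositional update on a Lie-group parameterization, so treat this purely as an illustration of the averaged Jacobian:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def esm_translation(I, T, p, n_iters=50):
    """Simplified ESM-style step: Jacobian = mean of template and
    warped-image gradients (additive update, translation warp only)."""
    T = T.astype(float)
    h, w = T.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    gyT, gxT = np.gradient(T)                      # template gradients (fixed)
    for _ in range(n_iters):
        Iw = map_coordinates(I.astype(float), [ys + p[1], xs + p[0]], order=1)
        gyI, gxI = np.gradient(Iw)                 # current-image gradients
        # ESM Jacobian: average of current-image and template Jacobians.
        J = 0.5 * np.stack([(gxI + gxT).ravel(), (gyI + gyT).ravel()], axis=1)
        dp, *_ = np.linalg.lstsq(J, (T - Iw).ravel(), rcond=None)
        p = p + dp
        if np.linalg.norm(dp) < 1e-3:
            break
    return p

# Synthetic demo: same blob-and-shift setup as before.
yy, xx = np.mgrid[0:120, 0:120]
I = np.exp(-((xx - 60.0) ** 2 + (yy - 55.0) ** 2) / (2 * 15.0 ** 2))
T = I[2:82, 3:83]
p_est = esm_translation(I, T, np.zeros(2))
```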
Jurie & Dhome: Overview
• Motivation:
  • Computing the Jacobian in every iteration is expensive
  • Good convergence properties are desired
• Approach of JD: pre-learn the relation between image differences $\delta\mathbf{i}$ and the parameter update: $\Delta\mathbf{p} = \mathbf{A}\,\delta\mathbf{i}$
  • the relation $\mathbf{A}$ can be seen as a linear predictor
  • $\mathbf{A}$ is fixed for all iterations
  • learning enables jumping over local minima

F. Jurie and M. Dhome. Hyperplane approximation for template matching, 2002.
Jurie & Dhome: Template and Parameter Description
• Template consists of sample points
  • distributed over the template region
  • used to sample the image data
• Deformation is described using the 4 corner points of the template
• Image values are normalized (zero mean, unit standard deviation)
• Error vector $\delta\mathbf{i}$ specifies the differences between the normalized image values
Jurie & Dhome: Learning Phase
• Apply a set of random transformations $\Delta\mathbf{p}_k$ to the initial template
• Compute the corresponding image differences $\delta\mathbf{i}_k$
• Stack together the training data: $\mathbf{Y} = [\Delta\mathbf{p}_1 \cdots \Delta\mathbf{p}_n]$ and $\mathbf{H} = [\delta\mathbf{i}_1 \cdots \delta\mathbf{i}_n]$
• The linear predictor should relate these matrices by $\mathbf{Y} = \mathbf{A}\,\mathbf{H}$
• So, the linear predictor can be learned in the least-squares sense as
$$\mathbf{A} = \mathbf{Y}\mathbf{H}^\top(\mathbf{H}\mathbf{H}^\top)^{-1}$$
Jurie & Dhome: Tracking Phase
• Warp the sample points according to the current parameters
• Use the warped sample points to extract image values and to compute the error vector $\delta\mathbf{i}$
• Compute the update $\Delta\mathbf{p} = \mathbf{A}\,\delta\mathbf{i}$
• Update the current parameters
• Improve tracking accuracy:
  • apply multiple iterations
  • use multiple predictors trained for different amounts of motion
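Both JD phases fit in a short script. The sketch below is a heavily simplified, hypothetical version: translation-only motion instead of 4-corner deformations, and no intensity normalization; `extract` and all data are my own illustrative choices, not the paper's setup.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def extract(img, pts):
    """Sample image values at integer (x, y) sample points."""
    return img[pts[:, 1], pts[:, 0]].astype(float)

# Smooth synthetic image and a 10x10 grid of template sample points.
img = gaussian_filter(rng.random((100, 100)), 3.0)
mx, my = np.meshgrid(np.arange(30, 60, 3), np.arange(30, 60, 3))
pts = np.stack([mx.ravel(), my.ravel()], axis=1)
t0 = extract(img, pts)

# Learning phase: random translations stacked into Y, the corresponding
# image differences stacked column-wise into H.
n_train = 300
Y = rng.integers(-5, 6, size=(2, n_train))
H = np.stack([extract(img, pts + Y[:, k]) - t0 for k in range(n_train)], axis=1)

# Linear predictor A = Y H^T (H H^T)^{-1}; pinv is the same thing when H
# has full row rank, and is numerically safer here.
A = Y @ np.linalg.pinv(H)

# Tracking phase: observe the template under an unknown shift and
# predict that shift with a single matrix-vector product.
dp_true = np.array([3, -2])
dp_pred = A @ (extract(img, pts + dp_true) - t0)
```

The per-frame cost is one image sampling plus one matrix-vector product, which is why JD is so much cheaper per iteration than recomputing a Jacobian.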
Jurie & Dhome: Limitations
• Learning large templates is expensive: not possible online
• Shape of the template cannot be adapted: the template has to be relearned after each modification
• Tracking accuracy is inferior to LK, IC, …: use JD as initialization for one of those
Adaptive Linear Predictors: Motivation
• Distribute the learning of large templates over several frames while tracking
• Adapt the template shape with regard to
  • suitable texture in the scene and
  • the current field of view

S. Holzer, S. Ilic and N. Navab. Adaptive Linear Predictors for Real-Time Tracking, 2010.
Adaptive Linear Predictors: Subsets
• Sample points are grouped into subsets of 4 points: only whole subsets are added or removed
• Normalization is applied locally, not globally: each subset is normalized using its neighboring subsets; this is necessary since the global mean and standard deviation change when the template shape changes
Adaptive Linear Predictors: Template Extension
Goal: extend the initial template by an extension template
• Standard approach for a single template: $\mathbf{A} = \mathbf{Y}\mathbf{H}^\top(\mathbf{H}\mathbf{H}^\top)^{-1}$
• Standard approach for the combined template: stack the training data of both templates and invert the full matrix $\mathbf{H}\mathbf{H}^\top$ from scratch, which is expensive
Adaptive Linear Predictors: Template Extension
• The inverse matrix can be represented as a block matrix, with the initial template's part as one block
• Applying the block-inversion formulas of Henderson and Searle then reuses the known inverse
• Only the inversion of one small matrix is necessary, since the inverse of the large block is known from the initial template
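The block-inversion step can be sketched generically: if $M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$ and $A^{-1}$ is already known, only the small Schur complement $S = D - C A^{-1} B$ has to be inverted. This is a generic illustration of the Henderson and Searle formulas, not the ALPs code itself:

```python
import numpy as np

def block_inverse(A_inv, B, C, D):
    """Inverse of M = [[A, B], [C, D]] given A^{-1} (Henderson & Searle).
    Only the small Schur complement S = D - C A^{-1} B is inverted."""
    S_inv = np.linalg.inv(D - C @ A_inv @ B)        # the only new inversion
    top_left = A_inv + A_inv @ B @ S_inv @ C @ A_inv
    top_right = -A_inv @ B @ S_inv
    bottom_left = -S_inv @ C @ A_inv
    return np.block([[top_left, top_right], [bottom_left, S_inv]])

# Demo: a large well-conditioned block A (known inverse) extended by a
# small block D, as when extending a template.
rng = np.random.default_rng(1)
n, m = 8, 2
A = rng.random((n, n)) + n * np.eye(n)
B, C = rng.random((n, m)), rng.random((m, n))
D = rng.random((m, m)) + m * np.eye(m)
M = np.block([[A, B], [C, D]])
M_inv = block_inverse(np.linalg.inv(A), B, C, D)
```

In the ALPs setting the large block corresponds to the already-learned part of $\mathbf{H}\mathbf{H}^\top$, so extending the template costs only a small-matrix inversion.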
Adaptive Linear Predictors: Adding New Samples
• The number of random transformations must be greater than or at least equal to the number of sample points: the presented approach is limited by the number of transformations used for initial learning
• This restriction can be overcome by adding new training data on-the-fly; this can be accomplished in real time using the Sherman-Morrison formula. For a new image-difference sample $\mathbf{u}$ appended as a column of $\mathbf{H}$:
$$(\mathbf{H}\mathbf{H}^\top + \mathbf{u}\mathbf{u}^\top)^{-1} = (\mathbf{H}\mathbf{H}^\top)^{-1} - \frac{(\mathbf{H}\mathbf{H}^\top)^{-1}\mathbf{u}\,\mathbf{u}^\top(\mathbf{H}\mathbf{H}^\top)^{-1}}{1 + \mathbf{u}^\top(\mathbf{H}\mathbf{H}^\top)^{-1}\mathbf{u}}$$
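A rank-one inverse update of this kind is a few lines of NumPy. The sketch below shows the general Sherman-Morrison formula $(A + \mathbf{u}\mathbf{v}^\top)^{-1}$; for the sample-adding case above, set $\mathbf{v} = \mathbf{u}$ and take $A = \mathbf{H}\mathbf{H}^\top$ (generic illustration, not the authors' code):

```python
import numpy as np

def sherman_morrison_update(A_inv, u, v):
    """Return (A + u v^T)^{-1} given A^{-1}, via the Sherman-Morrison
    formula: A^{-1} - (A^{-1} u v^T A^{-1}) / (1 + v^T A^{-1} u).
    Cost is O(n^2) instead of the O(n^3) of a fresh inversion."""
    Au = A_inv @ u
    vA = v @ A_inv
    return A_inv - np.outer(Au, vA) / (1.0 + v @ Au)

# Demo: update the inverse of a well-conditioned matrix by a rank-one term.
rng = np.random.default_rng(2)
n = 6
A = rng.random((n, n)) + n * np.eye(n)
u, v = rng.random(n), rng.random(n)
A_inv_updated = sherman_morrison_update(np.linalg.inv(A), u, v)
```

Since each new training sample only touches the cached inverse, the predictor can keep learning while the tracker runs.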
Thank you for your attention! Questions?