
Learning to Rank using High-Order Information

Puneet K. Dokania¹, A. Behl², C. V. Jawahar², M. Pawan Kumar¹
¹Ecole Centrale Paris and INRIA Saclay, France; ²IIIT Hyderabad, India



Aim

• Optimize average precision (AP), a ranking-based measure of performance.
• Incorporate high-order information into learning to rank.

Motivations and Challenges

• High-order information helps classification: for example, persons in the same image are likely to perform the same action, so context helps decide the action inside a bounding box.
• However, no existing method learns parameters with both high-order information and a ranking objective.

Average Precision Optimization

• AP is the most commonly used evaluation metric.
• The AP loss depends on the ranking of the samples.
• Optimizing the 0-1 loss may therefore lead to a suboptimal AP.

Experimental Setup: Action Classification

• Problem formulation: given an image and a bounding box in the image, predict the action being performed in the bounding box.
• Dataset: PASCAL VOC 2011, 10 action classes, 4846 images (2424 'trainval' + 2422 'test').
• Features: POSELET + GIST.
• High-order information: "persons in the same image are likely to perform the same action", encoded by connecting bounding boxes that belong to the same image through the joint feature map.
• HOB-SVM incorporates this high-order information and optimizes a decomposable loss; max-marginals are used to obtain per-sample scores.
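To make the last point concrete, here is a minimal illustrative sketch (not from the poster; the function name and the example rankings are my own) of how AP is computed from a ranked list, showing that two predictions with identical 0-1 accuracy can have different AP:

```python
def average_precision(labels_in_rank_order):
    """AP = mean over positive samples of the precision at that sample's rank."""
    hits, precisions = 0, []
    for rank, y in enumerate(labels_in_rank_order, start=1):
        if y == 1:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions)

# Two rankings of 3 positives and 3 negatives (scores thresholded at 0):
# each misclassifies exactly one sample, so both have accuracy 5/6,
# yet their AP values differ.
ranking_a = [1, 1, 1, 0, 0, 0]  # one negative scored just above the threshold
ranking_b = [1, 1, 0, 0, 0, 1]  # one positive pushed to the bottom of the list

print(average_precision(ranking_a))  # 1.0
print(average_precision(ranking_b))  # (1 + 1 + 0.5) / 3 ≈ 0.833
```

Since the 0-1 loss is blind to where the misclassified sample lands in the ranking, a learner that optimizes accuracy has no incentive to prefer the first ranking over the second.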
Notation

• Samples and their labels; the set of positive samples P; the set of negative samples N; the ranking matrix.

SVM (baseline)

• No high-order information.
• Ranking: sort the per-sample scores.
• Optimization: convex.
• Illustration (from the poster's figure): two rankings can both have accuracy 1 while one achieves AP = 1 and the other only AP = 0.55, so accuracy alone does not determine AP.

AP-SVM

• Key idea: uses a structural SVM (SSVM) to encode the ranking through a joint score.
• Optimizes the AP-based loss, a measure of ranking; the AP loss does not decompose over samples.
• No high-order information.
• Ranking: sort the scores.
• Optimization: convex; cutting-plane training, with a greedy search for the most violated constraint in O(|P||N|).

HOB-SVM

• Incorporates high-order information; optimizes a decomposable loss.
• Single score per sample: the difference of max-marginal scores, which captures the high-order information.
• Ranking: sort the differences of max-marginal scores.
• Dynamic graph cuts allow fast computation of the max-marginals.
• Optimization: convex.

HOAP-SVM

• Optimizes the AP-based loss and incorporates high-order information, combining the ranking encoding of AP-SVM with the high-order encoding of HOB-SVM.
• Per-sample scores as in HOB-SVM (max-marginals); joint score as in AP-SVM.
• Ranking: sort the scores.
• Optimization: non-convex; rewritten as a difference of convex functions and solved with CCCP, using dynamic graph cuts to compute a fast upper bound.

Results (paired t-test)

• HOB-SVM is better than SVM on 6 of the 10 action classes.
• HOB-SVM is not better than AP-SVM.
• HOAP-SVM is better than SVM on 6 action classes.
• HOAP-SVM is better than AP-SVM on 4 action classes.

Conclusions

• HOAP-SVM optimizes AP, a measure of ranking, while incorporating high-order information.

Code and Data: http://cvn.ecp.fr/projects/ranking-highorder/
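The HOB-SVM ranking step can be sketched as follows. This is an illustrative brute-force version: the poster computes max-marginals efficiently with dynamic graph cuts, whereas here every joint labeling is enumerated; the pairwise model, the weights, and all function names are my own assumptions:

```python
import itertools

def joint_score(y, unary, edges, w_pair):
    """Joint score: unary terms plus a reward when connected samples
    (bounding boxes from the same image) take the same label."""
    score = sum(unary[i][y[i]] for i in range(len(y)))
    score += sum(w_pair for i, j in edges if y[i] == y[j])
    return score

def max_marginal(i, c, unary, edges, w_pair):
    """Best joint score over all labelings with sample i clamped to label c.
    Brute force for illustration; dynamic graph cuts make this fast in practice."""
    n = len(unary)
    return max(
        joint_score(y, unary, edges, w_pair)
        for y in itertools.product((0, 1), repeat=n)
        if y[i] == c
    )

def rank_by_max_marginals(unary, edges, w_pair):
    """HOB-SVM's single score per sample: the difference of max-marginals.
    Sorting these differences yields the ranking."""
    diffs = [
        max_marginal(i, 1, unary, edges, w_pair)
        - max_marginal(i, 0, unary, edges, w_pair)
        for i in range(len(unary))
    ]
    return sorted(range(len(unary)), key=lambda i: -diffs[i]), diffs

# Three bounding boxes; boxes 0 and 1 come from the same image.
unary = [[0.0, 2.0], [0.0, 1.5], [0.0, -1.0]]   # unary[i][label]
order, diffs = rank_by_max_marginals(unary, edges=[(0, 1)], w_pair=1.0)
print(order)   # [0, 1, 2]
```

Because the max-marginal of each box is computed over joint labelings of all connected boxes, a box's score is raised when the rest of its image supports the same action, which is exactly how the high-order information enters the ranking.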

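HOAP-SVM's non-convex objective is handled by writing it as a difference of convex functions and applying CCCP. The following toy 1-D sketch (not the actual HOAP-SVM objective; purely illustrative) shows the CCCP iteration on f(x) = x⁴ − x², with u(x) = x⁴ and v(x) = x² both convex: each step linearizes v at the current point and minimizes the resulting convex upper bound in closed form:

```python
def cccp_minimize(x, iters=60):
    """CCCP for f(x) = u(x) - v(x) with u(x) = x**4 and v(x) = x**2 (toy example).
    Each iteration solves x_{t+1} = argmin_x u(x) - v'(x_t) * x.
    Setting d/dx [x**4 - 2 * x_t * x] = 0 gives x = (x_t / 2) ** (1/3)."""
    for _ in range(iters):
        x = (x / 2.0) ** (1.0 / 3.0)
    return x

x_star = cccp_minimize(1.0)
print(x_star)  # ≈ 0.7071, the minimizer of x**4 - x**2 (x = 1/sqrt(2))
```

Each CCCP step decreases the true objective because the linearized surrogate is a convex upper bound that touches f at the current iterate; the same monotone-descent guarantee is what makes the procedure usable for HOAP-SVM's non-convex learning problem.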