Learning with Ambiguously Labeled Training Data Kshitij Judah Ph.D. student Advisor: Prof. Alan Fern Qualifier Oral Presentation EECS, OSU
Outline • Introduction and Motivation • Problem Definition • Learning with Partial Labels • Semi-Supervised Learning • Multiple-Instance Learning • Conclusion
Introduction and Motivation • Supervised learning (diagram): a learning algorithm takes training data D and produces a hypothesis f, which is evaluated on test points using a loss function (From CS534 Machine Learning slides)
Introduction and Motivation • Obtaining labeled training data is difficult • Reasons: • Requires substantial human effort (Information Retrieval) • Requires expensive tests (Medical Diagnosis, Remote Sensing) • Disagreement among experts (Information Retrieval, Remote Sensing)
Introduction and Motivation • Reasons: • Specifying a single correct class is not possible, but ruling out incorrect classes is (Medical Diagnosis) • Labeling is not possible at the instance level (Drug Activity Prediction) • Objective: Present the space of learning problems and algorithms that deal with ambiguously labeled training data
Space of Learning Problems (taxonomy diagram) • Direct Supervision (Labels), with probabilistic or deterministic labels: Supervised Learning, Learning with Partial Labels, Semi-Supervised Learning, Multiple-Instance Learning • Indirect Supervision (No Labels): Reinforcement Learning, Session-based Learning, Unsupervised Learning
Problem Definition • Let X = X_1 × … × X_n denote an instance space, X_i being the domain of the ith feature • An instance x ∈ X is a vector (x_1, …, x_n); Y denotes a discrete-valued class variable whose domain {y_1, …, y_m} is the set of all m classes • Training data D is a set of examples (x_i, c_i), where c_i is a vector of length m such that its kth entry is 1 if y_k is a possible class of x_i, otherwise 0 • In case of an unambiguously labeled x_i, exactly one entry in c_i is 1
Problem Definition • In case of ambiguous labeling, more than one entry of c_i can be 1 • The learning task is to select, using D, an optimal hypothesis f from a hypothesis space H • f is used to classify a new instance x • Optimality refers to expected classification accuracy
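The candidate-label vector from the definition above can be sketched in a few lines of Python (a minimal illustration; the class names and helper function are invented for this example, not from the slides):

```python
# Encode a candidate label set as a 0/1 vector over m classes,
# matching the vector c_i in the problem definition.
CLASSES = ["cat", "dog", "bird"]  # illustrative class domain, m = 3

def candidate_vector(candidates, classes=CLASSES):
    """kth entry is 1 if classes[k] is a possible label, else 0."""
    return [1 if c in candidates else 0 for c in classes]

ambiguous = candidate_vector({"cat", "dog"})  # more than one entry is 1
unambiguous = candidate_vector({"bird"})      # exactly one entry is 1
```

An unlabeled instance corresponds to the all-ones vector: every class is a candidate.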
Learning with Partial Labels
Problem Definition • Each training instance x_i has a set of candidate class labels C_i associated with it • C_i is assumed to contain the true label of x_i • Different from multiple-label learning, because each instance belongs to exactly one class • Applications: medical diagnosis, bioinformatics, information retrieval
Taxonomy of Algorithms for Learning with Partial Labels • Probabilistic Generative Approaches • Mixture Models + EM • Probabilistic Discriminative Approaches • Logistic Regression for Partial Labels
Probabilistic Generative Approaches • Learn the joint probability distribution P(x, y) = P(x | y) P(y) • P(x, y) can be used to either model the data or perform classification • P(x | y) and P(y) are learned using D • Classification of x is performed using Bayes rule, P(y | x) ∝ P(x | y) P(y), and assigning x to the class y that maximizes P(y | x)
Probabilistic Generative Approaches • Some examples of generative models are • Gaussian Distribution • Mixture Models e.g. Gaussian mixture model • Directed Graphical Models e.g. Naïve Bayes, HMMs • Undirected Graphical Models e.g. Markov Networks • Applications: Robotics, speech recognition, computer vision, forecasting, time series prediction etc.
Mixture Models • In a mixture model, the probability density function of x is given by: P(x | Θ) = Σ_{j=1..M} α_j p_j(x | θ_j), where the mixing weights α_j sum to 1 • The model parameters Θ = (α_1, …, α_M, θ_1, …, θ_M) are often learned using maximum likelihood estimation (MLE)
Mixture Models • In the MLE approach, assuming instances in D are independently and identically distributed (i.i.d.), the joint density of D is given by P(D | Θ) = Π_i P(x_i | Θ) • The goal is to find parameters Θ* that maximize the log of the above quantity: log P(D | Θ) = Σ_i log Σ_j α_j p_j(x_i | θ_j)
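The mixture log-likelihood above is easy to compute directly; here is a sketch for a one-dimensional Gaussian mixture (the data and parameter values are invented for illustration):

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of a 1-D Gaussian with mean mu and std sigma."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def mixture_log_likelihood(data, weights, mus, sigmas):
    """log P(D|Theta) = sum_i log sum_j alpha_j p_j(x_i | theta_j)."""
    ll = 0.0
    for x in data:
        ll += math.log(sum(a * normal_pdf(x, m, s)
                           for a, m, s in zip(weights, mus, sigmas)))
    return ll

data = [0.1, -0.2, 5.0, 5.3]  # two clusters, near 0 and near 5
ll = mixture_log_likelihood(data, [0.5, 0.5], [0.0, 5.0], [1.0, 1.0])
```

A model whose component means match the clusters scores a much higher log-likelihood than one whose means are far away, which is exactly what MLE exploits.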
Expectation-Maximization Algorithm (EM) • Difficult to optimize: a log of sums • Incomplete-data likelihood function: the generating component is unknown • Convert to a complete-data likelihood by introducing a random variable z_i which tells, for each x_i, which component generated it • If we observe z, the complete-data log-likelihood is log P(D, z | Θ) = Σ_i log (α_{z_i} p_{z_i}(x_i | θ_{z_i}))
Expectation-Maximization Algorithm (EM) • We do not, however, observe z directly; it is a hidden variable • Use EM to handle hidden variables • EM is an iterative algorithm with two basic steps: • E-Step: Given the current estimate Θ^t of the model parameters and data D, the expected complete-data log-likelihood Q(Θ, Θ^t) = E_z[log P(D, z | Θ) | D, Θ^t] is calculated, where the expectation is with respect to the posterior distribution of z, i.e. for each x_i: p(j | x_i, Θ^t) = α_j p_j(x_i | θ_j) / Σ_k α_k p_k(x_i | θ_k)
Expectation-Maximization Algorithm (EM) • M-Step: In this, we maximize the expectation Q(Θ, Θ^t) computed in the E-step: Θ^{t+1} = argmax_Θ Q(Θ, Θ^t) • Repeat until convergence
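The two steps above can be sketched for a two-component 1-D Gaussian mixture. To keep the example short, the variances and mixing weights are held fixed and only the means are re-estimated (an assumption of this sketch, not of EM in general):

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def em_gmm_means(data, mus, n_iter=50, sigma=1.0, weights=(0.5, 0.5)):
    """EM for a 2-component 1-D Gaussian mixture, re-estimating means only."""
    mus = list(mus)
    for _ in range(n_iter):
        # E-step: responsibility p(j | x_i) of each component for each point
        resp = []
        for x in data:
            p = [w * normal_pdf(x, m, sigma) for w, m in zip(weights, mus)]
            z = sum(p)
            resp.append([pj / z for pj in p])
        # M-step: means become responsibility-weighted averages of the data
        for j in range(2):
            num = sum(r[j] * x for r, x in zip(resp, data))
            den = sum(r[j] for r in resp)
            mus[j] = num / den
    return mus

data = [-0.1, 0.2, 0.0, 4.9, 5.1, 5.0]  # two clusters, near 0 and near 5
mus = em_gmm_means(data, mus=[1.0, 4.0])
```

Starting from the rough guesses 1.0 and 4.0, the means converge toward the two cluster centers.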
Mixture Models + EM for Partial Labels • C_i: the set of possible mixture components that could have generated the sample x_i • EM is modified so that z_i is restricted to take values from C_i • Modified E-Step: p(j | x_i, Θ^t) ∝ α_j p_j(x_i | θ_j) if j ∈ C_i, and 0 otherwise, renormalized over C_i
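The modified E-step amounts to zeroing out the responsibilities of non-candidate components and renormalizing; a minimal sketch (the likelihood values here are made up for illustration):

```python
def restricted_responsibilities(likelihoods, weights, candidates):
    """Modified E-step for partial labels: the generating component z_i is
    restricted to the candidate set C_i. Non-candidates get responsibility 0;
    the rest are renormalized. likelihoods[j] is p_j(x_i | theta_j)."""
    raw = [w * l if j in candidates else 0.0
           for j, (w, l) in enumerate(zip(weights, likelihoods))]
    z = sum(raw)
    return [r / z for r in raw]

# Three components; the candidate set for this instance is {0, 2},
# so component 1 is excluded even though its likelihood is highest.
resp = restricted_responsibilities([0.2, 0.9, 0.2], [1/3, 1/3, 1/3], {0, 2})
```

With an unambiguous label, C_i is a singleton and the responsibility collapses onto the true component, recovering ordinary supervised learning.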
Pros and Cons of Generative Models • Pros • Clear probabilistic framework • Effective when model assumptions are correct • Cons • Complex, with many parameters • Difficult to specify and verify their correctness • Unlabeled data can hurt performance if the model is incorrect • Parameter estimation is difficult • EM gets stuck in local maxima and is computationally expensive • No guarantee of better classification
Discriminative Approaches • Discriminative approaches attempt to directly learn the mapping from x to y required for classification • Simpler than generative models • More focused towards classification or regression tasks • Two types: • Probabilistic: learn the posterior class probability P(y | x) • Non-probabilistic: learn a classifier f: X → Y directly • Applications: text classification, information retrieval, machine perception, time series prediction, bioinformatics etc.
Logistic Regression (From CS534 Machine Learning slides) • Learns the conditional class distribution P(y | x) • For the two-class case {0, 1}, the probabilities are given as: P(y = 1 | x) = 1 / (1 + exp(-(w·x + b))) and P(y = 0 | x) = 1 - P(y = 1 | x) • For the 0-1 loss function, LR is a linear classifier • For the multi-class case, we learn weight parameters for each class
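The two-class posterior above is just a sigmoid of a linear score; a minimal sketch (the weights and input are arbitrary illustrative values):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_proba(x, w, b):
    """P(y = 1 | x) for binary logistic regression with weights w, bias b."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

p = predict_proba([2.0, -1.0], w=[1.0, 0.5], b=0.0)  # score w.x + b = 1.5
```

The decision boundary w·x + b = 0 is where P(y = 1 | x) = 0.5, which is why LR is a linear classifier under 0-1 loss.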
Maximum Entropy Logistic Regression • Learns P(y | x) such that it has maximum entropy while being consistent with the partial labels in D • P(y | x) is close to a uniform distribution over the classes in the candidate set and zero everywhere else • Presence of unlabeled data results in a P(y | x) with low discrimination capacity • Disadvantages: • Does not use unlabeled data to enhance learning • Instead uses unlabeled data in an unfavorable way that decreases the discrimination capacity of P(y | x)
Minimum Commitment Logistic Regression • Learns P(y | x) such that classes belonging to the candidate set are predicted with high probability • No preference is made among distributions that satisfy the above property • Presence of unlabeled data has no effect on the discrimination capacity of P(y | x) • Disadvantage: • Does not use unlabeled data to enhance learning
Self-Consistent Logistic Regression • Learns P(y | x) such that the entropy of the distribution is minimized • Learning is driven by the predictions produced by the learner itself • Advantage: • Can make use of unlabeled data to enhance learning • Disadvantage: • If the initial estimate is incorrect, the algorithm will go down the wrong path
Semi-Supervised Learning
Problem Definition • Semi-supervised learning (SSL) is a special case of learning with partial labels • D = D_L ∪ D_U, where D_L is the labeled set and D_U the unlabeled set • For each x_i in D_L, y_i = the true class label of x_i • For each x_i in D_U, the candidate set is the entire set of classes (no label) • Usually |D_U| ≫ |D_L|, because unlabeled data is easily available compared to labeled data, e.g. in information retrieval • SSL approaches make use of D_U in conjunction with D_L to enhance learning • SSL has been applied to many domains like text classification, image classification, audio classification etc.
Taxonomy of Algorithms for Semi-Supervised Learning • Probabilistic Generative Approaches • Mixture Models + EM • Discriminative Approaches • Semi-supervised support vector machines (S3VMs) • Self Training • Co-Training
Mixture Models + EM for SSL • Nigam et al. proposed a probabilistic generative framework for document classification • Assumptions: (1) Documents are generated by a mixture model (2) One-to-one correspondence between mixture components and classes • The probability of a document d is given as: P(d | Θ) = Σ_j P(c_j | Θ) P(d | c_j, Θ) • P(d | c_j, Θ) was modeled using Naïve Bayes
Mixture Models + EM for SSL • Θ is estimated using MLE by maximizing the log-likelihood of D = D_L ∪ D_U: l(Θ | D) = Σ_{x_i ∈ D_L} log P(y_i | Θ) P(x_i | y_i, Θ) + Σ_{x_i ∈ D_U} log Σ_j P(c_j | Θ) P(x_i | c_j, Θ) • EM is used to maximize l(Θ | D) • Applications: text classification, remote sensing, face orientation discrimination, pose estimation, audio classification
Mixture Models + EM for SSL • Problem: Unlabeled data decreased performance • Reason: Incorrect modeling assumption • One mixture component per class: Class includes multiple sub-topics • Solutions: • Decrease effect of unlabeled data • Correct modeling assumptions: Multiple mixture components per class
SVMs (figure: a maximum-margin linear separator between Class +1 and Class -1)
SVMs • Idea: Do the following things: A. Maximize the margin B. Classify points correctly C. Keep data points outside the margin • SVM optimization problem: min_{w,b} (1/2)‖w‖² + C Σ_i ξ_i subject to y_i(w·x_i + b) ≥ 1 - ξ_i, ξ_i ≥ 0
SVMs • A. is achieved by minimizing ‖w‖² • B. and C. are achieved using the hinge loss max(0, 1 - y_i f(x_i)) • The parameter C controls the trade-off between training error and generalization error From SSL tutorial by Xiaojin Zhu at ICML 2007
S3VMs • Also known as Transductive SVMs From SSL tutorial by Xiaojin Zhu at ICML 2007
S3VMs • Idea: Do the following things: A. Maximize the margin B. Correctly classify the labeled data C. Keep data points (labeled and unlabeled) outside the margin • S3VM optimization problem: min_{w,b} (1/2)‖w‖² + C_1 Σ_{labeled} max(0, 1 - y_i f(x_i)) + C_2 Σ_{unlabeled} max(0, 1 - |f(x_j)|)
S3VMs • A. is achieved by minimizing ‖w‖² • B. is achieved using the hinge loss • C. is achieved for labeled data using the hinge loss max(0, 1 - y_i f(x_i)) and for unlabeled data using the hat loss max(0, 1 - |f(x_j)|) From SSL tutorial by Xiaojin Zhu at ICML 2007
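The two losses are one-liners, and comparing them shows the key difference: hinge loss needs a label, while hat loss only pushes points out of the margin, whichever side they end up on:

```python
def hinge_loss(y, f):
    """Hinge loss for a labeled point (y in {-1, +1}, f = f(x)):
    penalizes points on the wrong side of their label's margin."""
    return max(0.0, 1.0 - y * f)

def hat_loss(f):
    """Hat loss for an unlabeled point: penalizes points inside the
    margin, regardless of which side of the boundary they fall on."""
    return max(0.0, 1.0 - abs(f))
```

A point on the decision boundary (f(x) = 0) gets the maximum hat loss of 1, which is why S3VMs prefer boundaries that pass through low-density regions of the unlabeled data.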
S3VMs • Pros • Same mathematical foundations as SVMs and hence applicable to tasks where SVMs are applicable • Cons • Optimization is difficult; finding the exact solution is NP-hard • Applications: text classification, bioinformatics, biological entity recognition, image retrieval etc.
Self Training • A learner uses its own predictions to teach itself • Self-training algorithm: • Train a classifier h using D_L • Use h to predict labels of instances in D_U • Add the most confidently labeled instances in D_U to D_L • Repeat until a stopping criterion is met
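The loop above can be sketched with any base learner; here a toy nearest-centroid classifier stands in (the data, the confidence measure, and the helper names are all invented for this illustration):

```python
def centroid(points):
    return [sum(c) / len(points) for c in zip(*points)]

def train(labeled):
    """Toy nearest-centroid base learner. Returns h(x) -> (label, confidence),
    where confidence is the gap between the two closest class centroids."""
    cents = {y: centroid([x for x, yy in labeled if yy == y])
             for y in {yy for _, yy in labeled}}
    def h(x):
        d = sorted((sum((a - b) ** 2 for a, b in zip(x, c)), y)
                   for y, c in cents.items())
        conf = d[1][0] - d[0][0] if len(d) > 1 else 1.0
        return d[0][1], conf
    return h

def self_train(labeled, unlabeled, n_rounds=3):
    """Each round: train on the current labeled set, then move the single
    most confidently labeled unlabeled point into the labeled set."""
    labeled, unlabeled = list(labeled), list(unlabeled)
    for _ in range(n_rounds):
        if not unlabeled:
            break
        h = train(labeled)
        conf, x, y = max((h(x)[1], x, h(x)[0]) for x in unlabeled)
        labeled.append((x, y))
        unlabeled.remove(x)
    return train(labeled)

labeled = [((0.0, 0.0), "neg"), ((5.0, 5.0), "pos")]
unlabeled = [(0.2, 0.1), (4.8, 5.1), (5.2, 4.9)]
h = self_train(labeled, unlabeled)
```

The stopping criterion here is a fixed round count; in practice one stops when D_U is empty or confidence drops below a threshold.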
Self Training • Pros • One of the simplest SSL algorithms • Can be used with any classifier • Cons • Errors in initial predictions can build up and affect subsequent learning • A possible solution: identify and remove mislabeled examples from the self-labeled training set during each iteration, e.g. the SETRED algorithm [Ming Li and Zhi-Hua Zhou] • Applications: object detection in images, word sense disambiguation, semantic role labeling, named entity recognition
Co-Training • Idea: Two classifiers h_1 and h_2 collaborate with each other during learning • h_1 and h_2 use disjoint subsets F_1 and F_2 of the feature set • Each of F_1 and F_2 is assumed to be: • Sufficient for the classification task at hand • Conditionally independent of the other given the class
Co-Training • Example: web page classification • F_1 can be the text present on the web page • F_2 can be the anchor text on pages that link to this page • The two feature sets are sufficient for classification and conditionally independent given the class of the web page, because each is generated by a different person who knows the class
Co-Training • Co-Training algorithm: • Use the F_1 view of D_L to train h_1 and the F_2 view of D_L to train h_2 • Use h_1 to predict labels of instances in D_U • Use h_2 to predict labels of instances in D_U • Add h_1's most confidently labeled instances to D_L • Add h_2's most confidently labeled instances to D_L • Repeat until a stopping criterion is met
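The algorithm above can be sketched with two toy one-feature views, each classified by a nearest-centroid learner (all data, names, and the confidence measure are invented for this illustration):

```python
def centroid(vals):
    return sum(vals) / len(vals)

def train_view(labeled, view):
    """Toy 1-D nearest-centroid learner on a single feature view (0 or 1).
    Returns h(x) -> (label, confidence)."""
    cents = {y: centroid([x[view] for x, yy in labeled if yy == y])
             for y in {yy for _, yy in labeled}}
    def h(x):
        d = sorted((abs(x[view] - c), y) for y, c in cents.items())
        conf = d[1][0] - d[0][0] if len(d) > 1 else 1.0
        return d[0][1], conf
    return h

def co_train(labeled, unlabeled, n_rounds=2):
    """Each round: train h1 on view 0 and h2 on view 1; each classifier
    moves its most confidently labeled point into the shared labeled set,
    thereby teaching the other view."""
    labeled, unlabeled = list(labeled), list(unlabeled)
    for _ in range(n_rounds):
        for view in (0, 1):
            if not unlabeled:
                break
            h = train_view(labeled, view)
            conf, x, y = max((h(x)[1], x, h(x)[0]) for x in unlabeled)
            labeled.append((x, y))
            unlabeled.remove(x)
    return train_view(labeled, 0), train_view(labeled, 1)

# Each instance is a pair (view-1 feature, view-2 feature)
labeled = [((0.0, 0.1), "neg"), ((5.0, 4.9), "pos")]
unlabeled = [(0.3, 0.0), (4.7, 5.2)]
h1, h2 = co_train(labeled, unlabeled)
```

Because each classifier only ever looks at its own view, a point labeled confidently by one view acts as a fresh training example for the other.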
Co-Training • Pros • Effective when the required feature split exists • Can be used with any classifier • Cons • What if a feature split does not exist? Answer: (1) Use a random feature split (2) Use the full feature set • How to select confidently labeled instances? Answer: Use multiple classifiers instead of two and do majority voting, e.g. tri-training uses three classifiers [Zhou and Li], democratic co-learning uses multiple classifiers [Zhou and Goldman] • Applications: web page classification, named entity recognition, semantic role labeling
Multiple-Instance Learning
Problem Definition • A set of objects O = {O_1, …, O_n} • Each object O_i has multiple variants called instances • B_i = {x_i1, …, x_im}, the set of instances of O_i, is called a bag • A bag is labeled positive if at least one of its instances is positive, else negative • Goal: learn f: B → {+1, -1}
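The bag-labeling rule above is the standard MIL assumption and takes one line to state in code (the label values are the {+1, -1} convention from the slide):

```python
def bag_label(instance_labels):
    """Standard MIL assumption: a bag is positive iff at least one
    of its instances is positive."""
    return +1 if any(l == +1 for l in instance_labels) else -1

pos_bag = bag_label([-1, -1, +1])  # one positive instance suffices
neg_bag = bag_label([-1, -1, -1])  # all instances must be negative
```

The ambiguity is that the learner sees only the bag label, never the per-instance labels used here.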
Why is MIL an ambiguous label problem? • To learn f over bags, learn a classifier g over instances • g maps an instance to a class • But D does not contain labels at the instance level; a positive bag only says that at least one of its instances is positive
Taxonomy of Algorithms for Multiple-Instance Learning • Probabilistic Generative Approaches • Diverse Density Algorithm (DD) • EM Diverse Density (EM-DD) • Probabilistic Discriminative Approaches • Multiple-instance logistic regression • Non-probabilistic Discriminative Approaches • Axis-Parallel Rectangles (APR) • Multiple-instance SVM (MISVM)
Diverse Density Algorithm (DD) (figure: instances from several bags plotted in a 2-D feature space x_1 × x_2, each digit marking the bag it came from; Point A lies in a region where instances from many different bags overlap, Point B does not)
Diverse Density Algorithm (DD) • Diverse density of a point t is a probabilistic quantity, DD(t) = Π_{positive bags B_i⁺} Pr(t | B_i⁺) · Π_{negative bags B_i⁻} Pr(t | B_i⁻), which is high at points close to at least one instance from every positive bag and far from all instances of negative bags • Using gradient ascent, the maximum diverse density point can be located • Cons: Computationally expensive • Applications: person identification, stock selection, natural scene classification
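The product above is commonly instantiated with a noisy-or model; a minimal sketch (the Gaussian-like instance model and the toy bags are assumptions of this example, and gradient ascent is omitted):

```python
import math

def p_instance(x, t):
    """Gaussian-like model of Pr(instance x belongs to concept t)."""
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, t)))

def diverse_density(t, pos_bags, neg_bags):
    """Noisy-or diverse density: high where some instance of EVERY positive
    bag is close to t and all negative-bag instances are far from t."""
    dd = 1.0
    for bag in pos_bags:  # Pr(t | B+) = 1 - prod_j (1 - Pr(x_ij in t))
        dd *= 1.0 - math.prod(1.0 - p_instance(x, t) for x in bag)
    for bag in neg_bags:  # Pr(t | B-) = prod_j (1 - Pr(x_ij in t))
        dd *= math.prod(1.0 - p_instance(x, t) for x in bag)
    return dd

pos_bags = [[(0.0, 0.0), (3.0, 3.0)], [(0.1, 0.1), (9.0, 9.0)]]
neg_bags = [[(3.0, 3.0), (9.0, 9.0)]]
# Near (0, 0), both positive bags contribute a close instance and the
# negative instances are far away; near (3, 3), a negative instance sits
# right on top of the candidate point.
dd_good = diverse_density((0.0, 0.0), pos_bags, neg_bags)
dd_bad = diverse_density((3.0, 3.0), pos_bags, neg_bags)
```

In the full DD algorithm, this quantity is maximized over t by gradient ascent from multiple starting points.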