
Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Panos Ipeirotis, New York University. Joint work with Jing Wang, Foster Provost, and Victor Sheng.


Presentation Transcript


  1. Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers. Panos Ipeirotis, New York University. Joint work with Jing Wang, Foster Provost, and Victor Sheng

  2. Outsourcing machine learning preprocessing • Traditionally, modeling teams have invested substantial internal resources in data formulation, information extraction, cleaning, and other preprocessing • Now we can outsource preprocessing tasks such as labeling, feature extraction, and verifying information extraction, using Mechanical Turk, oDesk, etc. • Quality may be (much?) lower than expert labeling, but low costs can allow massive scale

  3. Example: Build an “Adult Web Site” Classifier • Need a large number of hand-labeled sites • Get people to look at sites and classify them as: G (general audience), PG (parental guidance), R (restricted), X (porn) • Cost/Speed Statistics • Undergrad intern: 200 websites/hr, cost: $15/hr • MTurk: 2500 websites/hr, cost: $12/hr

  4. Noisy labels can be problematic • Many tasks rely on high-quality labels for objects: webpage classification for safe advertising, learning predictive models, searching for relevant information, finding duplicate database records, image recognition/labeling, song categorization, sentiment analysis • Noisy labels can lead to degraded task performance

  5. Quality and Classification Performance • Labeling quality increases → classification quality increases • [Chart: learning curves for single-labeler quality P = 100%, 80%, 60%, 50%, where P is the probability of assigning a binary label correctly]

  6. Solutions • Get better labelers • Often beyond our control or too expensive • Get more labelers per item • Our focus

  7. Majority Voting and Label Quality • Ask multiple labelers, keep majority label as “true” label • Quality is probability of being correct • [Chart: majority-vote quality vs. number of labelers, for individual labeler quality P = 0.4 through 1.0, where P is the probability of an individual labeler being correct]
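For intuition, the majority-vote quality curves behind this chart can be reproduced with a simple binomial calculation. A minimal sketch in Python (an odd number of labelers is assumed so ties cannot occur):

```python
# Probability that the majority of n independent labelers, each correct
# with probability p, assigns the correct binary label.
from math import comb

def majority_quality(p: float, n_labelers: int) -> float:
    majority = n_labelers // 2 + 1
    return sum(comb(n_labelers, k) * p**k * (1 - p)**(n_labelers - k)
               for k in range(majority, n_labelers + 1))

for p in (0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0):
    print(p, [round(majority_quality(p, n), 3) for n in (1, 3, 5, 11)])
```

Note the two regimes visible in the chart: for P > 0.5 quality climbs toward 1.0 as labelers are added, while for P < 0.5 more labelers make the integrated label worse.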

  8. Tradeoffs for Modeling • Get more examples → improve classification • Get more labels → improve label quality → improve classification • [Chart: learning curves for label quality P = 1.0, 0.8, 0.6, 0.5]

  9. Basic Labeling Strategies • Single Labeling • Get as many data points as possible • One label each • Round-robin Repeated Labeling • Fixed Round Robin (FRR) • keep labeling the same set of points in some order • Generalized Round Robin (GRR) • repeatedly label data points, giving next label to the one with the fewest so far
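A minimal sketch of Generalized Round Robin (GRR) as described above; `get_label` is a hypothetical stand-in for asking one more labeler (e.g., a Mechanical Turk worker):

```python
# GRR: repeatedly give the next label to the example with the fewest
# labels so far, until the labeling budget is exhausted.
import heapq

def generalized_round_robin(examples, get_label, budget):
    heap = [(0, i) for i in range(len(examples))]  # (label count, example id)
    heapq.heapify(heap)
    labels = {i: [] for i in range(len(examples))}
    for _ in range(budget):
        count, i = heapq.heappop(heap)
        labels[i].append(get_label(examples[i]))
        heapq.heappush(heap, (count + 1, i))
    return labels
```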

  10. Fixed Round Robin vs. Single Labeling • [Chart: label quality for single labeling vs. FRR on 50 examples, p = 0.8 labeling quality] • With low noise and few examples, more (single-labeled) examples is better

  11. Fixed Round Robin vs. Single Labeling • [Chart: label quality for single labeling (SL) vs. FRR on 100 examples, p = 0.6 labeling quality] • With high noise or many examples, repeated labeling is better than single labeling

  12. Selective Repeated-Labeling • We have seen so far: with enough examples and noisy labels, getting multiple labels is better than single labeling • When we consider the extra cost of acquiring the unlabeled part, the benefit is magnified • Can we do better than the basic strategies? • Key observation: we have additional information to guide the selection of data for repeated labeling: the current multiset of labels • Example: {+,-,+,-,-,+} vs. {+,+,+,+,+,+}

  13. Natural Candidate: Entropy • Entropy is a natural measure of label uncertainty: • E({+,+,+,+,+,+}) = 0 • E({+,-,+,-,-,+}) = 1 • Strategy: get more labels for high-entropy label multisets
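A minimal sketch of this entropy measure, reproducing the two examples above (and the 0.97 value discussed two slides below):

```python
# Entropy of a label multiset, in bits.
from collections import Counter
from math import log2

def label_entropy(labels) -> float:
    counts = Counter(labels)
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in counts.values()) + 0.0

print(label_entropy("++++++"))  # 0.0  -> unanimous, no uncertainty
print(label_entropy("+-+--+"))  # 1.0  -> evenly split, maximal uncertainty
print(label_entropy("+++--"))   # ~0.97, the same as {600+, 400-}
```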

  14. What Not to Do: Use Entropy • [Chart: label quality with entropy-based selection] • Entropy improves quality at first, but hurts it in the long run

  15. Why Not Entropy? • In the presence of noise, entropy will be high even with many labels • Entropy is scale invariant: {3+, 2-} has the same entropy as {600+, 400-}, i.e., 0.97

  16. Estimating Label Uncertainty (LU) • Observe +’s and –’s and compute Pr{+|obs} and Pr{-|obs} • Use Bayesian estimate of Bernoulli: Beta prior + update • Label uncertainty = tail of the Beta distribution • [Chart: Beta probability density function over [0.0, 1.0], with the tail below 0.5 shaded as the uncertainty score S_LU]

  17. Label Uncertainty • p=0.7 • 5 labels (3+, 2-) • Entropy ~ 0.97 • CDFb=0.34

  18. Label Uncertainty • p=0.7 • 10 labels (7+, 3-) • Entropy ~ 0.88 • CDFb=0.11

  19. Label Uncertainty • p=0.7 • 20 labels (14+, 6-) • Entropy ~ 0.88 • CDFb=0.04
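The CDF values on the last three slides can be reproduced with the Beta-tail computation from slide 16. A minimal sketch assuming a uniform Beta(1,1) prior, which matches the numbers shown:

```python
# Label uncertainty = posterior probability mass on the losing side of 0.5,
# with a Beta(pos+1, neg+1) posterior after observing pos/neg labels.
from scipy.stats import beta

def label_uncertainty(pos: int, neg: int) -> float:
    tail = beta.cdf(0.5, pos + 1, neg + 1)  # Pr{true Pr(+) < 0.5 | obs}
    return min(tail, 1 - tail)

print(round(label_uncertainty(3, 2), 2))    # 0.34  (5 labels)
print(round(label_uncertainty(7, 3), 2))    # 0.11  (10 labels)
print(round(label_uncertainty(14, 6), 2))   # 0.04  (20 labels)
```

Note how the score keeps shrinking as labels accumulate even though the entropy stays near 0.88: this is exactly the scale sensitivity that entropy lacks.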

  20. Label Uncertainty vs. Round Robin • [Chart: label quality for LU vs. GRR; similar results across a dozen data sets] • Remember: GRR is already better than single labeling

  21. More sophisticated label uncertainty • Observe +’s and –’s and compute Pr{+|obs} and Pr{-|obs} • Estimated Beta distribution = quality of labelers for the example • Estimate the prior Pr(+) by iterating

  22. More sophisticated LU • Improves labeling quality under class imbalance and fixes some pesky LU learning-curve glitches • Both techniques perform essentially optimally with balanced classes

  23. Another strategy: Model Uncertainty (MU) • Learning models of the data provides an alternative source of information about label certainty • (a random forest for the results to come) • Model uncertainty: get more labels for instances that cause model uncertainty • Intuition? • For modeling: why improve training data quality if the model already is certain there? • For data quality: low-certainty “regions” may be due to incorrect labeling of the corresponding instances • Self-healing process [Brodley et al., 99] • [Diagram: training examples (+/-) and the models learned from them]

  24. Yet another strategy: Label & Model Uncertainty (LMU) • Label and model uncertainty (LMU): avoid examples where either strategy is certain

  25. Label Quality • [Chart: labeling quality (0.6–1.0) vs. number of labels (0–2000) on waveform, p = 0.6, comparing UNF (uniform round robin), LU (label uncertainty), MU (model uncertainty), and LMU (label + model uncertainty); LMU is best] • Model Uncertainty alone also improves quality

  26. Model Quality • Across 12 domains, LMU is always better than GRR • LMU is statistically significantly better than LU and MU • [Chart: model quality learning curves for Label & Model Uncertainty]

  27. Why does Model Uncertainty (MU) work? • MU score distributions for correctly labeled (blue) and incorrectly labeled (purple) cases • [Diagram: training examples (+/-) and histogram of MU scores]

  28. Why does Model Uncertainty (MU) work? • Self-healing MU vs. “active learning” MU • [Diagram: the self-healing process over examples and models]

  29. Adult content classification

  30. Example: Build an “Adult Web Site” Classifier • Need a large number of hand-labeled sites • Get people to look at sites and classify them as: G (general audience), PG (parental guidance), R (restricted), X (porn) • Cost/Speed Statistics • Undergrad intern: 200 websites/hr, cost: $15/hr • MTurk: 2500 websites/hr, cost: $12/hr

  31. Bad news: Spammers! • Worker ATAMRO447HWJQ labeled X (porn) sites as G (general audience)

  32. Redundant votes, infer quality • Look at our spammer friend ATAMRO447HWJQ together with 9 other workers • Using redundancy, we can compute error rates for each worker

  33. Repeated Labeling, EM, and Confusion Matrices (Dawid & Skene, 1979) • Iterative process to estimate worker error rates: 1. Initialize “correct” label for each object (e.g., use majority vote) 2. Estimate error rates for workers (using “correct” labels) 3. Estimate “correct” labels (using error rates; weight worker votes according to quality) 4. Go to Step 2 and iterate until convergence • Error rates for ATAMRO447HWJQ: P[G→G]=99.947%, P[G→X]=0.053%, P[X→G]=99.153%, P[X→X]=0.847% • Our friend ATAMRO447HWJQ marked almost all sites as G. Seems like a spammer…
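A minimal sketch of this EM loop for the binary G/X case (class 0 = G, class 1 = X). The variable names, smoothing constant, and fixed iteration count are illustrative choices, not the exact get-another-label implementation:

```python
import numpy as np

def dawid_skene(votes, n_obj, n_classes=2, n_iter=50):
    # votes[worker] = {object_id: vote}
    # Step 1: initialize "correct" labels with a soft majority vote
    T = np.zeros((n_obj, n_classes))
    for w in votes:
        for obj, v in votes[w].items():
            T[obj, v] += 1
    T /= T.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        priors = T.mean(axis=0)
        # Step 2: estimate each worker's confusion matrix
        # (rows: true class, columns: vote), weighted by current soft labels
        conf = {}
        for w in votes:
            c = np.full((n_classes, n_classes), 1e-6)  # tiny smoothing
            for obj, v in votes[w].items():
                c[:, v] += T[obj]
            conf[w] = c / c.sum(axis=1, keepdims=True)
        # Step 3: re-estimate labels, weighting votes by worker quality
        T = np.tile(priors, (n_obj, 1))
        for w in votes:
            for obj, v in votes[w].items():
                T[obj] *= conf[w][:, v]
        T /= T.sum(axis=1, keepdims=True)
        # Step 4: iterate (fixed iteration count stands in for convergence)
    return T, conf

# Toy run: two workers agree, one labels everything G (class 0)
votes = {
    "good1": {0: 0, 1: 1, 2: 1, 3: 0, 4: 1},
    "good2": {0: 0, 1: 1, 2: 1, 3: 0, 4: 1},
    "spam":  {0: 0, 1: 0, 2: 0, 3: 0, 4: 0},
}
T, conf = dawid_skene(votes, n_obj=5)
print(conf["spam"].round(3))  # both rows ~ [1, 0]: always votes G
```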

  34. Challenge: From Confusion Matrices to Quality Scores • The algorithm generates “confusion matrices” for workers • Error rates for ATAMRO447HWJQ: P[X→X]=0.847%, P[X→G]=99.153%, P[G→X]=0.053%, P[G→G]=99.947% • How to check if a worker is a spammer using the confusion matrix? (hint: error rate is not enough)

  35. Challenge 1: Spammers are lazy and smart! • Confusion matrix for spammer: P[X→X]=0%, P[X→G]=100%, P[G→X]=0%, P[G→G]=100% • Confusion matrix for good worker: P[X→X]=80%, P[X→G]=20%, P[G→X]=20%, P[G→G]=80% • Spammers figure out how to fly under the radar… • In reality, we have 85% G sites and 15% X sites • Error rate of spammer = 0% · 85% + 100% · 15% = 15% • Error rate of good worker = 20% · 85% + 20% · 15% = 20% • False negatives: spam workers pass as legitimate

  36. Challenge 2: Humans are biased! • Error rates for CEO of AdSafe: • P[G→G]=20.0%, P[G→P]=80.0%, P[G→R]=0.0%, P[G→X]=0.0% • P[P→G]=0.0%, P[P→P]=0.0%, P[P→R]=100.0%, P[P→X]=0.0% • P[R→G]=0.0%, P[R→P]=0.0%, P[R→R]=100.0%, P[R→X]=0.0% • P[X→G]=0.0%, P[X→P]=0.0%, P[X→R]=0.0%, P[X→X]=100.0% • In reality, we have 85% G sites, 5% P sites, 5% R sites, 5% X sites • Error rate of spammer (all votes G) = 0% · 85% + 100% · 15% = 15% • Error rate of biased worker = 80% · 85% + 100% · 5% = 73% • False positives: legitimate workers appear to be spammers

  37. Solution: Reverse errors first, compute error rate afterwards • Error rates for biased worker: • P[G→G]=20.0%, P[G→P]=80.0%, P[G→R]=0.0%, P[G→X]=0.0% • P[P→G]=0.0%, P[P→P]=0.0%, P[P→R]=100.0%, P[P→X]=0.0% • P[R→G]=0.0%, P[R→P]=0.0%, P[R→R]=100.0%, P[R→X]=0.0% • P[X→G]=0.0%, P[X→P]=0.0%, P[X→R]=0.0%, P[X→X]=100.0% • When the biased worker says G, it is 100% G • When the biased worker says P, it is 100% G • When the biased worker says R, it is 50% P, 50% R • When the biased worker says X, it is 100% X • Small ambiguity for “R-rated” votes, but other than that, fine!

  38. Solution: Reverse errors first, compute error rate afterwards • Error rates for spammer ATAMRO447HWJQ: • P[G→G]=100.0%, P[G→P]=0.0%, P[G→R]=0.0%, P[G→X]=0.0% • P[P→G]=100.0%, P[P→P]=0.0%, P[P→R]=0.0%, P[P→X]=0.0% • P[R→G]=100.0%, P[R→P]=0.0%, P[R→R]=0.0%, P[R→X]=0.0% • P[X→G]=100.0%, P[X→P]=0.0%, P[X→R]=0.0%, P[X→X]=0.0% • When the spammer says G, it is 25% G, 25% P, 25% R, 25% X • Likewise for P, R, and X votes [note: assumes equal priors] • The results are highly ambiguous. No information provided!
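A minimal sketch of the error-reversal step: applying Bayes’ rule to a single vote, using the biased worker’s confusion matrix and the 85/5/5/5 priors from slide 36 (matrix layout, rows = true class and columns = vote, follows the P[i→j] notation above):

```python
import numpy as np

def soft_label(confusion, priors, vote):
    """Pr{true class | worker voted `vote`} for each class."""
    likelihood = confusion[:, vote]   # Pr{vote | true class}
    posterior = priors * likelihood
    return posterior / posterior.sum()

classes = ["G", "P", "R", "X"]
priors = np.array([0.85, 0.05, 0.05, 0.05])
biased = np.array([[0.2, 0.8, 0.0, 0.0],   # true G
                   [0.0, 0.0, 1.0, 0.0],   # true P
                   [0.0, 0.0, 1.0, 0.0],   # true R
                   [0.0, 0.0, 0.0, 1.0]])  # true X
for v, name in enumerate(classes):
    print(f"vote {name}:", soft_label(biased, priors, v).round(2))
# vote G -> 100% G; vote P -> 100% G; vote R -> 50% P, 50% R; vote X -> 100% X
```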

  39. Computing Quality Scores • Cost of “soft” label ⟨p1, p2, …, pL⟩ with cost cij for labeling as j an object of class i: Cost(⟨p1, …, pL⟩) = Σi Σj pi · pj · cij • “Soft” labels with probability mass in a single class are good • “Soft” labels with probability mass spread across classes are bad • Cost(spammer): replace pi with Priori (the class priors) • Quality(worker) = 1 – Cost(worker) / Cost(spammer)
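A minimal sketch of the quality score under these definitions, assuming uniform 0/1 misclassification costs (the method allows arbitrary cij):

```python
import numpy as np

def exp_cost(p, cost):
    """Expected cost sum_i sum_j p_i * p_j * c_ij of a soft label p."""
    return p @ cost @ p

def quality_score(worker_soft_labels, priors, cost):
    worker_cost = np.mean([exp_cost(p, cost) for p in worker_soft_labels])
    spammer_cost = exp_cost(priors, cost)  # a spammer's soft label = priors
    return 1 - worker_cost / spammer_cost

cost = 1 - np.eye(4)                        # 0/1 costs over G, P, R, X
priors = np.array([0.85, 0.05, 0.05, 0.05])
perfect = [np.array([1.0, 0.0, 0.0, 0.0])]  # all mass in one class: cost 0
print(quality_score(perfect, priors, cost))   # 1.0 -> perfect worker
print(quality_score([priors], priors, cost))  # 0.0 -> no better than a spammer
```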

  40. Experimental Results • 500 web pages in G, P, R, X • 100 workers per page (just to evaluate the effect of more labels) • 339 workers • Lots of noise! 95% accuracy with majority vote (only!) • Dropped all workers with quality score 50% or below • Dropping by error rate: only 1% of labels dropped • Dropping by quality score: 30% of labels dropped, accuracy 99.8% • Note the massive amount of redundancy and very conservative spam rejection

  41. Too much theory? • Open source implementation available at: http://code.google.com/p/get-another-label/ • Input: labels from Mechanical Turk; cost of incorrect labelings (e.g., X→G costlier than G→X) • Output: corrected labels; worker error rates; ranking of workers according to their quality • Beta version, more improvements to come! • Suggestions and collaborations welcome!

  42. Workers reacting to quality scores Score-based feedback leads to strange interactions: The angry, has-been-burnt-too-many-times worker: • “F*** YOU! I am doing everything correctly and you know it! Stop trying to reject me with your stupid ‘scores’!” The overachiever: • “What am I doing wrong?? My score is 92% and I want to have 100%”

  43. An unexpected connection at the NAS “Frontiers of Science” conference • “Your spammers behave like my mice!”
