
Error Detection and Correction in SDS



  1. Error Detection and Correction in SDS Julia Hirschberg LSA 353

  2. Today • Avoiding errors • Detecting errors • From the user side: what cues does the user provide to indicate an error? • From the system side: how likely is it the system made an error? • Dealing with Errors: what can the system do when it thinks an error has occurred? • Identifying ‘problem’ dialogues

  3. Avoiding misunderstandings • The problem • By imitating human performance • Timing and grounding (Clark ’03) • Confirmation strategies • Clarification and repair subdialogues

  4. Learning from Human Behavior: Features in repetition corrections (KTH) [Bar chart: percentage of all repetitions (0-50%) exhibiting each feature (more clearly articulated, shifting of focus, increased loudness), adults vs. children]

  5. Learning from Human Behavior (Krahmer et al ’01) • Learning from human behavior • ‘go on’ and ‘go back’ signals in grounding situations (implicit/explicit verification) • Positive: short turns, unmarked word order, confirmation, answers, no corrections or repetitions, new info • Negative: long turns, marked word order, disconfirmation, no answer, corrections, repetitions, no new info

  6. Hypotheses supported but… • Can these cues be identified automatically? • How might they affect the design of SDS?

  7. Systems Have Trouble Knowing When They’ve Made a Mistake • Hard for humans to correct system misconceptions (Krahmer et al ’99):
User: I want to go to Boston.
System: What day do you want to go to Baltimore?
• Easier: answering explicit requests for confirmation or responding to ASR rejections:
System: Did you say you want to go to Baltimore?
System: I'm sorry. I didn't understand you. Could you please repeat your utterance?

  8. But constant confirmation or over-cautious rejection lengthens dialogue and decreases user satisfaction

  9. …And Systems Have Trouble Recognizing User Corrections • Probability of recognition failures increases after a misrecognition (Levow ‘98) • Corrections of system errors are often hyperarticulated (louder, slower, more internal pauses, exaggerated pronunciation) → more ASR error (Wade et al ‘92, Oviatt et al ‘96, Swerts & Ostendorf ‘97, Levow ‘98, Bell & Gustafson ‘99)

  10. Can Prosodic Information Help Systems Perform Better? • If errors occur where speaker turns are prosodically ‘marked’…. • Can we recognize turns that will be misrecognized by examining their prosody? • Can we modify our dialogue and recognition strategies to handle corrections more appropriately?

  11. Approach • Collect corpus from interactive voice response system • Identify speaker ‘turns’ that are incorrectly recognized (misrecognitions), where speakers first become aware of an error (aware sites), and that correct misrecognitions (corrections) • Identify prosodic features of turns in each category and compare to other turns • Use Machine Learning techniques to train a classifier to make these distinctions automatically

  12. Turn Types
TOOT: Hi. This is AT&T Amtrak Schedule System. This is TOOT. How may I help you?
User: Hello. I would like trains from Philadelphia to New York leaving on Sunday at ten thirty in the evening. [misrecognition]
TOOT: Which city do you want to go to?
User: New York. [aware site, correction]

  13. TOOT Dialogues • Collected to study effects of differences in dialogue strategy on user performance and satisfaction (Litman & Pan ‘99) • type of initiative (system, user, mixed) • type of confirmation (explicit, implicit, none) • adaptability condition • Subjects • 39 summer students • 16/23 (F/M) • 20/19 (native speaker/non)

  14. Platform: combined over-the-phone ASR and TTS (Kamm et al ‘97) with web access to train information • Task: find train information for 4 scenarios • Corpus for current study: • 2328 speaker turns • 52 dialogues • Misrecognitions • Overall word accuracy: 61% • Overall concept accuracy (CA): 71% • “I want to go to Boston from Philadelphia” (2 domain concepts) recognized as “I want to go to Boston” (one concept) = CA of 50%
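
As a concrete reading of the CA arithmetic above, here is a minimal Python sketch; the concept encoding (origin/destination pairs) is invented for illustration, not taken from the corpus:

```python
# Minimal sketch of the concept-accuracy (CA) metric described above:
# the fraction of domain concepts in the user's utterance that survive
# recognition. The concept representation here is illustrative only.

def concept_accuracy(reference_concepts, recognized_concepts):
    """Fraction of reference concepts preserved in the ASR output."""
    if not reference_concepts:
        return 1.0
    preserved = sum(1 for c in reference_concepts if c in recognized_concepts)
    return preserved / len(reference_concepts)

# "I want to go to Boston from Philadelphia" -> 2 concepts;
# recognized as "I want to go to Boston" -> 1 concept preserved = 50%.
reference = {("destination", "Boston"), ("origin", "Philadelphia")}
recognized = {("destination", "Boston")}
print(concept_accuracy(reference, recognized))  # 0.5
```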

  15. A Successful Dialogue
S: Hi, this is AT&T Amtrak schedule system. This is TOOT. How may I help you?
U: I would like a train to New York City from Philadelphia on Sunday at ten thirty P.M.
S: I heard you say “go to New York City from Philadelphia on Sunday approximately at ten thirty p.m.” Do you want me to find the trains from Philadelphia to New York City on Sunday approximately at ten thirty p.m. now?
U: Yes.
S: I am going to get the train schedule for you....

  16. Are Misrecognitions, Aware Turns, Corrections Measurably Different from Other Turns? • For each type of turn: • For each speaker, for each prosodic feature, calculate mean values for e.g. all correctly recognized speaker turns and for all incorrectly recognized turns • Perform paired t-tests on these speaker pairs of means (e.g., for each speaker, pairing mean values for correctly and incorrectly recognized turns)
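
A minimal sketch of that per-speaker paired t-test using scipy; the per-speaker means below are invented toy numbers, one pair per speaker for a single prosodic feature:

```python
# Paired t-test over per-speaker means, as described above: for each
# speaker, pair the mean feature value over correctly recognized turns
# with the mean over misrecognized turns. Values here are toy data.

from scipy import stats

# Mean f0 maximum (Hz) per speaker: correct vs. misrecognized turns.
correct_means = [212.0, 180.5, 199.3, 240.1, 175.8]
misrec_means  = [231.4, 195.2, 210.7, 262.3, 190.1]

t, p = stats.ttest_rel(correct_means, misrec_means)
print(f"paired t = {t:.2f}, p = {p:.4f}")
```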

  17. How: Prosodic Features Examined per Turn • Raw prosodic/acoustic features • f0 maximum and mean (pitch excursion/range) • rms maximum and mean (amplitude) • total duration • duration of preceding silence • amount of silence within turn • speaking rate (estimated from syllables of recognized string per second) • Normalized versions of each feature (compared to first turn in task, to previous turn in task, Z scores)
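
A hedged sketch of per-turn extraction for these features, assuming a pitch tracker has already produced frame-level f0 (Hz, 0 for unvoiced frames) and RMS values at a fixed frame rate; the helper and its names are illustrative, not the study's actual code:

```python
import numpy as np

def turn_features(f0, rms, frame_s, preceding_silence_s, n_syllables):
    """Raw per-turn prosodic features; assumes at least one voiced frame."""
    voiced = f0[f0 > 0]
    dur = len(f0) * frame_s
    return {
        "f0_max": float(voiced.max()),
        "f0_mean": float(voiced.mean()),
        "rms_max": float(rms.max()),
        "rms_mean": float(rms.mean()),
        "duration": dur,                          # total duration (s)
        "prec_pause": preceding_silence_s,        # silence before the turn
        "pct_silence": float((f0 == 0).mean()),   # crude internal-silence proxy
        "tempo": n_syllables / dur,               # syllables per second
    }

def z_normalize(value, speaker_mean, speaker_std):
    """Z-score a raw feature against a speaker's own distribution."""
    return (value - speaker_mean) / speaker_std
```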

  18. Distinguishing Correct Recognitions from Misrecognitions (NAACL ‘00) • Misrecognitions differ prosodically from correct recognitions in • F0 maximum (higher) • RMS maximum (louder) • turn duration (longer) • preceding pause (longer) • speaking rate (slower) • Effect holds up across speakers and even when hyperarticulated turns are excluded

  19. WER-Based Results Misrecognitions are higher in pitch, louder, longer, and show longer preceding pauses and less internal silence

  20. Predicting Turn Types Automatically • Ripper (Cohen ‘96) automatically induces rule sets for predicting turn types • greedy search guided by measure of information gain • input: vectors of feature values • output: ordered rules for predicting dependent variable and (X-validated) scores for each rule set • Independent variables: • all prosodic features, raw and normalized • experimental conditions (adaptability of system, initiative type, confirmation style, subject, task) • gender, native/non-native status • ASR recognized string, grammar, and acoustic confidence score
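
Ripper itself is not in scikit-learn, so the sketch below substitutes a decision tree as the rule inducer, trained on synthetic data whose feature names are borrowed from the slide; it illustrates the train-then-inspect-rules workflow, not the paper's actual setup:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(-3, 1.5, n),   # ASR confidence score
    rng.gamma(2, 1, n),       # turn duration (s)
    rng.normal(1, 0.4, n),    # tempo (syllables/s)
])
# Toy labels loosely tied to low confidence and long duration.
y = ((X[:, 0] < -3.5) | (X[:, 1] > 3)).astype(int)  # 1 = misrecognized

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(clf, feature_names=["conf", "dur", "tempo"]))
print("x-val accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```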

  21. ML Results: WER-defined Misrecognition

  22. Best Rule-Set for Predicting WER Using prosody, ASR conf, ASR string, ASR grammar:
if (conf <= -2.85) ∧ (duration >= 1.27) then F
if (conf <= -4.34) then F
if (tempo <= 0.81) then F
if (conf <= -4.09) then F
if (conf <= -2.46) ∧ (str contains “help”) then F
if (conf <= -2.47) ∧ (ppau >= 0.77) ∧ (tempo <= 0.25) then F
if (str contains “nope”) then F
if (dur >= 1.71) ∧ (tempo <= 1.76) then F
else T
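
Transcribed as a first-match-wins Python function, with thresholds copied from the rule set above (names follow the slide: conf = ASR confidence, ppau = preceding pause); a reading aid, not a deployable model:

```python
def predict_recognized(conf, duration, tempo, ppau, text):
    """Return False if the turn is predicted to be misrecognized."""
    if conf <= -2.85 and duration >= 1.27: return False
    if conf <= -4.34: return False
    if tempo <= 0.81: return False
    if conf <= -4.09: return False
    if conf <= -2.46 and "help" in text: return False
    if conf <= -2.47 and ppau >= 0.77 and tempo <= 0.25: return False
    if "nope" in text: return False
    if duration >= 1.71 and tempo <= 1.76: return False
    return True

print(predict_recognized(-4.5, 1.0, 1.2, 0.3, "to boston"))  # False
```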

  23. Error Handling Strategies • If systems can recognize their lack of recognition, how should they inform the user that they don’t understand (Goldberg et al ’03)? • System rephrasing vs. repetitions vs. statement of not understanding • Apologies • What behaviors might these produce? • Hyperarticulation • User frustration • User repetition vs. rephrasing

  24. What lessons do we learn? • When users are frustrated they are generally harder to recognize accurately • When users are increasingly misrecognized they tend to be misrecognized more often and become increasingly frustrated • Apologies combined with rephrasing of system prompts tend to decrease frustration and improve WER: Don’t just repeat! • Users are better recognized when they rephrase their input

  25. How does an SDS Recognize a Correction? (ICSLP ’00)
TOOT: Hi. This is AT&T Amtrak Schedule System. This is TOOT. How may I help you?
User: Hello. I would like trains from Philadelphia to New York leaving on Sunday at ten thirty in the evening.
TOOT: Which city do you want to go to?
User: New York. [correction]

  26. Serious Problem for Spoken Dialogue Systems • 29% of turns in our corpus are corrections • 52% of corrections are hyperarticulated, vs. only 12% of other turns • Corrections are misrecognized at least twice as often as non-corrections (60% vs. 31%) • But corrections are no more likely to be rejected than non-corrections (9% vs. 8%) • Are corrections also measurably distinct from non-corrections?

  27. Prosodic Indicators of Corrections • Corrections differ from other turns prosodically: longer, louder, higher in pitch excursion, longer preceding pause, less internal silence • ML results: • Baseline: 30% error • norm’d prosody + non-prosody: 18.45% +/- 0.78% • automatic: 21.48% +/- 0.68%

  29. ML Rules for Correction Prediction (baseline: 30% error, predict not-correction; norm’d prosody + non-prosody: 18.45% +/- 0.78%; automatic: 21.48% +/- 0.68%)
• TRUE :- gram=universal, f0max >= 0.96, dur >= 6.55
• TRUE :- gram=universal, zeros >= 0.57, asr <= -2.95
• TRUE :- gram=universal, f0max <= 1.98, dur <= 1.10, tempo >= 1.21, zeros >= 0.71
• TRUE :- dur >= 0.76, asr <= -2.97, strat=UsrNoConf
• TRUE :- dur >= 2.28, ppau <= 0.86
• TRUE :- rmsav >= 1.11, strat=MixedImplicit, gram=cityname, f0max >= 0.70
• default FALSE

  30. Corrections in Context • What about their form and content? • How do system behaviors affect the corrections users produce? • What sort of corrections are most, least effective? • When users correct the same mistake more than once, do they vary their strategy in productive ways?

  31. User Correction Behavior • Correction classes: • ‘omits’ and ‘repetitions’ lead to fewer misrecognitions than ‘adds’ and ‘paraphrases’ • Turns that correct rejections are more likely to be repetitions, while turns correcting misrecognitions are more likely to be omits
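
One plausible way to operationalize these correction classes is bag-of-words overlap with the turn being corrected; the study's exact criteria may differ, so treat this as a sketch:

```python
def correction_class(original_turn, correction_turn):
    """Classify a correction by token overlap with the original turn."""
    orig = set(original_turn.lower().split())
    corr = set(correction_turn.lower().split())
    if corr == orig:
        return "repetition"
    if corr < orig:   # strict subset: material was dropped
        return "omit"
    if corr > orig:   # strict superset: material was added
        return "add"
    return "paraphrase"

print(correction_class("trains to new york from philadelphia", "new york"))  # omit
```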

  32. Type of correction sensitive to strategy • People much more likely to repeat their misrecognized utterance in a system-initiative environment • People much more likely to correct by omitting information if there is no system confirmation than if there is explicit confirmation • People omit information more in MixedImplicit and UserNoConfirm conditions • “Restarts” unlikely to be recognized (77% misrecognized) and skewed in distribution: • 31% of corrections are “restarts” in MI and UNC

  33. None for SE (SystemExplicit), where initial turns are well recognized • It doesn’t pay to start over!

  34. Recognizing `Problematic’ Dialogues • Hastie et al, “What’s the Trouble?” ACL 2002 • How to define a dialogue as problematic? • User satisfaction is low • Task is not completed • How to recognize? • Train on a corpus of recorded dialogues (1242 DARPA Communicator dialogues) • Predict • User Satisfaction • Task Completion (0,1,2)
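
A minimal sketch of the two prediction tasks on synthetic stand-in features (turn count, mean ASR confidence, reprompt count, all invented for illustration): a regressor for user satisfaction and a three-way classifier for task completion:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(1)
n = 300
X = np.column_stack([
    rng.integers(4, 40, n).astype(float),  # number of turns
    rng.normal(-2.5, 1.0, n),              # mean ASR confidence
    rng.poisson(1.5, n).astype(float),     # reprompts / rejections
])
# Toy targets: satisfaction drops with length and reprompts;
# completion labels (0, 1, 2) are random placeholders.
satisfaction = 5.0 - 0.05 * X[:, 0] - 0.4 * X[:, 2] + rng.normal(0, 0.5, n)
completion = rng.integers(0, 3, n)

sat_model = LinearRegression().fit(X, satisfaction)
tc_model = LogisticRegression(max_iter=1000).fit(X, completion)
print("satisfaction coefs:", sat_model.coef_)
print("completion training accuracy:", tc_model.score(X, completion))
```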

  35. User Satisfaction features:

  36. Results

  37. Next Class • J&M 22.5 • Brennan ’96 • Roth ‘05
