Using Conversational Word Bursts in Spoken Term Detection Justin Chiu Language Technologies Institute Presented at University of Cambridge, September 6, 2013
Introduction • Spoken Term Detection (STD): detecting word or phrase targets in conversational speech • Queries: Text • Target: Audio • The BABEL program: rapid development of keyword search for novel languages with limited resources • This research: using (ideally universal) properties of conversation to enhance performance
Current Approach • ASR produces candidates (as a confusion network) • Search on the ASR result identifies targets • Single-word query: posterior probability of the query word in the confusion network • Multi-word query: product of the posterior probabilities of each word in the confusion network • The detection result (YES/NO) is decided by comparing the search-result probability to a threshold
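The search step above can be sketched as follows. This is a minimal, hypothetical illustration, assuming the confusion network is a list of time slots, each a dict mapping candidate words to posterior probabilities; the function names and the threshold value are not from the talk.

```python
# Hypothetical sketch of STD search over a confusion network.
# A confusion network is modeled as a list of slots: dicts of word -> posterior.

def single_word_score(confusion_network, query):
    """Best posterior for a single-word query across all slots."""
    return max((slot.get(query, 0.0) for slot in confusion_network), default=0.0)

def multi_word_score(confusion_network, words):
    """Product of per-word posteriors over consecutive slots; best placement wins."""
    best = 0.0
    for start in range(len(confusion_network) - len(words) + 1):
        score = 1.0
        for offset, word in enumerate(words):
            score *= confusion_network[start + offset].get(word, 0.0)
        best = max(best, score)
    return best

def detect(confusion_network, query, threshold=0.5):
    """YES/NO decision: compare the search-result probability to a threshold."""
    words = query.split()
    score = (single_word_score(confusion_network, words[0]) if len(words) == 1
             else multi_word_score(confusion_network, words))
    return ("YES" if score >= threshold else "NO", score)
```
Because the multi-word score is a product of posteriors, any low-confidence word in the query drags the whole score down, which is one reason the method is sensitive to ASR quality.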
Challenge for Current Approach • Sensitive to ASR performance • High-performance ASR requires extensive training resources; with limited resources, accuracy is low • In the BABEL LimitedLP condition, only 10 hours of training data are provided
Intuition • Can we use other sources of information to enhance STD performance even with poor ASR results? • We specifically focus on conversational structure, since we believe it is language-independent
Word Burst • Word Burst refers to a phenomenon in conversational speech in which particular content words tend to occur in close proximity to each other, as a byproduct of some topic under discussion • Content word: a word that does not occur with high frequency
Word Burst Occurrence pattern for the Tagalog word “magkano”
Word Burst Occurrence pattern for the Tagalog word “magkano” No Word Burst
Word Burst rescoring on the Confusion Network • Give a bonus to the posterior probability of a content word that is in a Word Burst • Penalize the posterior probability of a content word that is not in a Word Burst • Goal: improve the quality of the confusion network for better search performance
Word Burst Rescoring Parameters • Window size: the time region used to decide whether a Word Burst occurs • Stop word %: the top X% of words in the training data, assumed not to be content words • Penalty for non-burst words: the penalty applied to words that do not appear in a Word Burst • Trained on the language pack (development) data
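The rescoring scheme with these parameters can be sketched as below. This is a hypothetical illustration, not the talk's implementation: detections are assumed to be (time, word, posterior) tuples, and the concrete bonus/penalty/window values are placeholders for the tuned parameters described above.

```python
# Hypothetical sketch of Word Burst rescoring on detection candidates.
# window_size, the stop-word set, and the penalty correspond to the tuned
# parameters named in the slides; the numeric defaults are illustrative.

def rescore_with_bursts(detections, window_size=10.0, stop_words=frozenset(),
                        bonus=1.2, penalty=0.8):
    """Boost content words that recur within the time window (a Word Burst);
    penalize isolated content words; leave stop words unchanged."""
    rescored = []
    for i, (time, word, posterior) in enumerate(detections):
        if word in stop_words:
            rescored.append((time, word, posterior))
            continue
        # A word is in a burst if the same word occurs elsewhere in the window.
        in_burst = any(j != i and w == word and abs(t - time) <= window_size
                       for j, (t, w, _) in enumerate(detections))
        factor = bonus if in_burst else penalty
        rescored.append((time, word, min(1.0, posterior * factor)))
    return rescored
```
Note the window is applied in both temporal directions, which is one of the differences from cache/trigger language models discussed later in the talk.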
Datasets • 4 languages, 2 training-data sizes (80 hours, 10 hours) • An additional 10 hours of data in each language for 5-fold cross-validation
ATWV • Actual Term Weighted Value • In BABEL, the cost/value ratio C/V is 0.1, and the prior term probability is 10⁻⁴ • A correct detection is much more valuable than avoiding a false alarm
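For reference, the standard NIST definition of ATWV (which, to my understanding, is what BABEL uses) averages a weighted miss/false-alarm cost over the query set $Q$:

$$\mathrm{ATWV} = 1 - \frac{1}{|Q|}\sum_{q \in Q}\Big(P_{\text{miss}}(q) + \beta \, P_{\text{FA}}(q)\Big), \qquad \beta = \frac{C}{V}\left(P_{\text{term}}^{-1} - 1\right)$$

With $C/V = 0.1$ and $P_{\text{term}} = 10^{-4}$, $\beta = 0.1 \times (10^{4} - 1) = 999.9$. Even so, because $P_{\text{FA}}$ is normalized by the very large amount of non-target speech, a single miss typically costs far more ATWV than a single false alarm, which is the point made above.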
Comparison between 10/80 hours of training data • In the condition with limited data, the introduction of additional information can compensate for the lack of training data
Tuned LimitedLP (10h) performance • Rescoring significantly reduces the false alarms in the detection results, indicating that penalizing isolated content words is beneficial to STD performance
Best Parameters for LimitedLP(10h) • Apart from Turkish, parameter values are similar across languages. • Words do not repeat in the same form in Turkish, an agglutinative language.
Analysis/Discussion • The impact of Word Burst on WER? • How often does a Word Burst occur? • Issues in Turkish/Vietnamese? • Difference between Word Burst and cache-based/trigger-based language models?
WER • Word Burst does not provide a significant improvement in WER • Rather, the rescoring helps a candidate's probability cross the detection threshold
How often do Word Bursts occur? • Query/content word burst percentage (10 sec) • More than 35% of content/query words occur as part of a Word Burst, indicating its generality • Query and content word burst percentages are similar, except for Turkish
Issues in Turkish/Vietnamese • Turkish words re-occur in different forms, so Word Bursts of the same surface form cannot be counted on • A morphological normalization tool might recover Word Bursts for the same underlying word • Vietnamese performance is in the same range as Cantonese • Some of the evaluation queries are syllables, not words, for which a burst does not seem reasonable
How Word Burst differs from Cache/Trigger Language Models • It considers not only the top-1 hypothesis but the entire confusion set • It examines both past and future; cache/trigger-based language models focus on the past • It uses time-based information instead of token-based information
Conclusion • Our experiments indicate that Word Burst is a language-independent property of conversational speech • Information from language or conversational structure (such as Word Burst) can be used to improve performance on the STD task, particularly when language resources are otherwise limited • It is potentially useful in other applications
Special Thanks Alex Rudnicky Alan Black Yajie Miao Yuchen Zhang Florian Metze