
The PASCAL Recognizing Textual Entailment Challenges - RTE-1,2,3


Presentation Transcript


  1. The PASCAL Recognizing Textual Entailment Challenges - RTE-1,2,3 Ido Dagan, Bar-Ilan University, Israel, with …

  2. Recognizing Textual Entailment PASCAL NoE Challenge 2004-5 Ido Dagan, Oren Glickman (Bar-Ilan University, Israel), Bernardo Magnini (ITC-irst, Trento, Italy)

  3. The Second PASCAL Recognising Textual Entailment Challenge Roy Bar-Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, Idan Szpektor Bar-Ilan, CELCT, ITC-irst, Microsoft Research, MITRE

  4. The Third Recognising Textual Entailment Challenge Danilo Giampiccolo (CELCT) and Bernardo Magnini (FBK-ITC) With Ido Dagan (Bar-Ilan) and Bill Dolan (Microsoft Research) Patrick Pantel (USC-ISI), for Resources Pool Hoa Dang and Ellen Voorhees (NIST), for Extended Task

  5. RTE Motivation • Text applications require semantic inference • A common framework for addressing applied inference as a whole is needed, but still missing • Global inference is typically application dependent • Application-independent approaches and resources exist for some semantic sub-problems • Textual entailment may provide such common application-independent semantic framework

  6. Framework Desiderata A framework for modeling a target level of language processing should provide: • Generic module for applications • A common underlying task, unified interface (cf. parsing) • Unified paradigm for investigating sub-phenomena

  7. Outline • The textual entailment task – what and why? • Evaluation dataset & methodology • Participating systems and approaches • Potential for machine learning • Framework for investigating semantics

  8. Natural Language and Meaning [diagram: Language ↔ Meaning, with arrows labeled Variability and Ambiguity]

  9. Variability of Semantic Expression • Example expressions: “The Dow Jones Industrial Average closed up 255” / “Dow ends up” / “Dow gains 255 points” / “Stock market hits a record high” / “Dow climbs 255” • Model variability as relations between text expressions: • Equivalence: text1 ⇔ text2 (paraphrasing) • Entailment: text1 ⇒ text2 – the general case

  10. Typical Application Inference • QA example: Question: Who bought Overture? >> Expected answer form: X bought Overture • Candidate text: Overture’s acquisition by Yahoo • Hypothesized answer: Yahoo bought Overture – the text entails the hypothesized answer • Similar for IE: X buy Y • “Semantic” IR: t: Overture was bought … • Summarization (multi-document) – identify redundant info • MT evaluation (and recent ideas for MT) • Educational applications, …

  11. KRAQ'05 Workshop – KNOWLEDGE and REASONING for ANSWERING QUESTIONS (IJCAI-05) CFP: • Reasoning aspects: information fusion, search criteria expansion models, summarization and intensional answers, reasoning under uncertainty or with incomplete knowledge • Knowledge representation and integration: levels of knowledge involved (e.g. ontologies, domain knowledge), knowledge extraction models and techniques to optimize response accuracy … but similar needs arise for other applications – can entailment provide a common empirical task?

  12. Classical Entailment Definition • Chierchia & McConnell-Ginet (2001): A text t entails a hypothesis h if h is true in every circumstance (possible world) in which t is true • Strict entailment – does not account for the degree of uncertainty that applications typically tolerate

  13. “Almost certain” Entailments t: The technological triumph known as GPS … was incubated in the mind of Ivan Getting. h: Ivan Getting invented the GPS.

  14. Applied Textual Entailment • Directional relation between two text fragments: Text (t) and Hypothesis (h) • Operational (applied) definition: t entails h if, typically, a human reading t would infer that h is most likely true • Human gold standard – as in NLP applications • Assuming common background knowledge – which is indeed expected from applications

  15. Evaluation Dataset

  16. Generic Dataset by Application Use • 7 application settings in RTE-1, 4 in RTE-2/3 • QA • IE • “Semantic” IR • Comparable documents / multi-doc summarization • MT evaluation • Reading comprehension • Paraphrase acquisition • Most data created from actual applications output • ~800 examples in development and test sets • 50-50% YES/NO split

  17. Some Examples
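The example table from this slide does not survive in the transcript. Purely as an illustration, here is how a single t-h pair (reusing the Overture/Yahoo example from slide 10) might be represented in code; the field names and structure are hypothetical, not the official RTE distribution format.

```python
# Illustrative only: one text-hypothesis pair as a plain Python dict.
# Field names are hypothetical; the real RTE data was distributed differently.
example_pair = {
    "id": 1,                      # hypothetical identifier
    "task": "QA",                 # application setting the pair was drawn from
    "text": "Overture's acquisition by Yahoo",
    "hypothesis": "Yahoo bought Overture",
    "entailment": "YES",          # gold-standard human judgment
}
```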

  18. Final Dataset (RTE-2) • Average pairwise inter-judge agreement: 89.2% • Average Kappa 0.78 – substantial agreement • Better than RTE-1 • Removed 18.2% of pairs due to disagreement (3-4 judges) • Disagreement example: • (t) Women are under-represented at all political levels ... (h) Women are poorly represented in parliament. • Additional review removed 25.5% of pairs • too difficult / vague / redundant
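As a back-of-the-envelope check on how the reported agreement and kappa figures relate (assuming chance agreement of 0.5, which follows from the balanced 50-50% YES/NO split):

```latex
\kappa \;=\; \frac{p_o - p_e}{1 - p_e} \;=\; \frac{0.892 - 0.5}{1 - 0.5} \;\approx\; 0.78
```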

  19. Final Dataset (RTE-3) • Each pair judged by three annotators • Pairs on which the annotators disagreed were filtered out. • Average pairwise annotator agreement: 87.8% (Kappa level of 0.75) • Filtered-out pairs: • 19.2% due to disagreement • 9.4% as controversial, too difficult, or too similar to other pairs

  20. Progress from 1 to 3 • More realistic application data: • RTE-1: some partly synthetic examples • RTE-2 & 3: mostly • Input from common benchmarks for the different applications • Output from real systems • Test entailment potential across applications • Text length: • RTE-1 & 2: one-two sentences • RTE-3: 25% full paragraphs, requires discourse modeling/anaphora • Improved data collection and annotation • Revised and expanded guidelines • Most pairs triply annotated, some across organizers’ sites • Provide linguistic pre-processing, RTE Resources Pool • RTE-3 pilot task by NIST: 3-way judgments; explanations

  21. Suggested Perspective RE the Arthur Bernstein competition: “… Competition, even a piano competition, is legitimate … as long as it is just an anecdotal side effect of the musical culture scene, and doesn’t threaten to overtake the center stage” Haaretz (Israeli newspaper), Culture Section, April 1st, 2005

  22. Participating Systems

  23. Participation • Popular challenges, worldwide: • RTE-1 – 17 groups • RTE-2 – 23 groups • RTE-3 – 26 groups • 14 Europe, 12 US • 11 newcomers (~40 groups so far) • 79 dev-set downloads (44 planned, 26 maybe) • 42 test-set downloads • Joint ACL-07/PASCAL workshop (~70 participants)

  24. Methods and Approaches • Estimate similarity match between t and h (coverage of h by t): • Lexical overlap (unigram, N-gram, subsequence) • Lexical substitution (WordNet, statistical) • Lexical-syntactic variations (“paraphrases”) • Syntactic matching/edit-distance/transformations • Semantic role labeling and matching • Global similarity parameters (e.g. negation, modality) • Anaphora resolution • Probabilistic tree-transformations • Cross-pair similarity • Detect mismatch (for non-entailment) • Logical interpretation and inference
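To make the simplest of these approaches concrete, below is a minimal sketch of a unigram lexical-overlap baseline (coverage of h by t). The tokenization and the 0.75 decision threshold are assumptions for illustration, not taken from any participating system.

```python
import re

def tokenize(s: str) -> list[str]:
    """Lowercase word tokenizer; a simplifying assumption for this sketch."""
    return re.findall(r"[a-z0-9']+", s.lower())

def lexical_overlap(text: str, hypothesis: str) -> float:
    """Fraction of hypothesis unigrams that also occur in the text."""
    t_tokens = set(tokenize(text))
    h_tokens = tokenize(hypothesis)
    if not h_tokens:
        return 0.0
    return sum(tok in t_tokens for tok in h_tokens) / len(h_tokens)

# Illustrative decision rule; 0.75 is an arbitrary threshold.
t = "Overture's acquisition by Yahoo was completed last year."
h = "Yahoo bought Overture."
decision = "YES" if lexical_overlap(t, h) >= 0.75 else "NO"
```

On this pair, plain unigram overlap misses the entailment (only "Yahoo" is covered), which is exactly why systems layer lexical substitution, paraphrase rules and syntactic matching on top of simple overlap.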

  25. Dominant approach: Supervised Learning • Features model various aspects of similarity and mismatch • Classifier determines relative weights of information sources • Train on development set and auxiliary t-h corpora [diagram: (t,h) → similarity features (lexical, n-gram, syntactic, semantic, global) → feature vector → classifier → YES/NO]
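Below is a minimal sketch of this feature-vector-plus-classifier setup, using scikit-learn and two toy similarity features purely for illustration; the features, the training pairs and the library choice are assumptions, not any participant's actual system.

```python
from sklearn.linear_model import LogisticRegression

def features(t: str, h: str) -> list[float]:
    """Two toy similarity features: unigram coverage of h by t, and length ratio."""
    t_set = set(t.lower().split())
    h_toks = h.lower().split()
    coverage = sum(tok in t_set for tok in h_toks) / max(len(h_toks), 1)
    length_ratio = len(h_toks) / max(len(t.split()), 1)
    return [coverage, length_ratio]

# Toy labelled pairs; real systems trained on the ~800-pair development set
# plus auxiliary t-h corpora.
train = [
    ("Yahoo bought Overture last year", "Yahoo bought Overture", 1),
    ("Dow gains 255 points", "Dow ends up", 1),
    ("Dow gains 255 points", "The Nasdaq fell sharply", 0),
    ("Yahoo bought Overture", "Overture bought Yahoo last year", 0),
]
X = [features(t, h) for t, h, _ in train]
y = [label for _, _, label in train]
classifier = LogisticRegression().fit(X, y)

prediction = classifier.predict([features("Overture's acquisition by Yahoo",
                                          "Yahoo bought Overture")])
```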

  26. Parse-based Proof Systems (Bar-Haim et al., RTE-3) [diagram: dependency parse trees showing the derivation “It rained when John and Mary left” ⊢ “It rained when Mary left” ⊢ “Mary left”]
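The parse trees on this slide are lost in the transcript, but the derivation they illustrate can be sketched as entailment-preserving rewrites over a toy dependency structure. This is a simplified illustration of the idea only, not the Bar-Haim et al. system.

```python
# A toy dependency node: {"word": ..., "rel": ..., "children": [...]}.
def node(word, rel, children=()):
    return {"word": word, "rel": rel, "children": list(children)}

# t: "It rained when John and Mary left"
tree = node("rained", "root", [
    node("It", "expletive"),
    node("left", "when", [
        node("John", "subj", [node("Mary", "conj")]),
    ]),
])

def drop_first_conjunct(n):
    """Toy rule: 'John and Mary left' entails 'Mary left' (promote the conjunct)."""
    for i, child in enumerate(n["children"]):
        conjs = [c for c in child["children"] if c["rel"] == "conj"]
        if conjs:
            n["children"][i] = node(conjs[0]["word"], child["rel"])
        else:
            drop_first_conjunct(child)
    return n

def promote_clause(n, rel="when"):
    """Toy rule: promote the subordinate clause to be the new root."""
    for child in n["children"]:
        if child["rel"] == rel:
            return node(child["word"], "root", child["children"])
    return n

step1 = drop_first_conjunct(tree)   # roughly "It rained when Mary left"
step2 = promote_clause(step1)       # roughly "Mary left"
```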

  27. Resources • WordNet, Extended WordNet, distributional similarity • Britain → UK • steal → take • DIRT (paraphrase rules) • X file a lawsuit against Y → X accuse Y (world knowledge) • X confirm Y → X approve Y (linguistic knowledge) • FrameNet, PropBank, VerbNet • For semantic role labeling • Entailment pairs corpora • Automatically acquired training • No dedicated resources for entailment yet
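As a sketch of how a DIRT-style rule such as "X file a lawsuit against Y → X accuse Y" can be applied, the snippet below uses flat regular-expression templates rather than the dependency-path templates DIRT actually uses; the pattern syntax and the example sentence are assumptions for illustration.

```python
import re

# Rules taken from the slide; the regex template format is an assumption
# (DIRT rules are really paths in dependency parses, not flat strings).
rules = [
    (r"(?P<X>\w+) filed a lawsuit against (?P<Y>\w+)", r"\g<X> accused \g<Y>"),
    (r"(?P<X>\w+) confirmed (?P<Y>.+)", r"\g<X> approved \g<Y>"),
]

def entailed_variants(text: str) -> set[str]:
    """Generate texts entailed by `text` by applying each matching rule once."""
    variants = set()
    for pattern, template in rules:
        if re.search(pattern, text):
            variants.add(re.sub(pattern, template, text))
    return variants

# Hypothetical example sentence, purely for illustration.
print(entailed_variants("Microsoft filed a lawsuit against Google"))
# expected to include: "Microsoft accused Google"
```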

  28. Accuracy Results – RTE-1

  29. Results (RTE-2) Average: 60% Median: 59%

  30. Results: RTE-3 Two systems above 70% Most systems (65%) scored in the 60-70% range, compared with just 30% of systems at RTE-2

  31. Current Limitations • Simple methods perform quite well, but not best • System reports point at: • Lack of knowledge (syntactic transformation rules, paraphrases, lexical relations, etc.) • Lack of training data • It seems that systems that coped better with these issues performed best: • Hickl et al. - acquisition of large entailment corpora for training • Tatu et al. – large knowledge bases (linguistic and world knowledge)

  32. Impact • High interest in the research community • Papers, conference sessions and areas, PhD theses, funded projects • Special issue - Journal of Natural Language Engineering • ACL-07 tutorial • Initial contribution to specific applications • QA – Harabagiu & Hickl, ACL-06; CLEF-06/07 • RE – Romano et al., EACL-06 • RTE-4 – by NIST, with CELCT • Within TAC, a new semantic evaluation conference (with QA and summarization, subsuming DUC)

  33. New Potentials for Machine Learning

  34. Classical Approach = Interpretation [diagram: Language (by nature) mapped, across variability, onto a stipulated meaning representation (by scholar)] • Logical forms, word senses, semantic roles, named entity types, … – scattered tasks • Feasible/suitable framework for applied semantics?

  35. Textual Entailment = Text Mapping [diagram: Language (by nature) mapped, across variability, directly onto assumed meaning (by humans)]

  36. General Case – Inference [diagram: Language is interpreted into a meaning representation, over which inference operates; textual entailment maps between texts directly] • Entailment mapping is the actual applied goal – and also a touchstone for understanding! • Interpretation becomes a possible means

  37. Machine Learning Perspectives • Issues with interpretation approach: • Hard to agree on target representations • Costly to annotate semantic representations for training • Has it been a barrier? • Language-level entailment mapping refers to texts • Texts are semantic-theory neutral • Amenable to unsupervised/semi-supervised learning • It would be interesting to explore (many do) • language-based representations of meaning, inference knowledge, and ontology, • for which learning and inference methods may be easier to develop. • Artificial intelligence through natural language?

  38. Major Learning Directions • Learning entailment knowledge (!!!) • Learning entailment relations between words/expressions • Integrating with manual resources and knowledge • Inference methods • Principled frameworks for probabilistic inference • Estimate likelihood of deriving hypothesis from text • Fusing information levels • More than bags of features • Relational learning relevant for both • How can we increase ML researchers involvement?

  39. Learning Entailment Knowledge [diagram: entailment graph over buy/v, acquire/v, purchase/n, acquisition/n and own/v, with links derived from WordNet synonyms, distributional similarity and derivational relations, some marked as uncertain] • Entailing “topical” terms from words/texts • E.g. medicine, law, cars, computer security, … • An unsupervised version of text categorization • Learning entailment graph for terms/expressions • Partial knowledge: statistical, lexical resources, Wikipedia, … • Estimate link likelihood in context
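The entailment-graph diagram on this slide is garbled in the transcript; the sketch below reconstructs it loosely as a directed adjacency map over the same terms, with a transitive reachability check. The particular edge set and the code structure are assumptions, not a faithful copy of the slide.

```python
from collections import deque

# Directed entailment edges (term -> terms it entails); a loose
# reconstruction mixing WordNet-synonym-like and derivational links.
entails = {
    "buy/v": {"acquire/v", "own/v"},
    "acquire/v": {"buy/v"},
    "purchase/n": {"acquisition/n", "buy/v"},
    "acquisition/n": {"purchase/n"},
    "own/v": set(),
}

def entails_transitively(src: str, dst: str) -> bool:
    """Breadth-first search: does src reach dst along entailment edges?"""
    seen, queue = {src}, deque([src])
    while queue:
        term = queue.popleft()
        if term == dst:
            return True
        for nxt in entails.get(term, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(entails_transitively("purchase/n", "own/v"))  # True, via buy/v
```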

  40. Meeting the knowledge challenge – by a coordinated effort? • A vast amount of “entailment rules” needed • Speculation: can we have a joint community effort for knowledge acquisition? • Uniform representations • Mostly automatic acquisition (millions of rules) • Human Genome Project analogy • Preliminary: RTE-3 Resources Pool at ACL Wiki (set up by Patrick Pantel)

  41. Textual Entailment ≈ Human Reading Comprehension • From a children’s English learning book (Sela and Greenberg): Reference Text: “…The Bermuda Triangle lies in the Atlantic Ocean, off the coast of Florida. …” Hypothesis (True/False?): The Bermuda Triangle is near the United States ???

  42. Where are we (from RTE-1)?

  43. Cautious Optimism • Textual entailment provides a unified framework for applied semantics • Towards generic inference “engines” for applications • Potential for: • Scalable knowledge acquisition, boosted by (mostly unsupervised) learning • Learning-based inference methods Thank you!

  44. Summary: Textual Entailment as Goal • The essence of our proposal: • Base applied inference on entailment “engines” and KBs • Formulate various semantic problems as entailment tasks • Interpretations and “mapping” methods may compete/complement • Open question: which inferences • can be represented at language level? • require logical or specialized representation and inference? (temporal, spatial, mathematical, …)

  45. Collecting QA Pairs • Motivation: a passage containing the answer slot filler should entail the corresponding answer statement. • E.g. for: Who invented the telephone?, and answer Bell, the text should entail Bell invented the telephone • QA systems were given TREC and CLEF questions. • Hypotheses generated by “plugging” the system answer term into the affirmative form of the question • Texts correspond to the candidate answer passages
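A minimal sketch of the "plugging" step described above: turning a question plus a candidate answer into an affirmative hypothesis. The single who-question pattern is a toy assumption; the actual pair collection handled many more question forms.

```python
import re

def plug_answer(question: str, answer: str) -> str:
    """Toy hypothesis generation for who-questions only, e.g.
    'Who invented the telephone?' + 'Bell' -> 'Bell invented the telephone'."""
    match = re.match(r"Who\s+(.+)\?$", question.strip())
    if not match:
        raise ValueError("only who-questions are handled in this sketch")
    return f"{answer} {match.group(1)}"

hypothesis = plug_answer("Who invented the telephone?", "Bell")
# hypothesis == "Bell invented the telephone"
```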

  46. Collecting IE Pairs • Motivation: a sentence containing a target relation instance should entail an instantiated template of the relation • E.g: X is located in Y • Pairs were generated in several ways • Outputs of IE systems: • for ACE-2004 and MUC-4 relations • Manually: • for ACE-2004 and MUC-4 relations • for additional relations in news domain

  47. Collecting IR Pairs • Motivation: relevant documents should entail a given “propositional” query. • Hypotheses are propositional IR queries, adapted and simplified from TREC and CLEF • e.g. drug legalization benefits → drug legalization has benefits • Texts selected from documents retrieved by different search engines

  48. Collecting SUM (MDS) Pairs • Motivation: identifying redundant statements (particularly in multi-document summaries) • Using web document clusters and system summary • Picking for hypotheses sentences having high lexical overlap with the summary • In final pairs: • Texts are original sentences (usually from summary) • Hypotheses: • Positive pairs: simplify h until entailed by t • Negative pairs: simplify h similarly • In RTE-3: using Pyramid benchmark data
