
What is happening at NTCIR


Presentation Transcript


  1. What is happening at NTCIR. Noriko Kando, National Institute of Informatics, Japan. http://research.nii.ac.jp/ntcir/ kando@nii.ac.jp. Thanks to Teruko Mitamura, Tetsuya Sakai, Fred Gey, Yohei Seki, Daisuke Ishikawa, Atsushi Fujii, Hidetsugu Nanba, and Terumasa Ehara for preparing slides.

  2. NTCIR: NII Test Collection for Information Retrieval. Research infrastructure for evaluating information access (IA): a series of evaluation workshops designed to enhance research in information-access technologies by providing an infrastructure for large-scale evaluation, comprising datasets, evaluation methodologies, and a forum. The project started in late 1997 and runs once every 18 months. Data sets (test collections): scientific documents, news, patents, and web documents in Chinese, Korean, Japanese, and English. Tasks (research areas): IR (cross-lingual, patent, web, and geographic tasks), QA (monolingual and cross-lingual tasks), summarization, trend information and patent maps, opinion analysis, and text mining. Community-based research activities.

  3. Tasks at Past NTCIRs

  4. NTCIR-8 Tasks (2008.07-2009.06), plus the 3rd International Workshop on Evaluating Information Access (EVIA), refereed. 1. Advanced Cross-Lingual Information Access: QA (CsCtJE->CsCtJ, any type of question); IR for QA (CsCtJE->CsCtJ); GeoTime (E, J), geotemporal information retrieval (new). 2. User-generated content (CGM): opinion analysis on multilingual news and blogs; community QA pilot on Yahoo! Chiebukuro data (new). 3. Focused domain, patents: patent translation, English->Japanese (statistical MT with the world's largest training data of J-E sentence alignments, a summer school, extrinsic evaluation by CLIR, and evaluation of SMT); patent mining, mapping papers to IPC codes (new).

  5. NTCIR-7 & -8 Program Committee: Mark Sanderson, Doug Oard, Atsushi Fujii, Tatsunori Mori, Fred Gey, Noriko Kando (and Ellen Voorhees, Sung Hyun Myaeng, Hsin-Hsi Chen, Tetsuya Sakai).

  6. NTCIR-8 Coordination. NTCIR-8 is coordinated by the NTCIR Project at NII, Japan. The following organizations contribute to NTCIR-8 as task organizers: Academia Sinica; Carnegie Mellon Univ; Chinese Academy of Sciences; Hiroshima City Univ; Hitachi, Ltd.; Hokkai Gakuen Univ; IBM; Microsoft Research Asia; National Institute of Information and Communications Technology; National Institute of Informatics; National Taiwan Univ; National Taiwan Ocean Univ; Oki Electric Co.; Tokyo Institute of Technology; Univ of Tokyo; Toyohashi Univ of Technology; Univ of California, Berkeley; Univ of Tsukuba; Yamanashi Eiwa College; Yokohama National Univ.

  7. NTCIR-8 Active Participants. [GeoTime] Dublin City Univ; Hokkaido Univ; INESC-ID, Portugal; International Institute of Information Technology, Hyderabad; Keio Univ; National Institute for Materials Science; Osaka Kyoiku Univ; Univ of California, Berkeley; Univ of Iowa; Univ of Lisbon; Yokohama City Univ. [MOAT] Beijing Univ of Posts and Telecommunications; Chaoyang Univ of Technology; Chinese Univ of Hong Kong + Tsinghua Univ; City Univ of Hong Kong (2 groups); Hong Kong Polytechnic Univ; KAIST; National Taiwan Univ; NEC Laboratories China; Peking Univ; Pohang Univ of Science and Technology; SICS; Toyohashi Univ of Technology; Univ of Alicante; Univ of Neuchatel; Yuan Ze Univ. [Patent Mining] Hiroshima City Univ; Hitachi, Ltd.; IBM Japan, Ltd.; Institute of Scientific and Technical Information of China; KAIST; National Univ of Singapore; NEC; Shanghai Jiao Tong Univ; Shenyang Institute of Aeronautical Engineering; Toyohashi Univ of Technology; Univ of Applied Sciences, UNIGE. [Patent Translation] Dublin City Univ, CNGL; Hiroshima City Univ; Kyoto Univ; NICT; Pohang Univ of Science and Technology; Tottori Univ; Toyohashi Univ of Technology; Yamanashi Eiwa College. [Community QA] Microsoft Research Asia; National Institute of Informatics; Shirayuri College. [CCLQA] Carnegie Mellon Univ; Dalian Univ of Technology; National Taiwan Ocean Univ; Shenyang Institute of Aeronautical Engineering; Univ of Tokushima; Wuhan Univ. [IR4QA] Carnegie Mellon Univ; Chaoyang Univ of Technology; Dalian Univ of Technology; Dublin City Univ; Inner Mongolia Univ; Queensland Univ of Technology; Shenyang Institute of Aeronautical Engineering; Trinity College Dublin; Univ of California, Berkeley; Wuhan Univ; Wuhan Univ (Computer School); Wuhan Univ of Science and Technology.

  8. Complex Cross-Lingual Question Answering (CCLQA) Task. Different teams can exchange components and create a "dream-team" QA system; small teams that do not possess an entire QA system can still contribute; and the IR and QA communities can collaborate. http://aclia.lti.cs.cmu.edu/ntcir8/

  9. Evaluation Topics: any type of question.

  10. CT/JA-T IR4QA run rankings. [charts: runs ranked by Mean Q and Mean nDCG]
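For readers unfamiliar with the run-ranking measures, here is a minimal sketch of nDCG over graded relevance with linear gains. (Sakai's Q-measure, the other metric on this slide, is likewise a graded, rank-discounted measure, omitted here.) The gain values below are illustrative placeholders, not NTCIR data.

```python
import math

def dcg(gains):
    """Discounted cumulative gain: gains[i] is the graded relevance of the
    document at rank i+1; each gain is discounted by log2(rank + 1)."""
    return sum(g / math.log2(rank + 2) for rank, g in enumerate(gains))

def ndcg(run_gains, judged_gains, cutoff=1000):
    """nDCG = DCG of the run divided by DCG of the ideal ranking."""
    ideal = sorted(judged_gains, reverse=True)
    return dcg(run_gains[:cutoff]) / dcg(ideal[:cutoff])

# Illustrative only: gains 3 (highly relevant) down to 0 (not relevant)
run_gains = [3, 0, 2, 0, 1]          # gains of returned docs, in rank order
judged_gains = [3, 3, 2, 2, 1, 1, 0] # gains of all judged docs for the topic
print(f"nDCG = {ndcg(run_gains, judged_gains):.4f}")
```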

  11. CCLQA Human Evaluation, Preliminary Results: JA-JA. [charts: JA-JA human vs. automatic evaluation]

  12. Effect of Combination: IR4QA + CCLQA, JA-JA Collaboration Track. [chart: F3 scores based on automatic evaluation]

  13. NTCIR-GeoTime: Geotemporal Information Retrieval (new track in NTCIR Workshop 8). Fredric Gey, Ray Larson, and Noriko Kando. Relevance judgment system by Jorge Machado and Hideki Shima; evaluation by Tetsuya Sakai; judgments by U of Iowa, U of Lisbon, U of California, Berkeley, and NII. Search with a specific focus on geography; to distinguish the track from past GIR evaluations, we introduced a temporal component. Asian-language geographic search had not previously been evaluated, even though about 50 percent of the NTCIR-6 cross-language topics had a geographic component (usually a restriction to a particular country).

  14. NTCIR-GeoTime Participants: Japanese runs submitted by eight groups (plus two anonymous pooling supporters).

  15. NTCIR-GeoTime Participants: English runs submitted by six groups.

  16. NTCIR-GeoTime Approaches. BRKLY: baseline approach, probabilistic retrieval + pseudo-relevance feedback. DCU, IITH, XLDB (U of Lisbon): geographic enhancements. KOLIS (Keio U): count the geographic and temporal expressions in the top-ranked documents of an initial search, then re-rank (a sketch of this idea follows below). FORST (Yokohama National U): apply factoid-QA techniques to question decomposition. HU-KB (Hokkaido U) and U of Iowa: hybrid approaches combining a probabilistic model with weighted Boolean query formulation.
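A minimal sketch of the counting-and-re-ranking idea attributed to KOLIS above. The toy gazetteer, the date regex, and the linear score blend are all my assumptions for illustration; a real system would use full gazetteers (the track noted Japanese gazetteers were lacking) and a proper temporal-expression tagger.

```python
import re

# Toy resources, assumptions for this sketch only
GAZETTEER = {"tokyo", "osaka", "baghdad", "vancouver", "lisbon"}
TEMPORAL = re.compile(
    r"\b(\d{4}|january|february|march|april|may|june|july|august|"
    r"september|october|november|december|yesterday|today)\b")

def geo_temporal_count(text):
    """Count geographic and temporal expressions in a document."""
    lowered = text.lower()
    geo = sum(tok in GAZETTEER for tok in re.findall(r"[a-z]+", lowered))
    temporal = len(TEMPORAL.findall(lowered))
    return geo + temporal

def rerank(initial_run, docs, alpha=0.5, depth=100):
    """Blend each top-ranked document's retrieval score with its
    geo/temporal expression count, then sort again."""
    rescored = [(doc_id,
                 (1 - alpha) * score + alpha * geo_temporal_count(docs[doc_id]))
                for doc_id, score in initial_run[:depth]]
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)
```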

  17. NTCIR-GeoTime: English topic difficulty by Average Precision. Per-topic AP, Q, and nDCG averaged over 25 English runs for 25 topics, sorted by topic difficulty (AP ascending). Most difficult English topic (21): "When and where was the 2010 Winter Olympics host city location announced?"
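Since the topics on this and the next slide are ranked by average precision, here is a minimal sketch of per-topic AP; the run and judgments below are illustrative placeholders.

```python
def average_precision(ranked_doc_ids, relevant):
    """AP for one topic: the mean of precision@k at each rank k where a
    relevant document appears, normalized by the number of relevant docs."""
    hits, precision_sum = 0, 0.0
    for k, doc_id in enumerate(ranked_doc_ids, start=1):
        if doc_id in relevant:
            hits += 1
            precision_sum += hits / k
    return precision_sum / len(relevant) if relevant else 0.0

# Illustrative only: relevant docs retrieved at ranks 1 and 4, one missed
print(average_precision(["d3", "d9", "d1", "d7"], relevant={"d3", "d7", "d5"}))
# -> (1/1 + 2/4) / 3 = 0.5
```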

  18. NTCIR-GeoTime: Japanese topic difficulty by Average Precision. Per-topic AP, Q, and nDCG averaged over 34 Japanese runs for 24 topics, sorted by topic difficulty (AP ascending). Most difficult Japanese topic (18): "On what date was a country invaded by the United States in 2002?"

  19. NTCIR-GeoTime: Topic-by-Topic Analysis on Japanese.

  20. NTCIR-GeoTime Challenges. Geographic reference resolution is difficult enough, but temporal expressions ("last Wednesday") are even harder to process. Can indefinite answers ("a few hours") be accepted? Japanese gazetteers are needed, as is an NE-annotated corpus for further refinement.

  21. Multilingual Opinion Analysis Task (MOAT). Example topics: CO2 reduction? The Lehman shock?

  22. Opinion Question List

  23. Effective approaches. The teams that attained good results combined careful feature filtering, machine learning, and rich lexical resources.

  24. Community QA Pilot Task. Rank all posted answers to every question by answer quality (as estimated by the system). Best Answer (BA): selected by the questioner. Good Answers (GA): assessed individually by four assessors (university students). Data: Yahoo! Chiebukuro data version 1.0 (3 million questions) for training, and a test collection of 1,500 questions selected at random. http://research.nii.ac.jp/ntcir/ntcir-ws8/yahoo/index-en.html
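To make the task concrete, here is a deliberately naive baseline for ranking posted answers by estimated quality. The features (question-answer term overlap, answer length) and their weights are arbitrary placeholders of mine, not any participant's method.

```python
def answer_score(question, answer):
    """Toy quality score: answers that share more vocabulary with the
    question, and are reasonably long, rank higher. Weights are arbitrary."""
    q_terms = set(question.lower().split())
    a_terms = set(answer.lower().split())
    overlap = len(q_terms & a_terms) / (len(q_terms) or 1)
    length = min(len(answer.split()) / 100.0, 1.0)  # saturating length prior
    return 0.6 * overlap + 0.4 * length

def rank_answers(question, answers):
    """Return all posted answers sorted by estimated quality,
    which is what the pilot task asks systems to produce."""
    return sorted(answers, key=lambda a: answer_score(question, a), reverse=True)
```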

  25. GA-Hit@1 was not a useful measure. The 5th run from MSRA used the BA information for the test questions directly, so it does not represent practical performance. GA-nDCG and Q behave similarly.

  26. LOVE is HARD according to BA: systems cannot find the askers' Best Answers!

  27. LOVE is EASY according to GA: systems can find many good answers that are not the BA. (BA-based evaluation is not good enough for social questions.) [chart: #questions vs. GA-nG@1]

  28. Patent Mining. Goal: automatic creation of technical trend maps from a set of research papers and patents. Research papers and patents are classified in terms of elemental technologies and their effects.

  29. Evaluation. Subtask 1 (research paper classification), metric: Mean Average Precision (MAP). The k-NN-based approach was superior to the machine-learning approach, and re-ranking of IPC codes was effective (a k-NN sketch follows below). Subtask 2 (technical trend map creation), metrics: recall, precision, and F-measure. Top systems employed CRFs, and the following features were effective: dependency structure, document structure, and domain adaptation.
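A minimal sketch of the k-NN idea the slide reports as effective for Subtask 1: retrieve the patents most similar to a paper, then let their IPC codes vote, weighted by similarity, to produce a ranked code list (which MAP then scores). The TF-IDF representation and scikit-learn usage are my assumptions, not a participant's actual system.

```python
from collections import defaultdict
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def knn_ipc_codes(paper_abstract, patent_texts, patent_ipc_codes, k=30):
    """Rank IPC codes for a research paper: find the k most similar
    patents by TF-IDF cosine similarity, then score each of their IPC
    codes by the summed similarity of the patents carrying it."""
    vec = TfidfVectorizer()
    matrix = vec.fit_transform(patent_texts + [paper_abstract])
    sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    top = sims.argsort()[::-1][:k]
    scores = defaultdict(float)
    for i in top:
        for code in patent_ipc_codes[i]:
            scores[code] += sims[i]
    return sorted(scores, key=scores.get, reverse=True)  # ranked IPC codes
```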

  30. History of Patent IR at NTCIR (JPO = Japan Patent Office; USPTO = US Patent & Trademark Office). NTCIR-3 (2001-2002): technology survey, applying conventional IR problems to patent data; 2 years of JPO patent applications. NTCIR-4 (2003-2004): invalidity search, addressing patent-specific IR problems; 5 years of JPO patent applications. NTCIR-5 (2004-2005): enlarged invalidity search; 10 years of JPO patent applications. NTCIR-6 (2006-2007): added English patents; 10 years of USPTO granted patents. Both document sets were published in 1993-2002.

  31. Extrinsic E-J evaluation: BLEU vs. IR measures (MAP and Recall@N). Recall@100 and Recall@200 are highly correlated with BLEU (R = 0.86).
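The R = 0.86 here is a correlation across systems between the MT metric and the retrieval metric. A minimal sketch of computing such a Pearson correlation; the numbers below are illustrative placeholders, NOT the actual NTCIR-8 scores.

```python
import numpy as np

# Placeholder per-system scores (not real data): each entry is one MT
# system, scored by BLEU and by Recall@100 of CLIR runs built on its output.
bleu      = np.array([0.18, 0.22, 0.25, 0.27, 0.31])
recall100 = np.array([0.40, 0.47, 0.52, 0.55, 0.63])

r = np.corrcoef(bleu, recall100)[0, 1]  # Pearson correlation coefficient
print(f"R = {r:.2f}")
```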

  32. Overall structure

  33. New NTCIR Structure. NTCIR general chairs: Noriko Kando (NII), Eiichiro Sumita (NICT), Tsuneaki Kato (U of Tokyo). NTCIR evaluation chairs: Hideo Joho (U of Tsukuba), Tetsuya Sakai (MSRA). EVIA chairs: Mark Sanderson (RMIT), William Webber (U of Melbourne), + 1. Plus a Task Selection Committee and the EVIA Program Committee. [diagram: grand challenges spanning core challenge tasks and pilot tasks, alongside EVIA's refereed papers on evaluating information access]

  34. Tasks accepted for NTCIR-9 CORE TASKS • Intent (with One-Click Access subtask) • Recognizing Natural Language Inference • Geotemporal information retrieval (GeoTime2) • IR for Spoken Documents PILOT TASKS • Cross-lingual Link Discovery • Evaluation Framework of Interactive Information Access • Patent Machine Translation (PATMT)

  35. INTENT (with 1CLICK) task (CS, J). Organisers: Min Zhang, Yiqun Liu (Tsinghua U), Ruihua Song, Tetsuya Sakai, Youngin Song (MSRA), Makoto Kato (Kyoto U). Subtopic mining: find the different intents behind an ambiguous or underspecified query; for example, "Harry Potter" may mean the books or the movies. Ranking: selectively diversify Web search results (a simple diversification sketch follows below). One-click access (1CLICK): satisfy the information need with the system's first result page (X bytes of text, not a document list), so that no clicks are needed after pressing SEARCH.
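One simple way to diversify over mined subtopics is a greedy, xQuAD-style selection: each step balances plain relevance against covering intents that the already-selected documents miss. This is a published approach I am using for illustration, not necessarily what INTENT participants used, and all three inputs (rel, intent_prob, intent_doc_prob) are assumed to be estimated elsewhere.

```python
from math import prod

def diversify(rel, intent_prob, intent_doc_prob, candidates, k=10, lam=0.5):
    """Greedy xQuAD-style diversification.

    rel[d]               : relevance score of doc d for the query (assumed)
    intent_prob[i]       : P(intent i | query)                    (assumed)
    intent_doc_prob[i][d]: P(doc d | intent i)                    (assumed)
    """
    selected, pool = [], set(candidates)
    while pool and len(selected) < k:
        def gain(d):
            # diversity term: intents covered by d but not by selected docs
            div = sum(intent_prob[i] * docs.get(d, 0.0) *
                      prod(1.0 - docs.get(s, 0.0) for s in selected)
                      for i, docs in intent_doc_prob.items())
            return (1 - lam) * rel[d] + lam * div
        best = max(pool, key=gain)
        selected.append(best)
        pool.remove(best)
    return selected
```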

  36. Recognizing Natural Language Inference (CS, CT, J). Organisers: Hideki Shima, Teruko Mitamura (CMU), Chuan-Jie Lin (NTOU), Cheng-Wei Lee (Academia Sinica). YES/NO task: does text 1 entail text 2 (text 1 ⇒ text 2)? Main task: given text 1 and text 2, choose from (1) forward entailment, (2) backward entailment, (3) equivalent, (4) contradiction, (5) independent (a toy baseline for this five-way decision is sketched below). Extrinsic evaluation via CLIA will be conducted.
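To make the five-way label set concrete, here is a deliberately naive lexical baseline of my own: it decides by token containment and a tiny negation list. Real entailment systems need syntax, semantics, and far richer negation handling; this is only a sketch of the label space.

```python
def rte_label(text1, text2):
    """Toy five-way decision by bag-of-words containment (not a
    participant system): equal sets -> equivalent; one set containing
    the other -> forward/backward entailment; sets differing only by a
    negation word -> contradiction; anything else -> independent."""
    s1, s2 = set(text1.lower().split()), set(text2.lower().split())
    negations = {"not", "no", "never"}
    diff = s1 ^ s2
    if diff and diff <= negations:
        return "contradiction"
    if s1 == s2:
        return "equivalent"
    if s2 <= s1:
        return "forward entailment"   # text 1 => text 2
    if s1 <= s2:
        return "backward entailment"  # text 2 => text 1
    return "independent"
```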

  37. GeoTime2 (E, J, K?). Organisers: Fred Gey, Ray Larson (UCB), Noriko Kando (NII). Second round of ad hoc IR for WHEN and WHERE. Example GeoTime1 topic: How old was Max Schmeling when he died, and where did he die? At GeoTime1, documents containing both the WHEN and the WHERE were treated as relevant; those containing only one of them were treated as partially relevant. Can we do better? Can we create more realistic GeoTime topics?

  38. IR for Spoken Documents (J). Organisers: Tomoyosi Akiba, Seiichi Nakagawa, Kiyoaki Aikawa (Toyohashi U of Technology), Yoshiaki Itoh (Iwate Prefectural U), Tatsuya Kawahara (Kyoto U), Xinhui Hu (NICT), Hiroaki Nanjo (Ryukoku U), Hiromitsu Nishizaki (U of Yamanashi), Tomoko Matsui (Institute of Statistical Mathematics), Yoichi Yamashita (Ritsumeikan U). Handles spontaneous speech data (spoken lectures). Spoken term detection: find the positions of a given query term within the spoken documents (sketched below). Spoken document retrieval: find passages in the spoken-document data that match a given query. Reference speech-recognition results are provided, so non-speech people can easily participate.
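A minimal sketch of spoken term detection over the provided recognition results, assuming (my assumption, not the task's official format) a time-aligned transcript of (word, start_sec, end_sec) tuples.

```python
def spoken_term_detection(query_terms, transcript):
    """Find occurrences of a (possibly multi-word) query in a time-aligned
    ASR transcript, returning (start, end) times of each hit."""
    words = [w for w, _, _ in transcript]
    n = len(query_terms)
    hits = []
    for i in range(len(words) - n + 1):
        if words[i:i + n] == query_terms:
            hits.append((transcript[i][1], transcript[i + n - 1][2]))
    return hits

# Illustrative usage on a tiny fake lecture transcript
transcript = [("information", 0.0, 0.6), ("retrieval", 0.6, 1.2), ("is", 1.2, 1.3)]
print(spoken_term_detection(["information", "retrieval"], transcript))  # [(0.0, 1.2)]
```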

  39. Cross-lingual Link Discovery (E->C, K, J). Organisers: Ling-Xiang Tang, Shlomo Geva, Yue Xu, Darren Huang (Queensland U of Technology), Andrew Trotman (U of Otago). Linking INEX and NTCIR! Given an English Wikipedia page, (1) identify anchors, and (2) for each anchor, provide a list of Chinese/Korean/Japanese documents to be linked. Example anchors and corresponding Wikipedia entries: sun -> 太陽, solar eclipse -> 日食, lunar eclipse -> 月食 (an anchor-detection sketch follows below).
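A minimal sketch of step (1), anchor identification, by greedy longest match against a dictionary mapping English titles to cross-language targets. The dictionary (and its entries below) are assumptions for illustration.

```python
def find_anchors(page_text, title_map, max_len=5):
    """Greedy longest-match anchor detection: scan the English page for
    token spans that appear in title_map (English title -> list of
    C/K/J Wikipedia targets, assumed to be given)."""
    tokens = page_text.split()
    anchors, i = [], 0
    while i < len(tokens):
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            span = " ".join(tokens[i:i + n]).lower()
            if span in title_map:
                anchors.append((span, title_map[span]))
                i += n
                break
        else:
            i += 1
    return anchors

# Illustrative dictionary (assumed): English anchor -> Japanese Wikipedia target
title_map = {"sun": ["太陽"], "solar eclipse": ["日食"], "lunar eclipse": ["月食"]}
print(find_anchors("A solar eclipse occurs when the Sun is obscured", title_map))
```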

  40. Evaluation Framework of Interactive Information Access (J). Organisers: Tsuneaki Kato (U of Tokyo) and Mitsunori Matsushita (Kansai U). Explores the evaluation of interactive and exploratory information access, with shared modules and shared interfaces between modules. Subtasks: interactive complex question answering (visual/textual output) and trend-information summarization (e.g., changes in cabinet support rates).

  41. PATMT (C-E, J-E, E-J). Organisers: Benjamin Tsou, Bin Lu (CUHK), Isao Goto (NICT). C-E MT (new at NTCIR-9), J-E MT, and E-J MT, with manual and automatic evaluation (BLEU, NIST; a BLEU sketch follows below). PATMT@NTCIR-10 will include extrinsic evaluation by CLIR. Lesson from PATMT@NTCIR-7 and -8: the correlation between human ratings and automatic measures (MAP, BLEU) varies, and BLEU agrees with human ratings when multiple reference translations (RTs) are used.
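Echoing the multiple-reference point above, here is a minimal corpus-BLEU sketch, assuming NLTK is installed; the sentences are illustrative placeholders, not PATMT data.

```python
from nltk.translate.bleu_score import corpus_bleu

# Illustrative only: two hypothesis translations, each scored against
# MULTIPLE reference translations, which is the condition under which
# the slide says BLEU tracks human ratings well.
references = [
    [["the", "valve", "is", "closed"], ["the", "valve", "closes"]],
    [["a", "signal", "is", "sent"], ["the", "signal", "is", "transmitted"]],
]
hypotheses = [
    ["the", "valve", "is", "closed"],
    ["a", "signal", "is", "transmitted"],
]
print(f"corpus BLEU = {corpus_bleu(references, hypotheses):.3f}")
```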

  42. TALK OUTLINE • About myself • Looking Back - NTCIR • NTCIR-5,6 CLIR (as a participant) • NTCIR-7,8 ACLIA • NTCIR-8 GeoTime • Looking Forward - NTCIR Grand Challenges • NTCIR-9 Tasks • Summary

  43. TALK SUMMARY • Looking back (1999-2010): more teams are now using straight MT for CLIR, often achieving over 90% of monolingual performance. What are CLIR researchers doing now? CLIA? • Looking forward (2010-): grand challenges [tentative] = NN2S + E2E = No Need To Search + Easy To Explore. Multilinguality will still be important, especially in the system-to-user direction. Please participate in the NTCIR-9 tasks!

  44. NTCIR-9 Important Dates (tentative). 1 Oct 2010: first call for participation. Winter-Autumn 2011: tasks run. November 2011: camera-ready papers due. December 2011: NTCIR-9 at NII, Tokyo. Watch http://research.nii.ac.jp/ntcir/

  45. Thanks Merci Danke schön Grazie Gracias Ta! Tack Köszönöm Kiitos Terima Kasih Khap Khun Ahsante Tak 謝謝 ありがとう http://research.nii.ac.jp/ntcir/ (will move to http://ntcir.nii.ac.jp/)
