
Information Retrieval

Information Retrieval. CSE 8337 (Part E) Spring 2009. Some material for these slides obtained from: Modern Information Retrieval by Ricardo Baeza-Yates and Berthier Ribeiro-Neto http://www.sims.berkeley.edu/~hearst/irbook/


Presentation Transcript


  1. Information Retrieval CSE 8337 (Part E) Spring 2009 Some Material for these slides obtained from: Modern Information Retrieval by Ricardo Baeza-Yates and Berthier Ribeiro-Neto http://www.sims.berkeley.edu/~hearst/irbook/ Data Mining Introductory and Advanced Topics by Margaret H. Dunham http://www.engr.smu.edu/~mhd/book Introduction to Information Retrieval by Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schutze http://informationretrieval.org

  2. CSE 8337 Outline • Introduction • Simple Text Processing • Boolean Queries • Web Searching/Crawling • Indexes • Vector Space Model • Matching • Evaluation • Feedback/Expansion

  3. Query Operations Introduction • IR queries as stated by the user may not be precise or effective. • There are many techniques to improve a stated query and then process that query instead.

  4. How can results be improved? • Options for improving results • Local methods • Personalization • Relevance feedback • Pseudo relevance feedback • Query expansion • Local Analysis • Thesauri • Automatic thesaurus generation • Query assist

  5. Relevance Feedback • Relevance feedback: user feedback on relevance of docs in initial set of results • User issues a (short, simple) query • The user marks some results as relevant or non-relevant. • The system computes a better representation of the information need based on feedback. • Relevance feedback can go through one or more iterations. • Idea: it may be difficult to formulate a good query when you don’t know the collection well, so iterate

  6. Relevance Feedback • After initial retrieval results are presented, allow the user to provide feedback on the relevance of one or more of the retrieved documents. • Use this feedback information to reformulate the query. • Produce new results based on reformulated query. • Allows more interactive, multi-pass process.

  7. Relevance feedback • We will use ad hoc retrieval to refer to regular retrieval without relevance feedback. • We now look at four examples of relevance feedback that highlight different aspects.

  8. Similar pages

  9. Relevance Feedback: Example • Image search engine

  10. Results for Initial Query

  11. Relevance Feedback

  12. Results after Relevance Feedback

  13. Initial query/results • Initial query: New space satellite applications
  1. 0.539, 08/13/91, NASA Hasn’t Scrapped Imaging Spectrometer
  2. 0.533, 07/09/91, NASA Scratches Environment Gear From Satellite Plan
  3. 0.528, 04/04/90, Science Panel Backs NASA Satellite Plan, But Urges Launches of Smaller Probes
  4. 0.526, 09/09/91, A NASA Satellite Project Accomplishes Incredible Feat: Staying Within Budget
  5. 0.525, 07/24/90, Scientist Who Exposed Global Warming Proposes Satellites for Climate Research
  6. 0.524, 08/22/90, Report Provides Support for the Critics Of Using Big Satellites to Study Climate
  7. 0.516, 04/13/87, Arianespace Receives Satellite Launch Pact From Telesat Canada
  8. 0.509, 12/02/87, Telecommunications Tale of Two Companies
  • The user then marks three of the results as relevant with “+”.

  14. Expanded query after relevance feedback • 2.074 new • 15.106 space • 30.816 satellite • 5.660 application • 5.991 nasa • 5.196 eos • 4.196 launch • 3.972 aster • 3.516 instrument • 3.446 arianespace • 3.004 bundespost • 2.806 ss • 2.790 rocket • 2.053 scientist • 2.003 broadcast • 1.172 earth • 0.836 oil • 0.646 measure

  15. Results for expanded query
  1. 0.513, 07/09/91, NASA Scratches Environment Gear From Satellite Plan
  2. 0.500, 08/13/91, NASA Hasn’t Scrapped Imaging Spectrometer
  3. 0.493, 08/07/89, When the Pentagon Launches a Secret Satellite, Space Sleuths Do Some Spy Work of Their Own
  4. 0.493, 07/31/89, NASA Uses ‘Warm’ Superconductors For Fast Circuit
  5. 0.492, 12/02/87, Telecommunications Tale of Two Companies
  6. 0.491, 07/09/91, Soviets May Adapt Parts of SS-20 Missile For Commercial Use
  7. 0.490, 07/12/88, Gaping Gap: Pentagon Lags in Race To Match the Soviets In Rocket Launchers
  8. 0.490, 06/14/90, Rescue of Satellite By Space Agency To Cost $90 Million

  16. Relevance Feedback • Use assessments by users as to the relevance of previously returned documents to create new (modify old) queries. • Technique: • Increase weights of terms from relevant documents. • Decrease weight of terms from nonrelevant documents.

  17. Relevance Feedback Architecture • [Architecture diagram] The user’s query string is submitted to the IR system, which searches the document corpus and returns ranked documents (1. Doc1, 2. Doc2, 3. Doc3, ...). • The user marks some of the ranked documents as relevant or non-relevant; this feedback drives query reformulation. • The revised query is re-run against the corpus to produce re-ranked documents (e.g., 1. Doc2, 2. Doc4, 3. Doc5, ...).

  18. Query Reformulation • Revise query to account for feedback: • Query Expansion: Add new terms to query from relevant documents. • Term Reweighting: Increase weight of terms in relevant documents and decrease weight of terms in irrelevant documents. • Several algorithms for query reformulation.

  19. Relevance Feedback in vector spaces • We can modify the query based on relevance feedback and apply standard vector space model. • Use only the docs that were marked. • Relevance feedback can improve recall and precision • Relevance feedback is most useful for increasing recall in situations where recall is important • Users can be expected to review results and to take time to iterate

  20. The Theoretically Best Query • [Figure] Documents plotted in the vector space: x = non-relevant documents, o = relevant documents. • The optimal query is the vector that best separates the relevant documents from the non-relevant ones.

  21. Query Reformulation for Vectors • Change the query vector using vector algebra. • Add the vectors for the relevant documents to the query vector. • Subtract the vectors for the non-relevant docs from the query vector. • This adds both positively and negatively weighted terms to the query, as well as reweighting the initial terms.

  22. Optimal Query • Assume that the set of relevant documents C_r is known. • Then the best query, one that ranks all and only the relevant documents at the top, is:

  q_opt = (1 / |C_r|) Σ_{d_j ∈ C_r} d_j − (1 / (N − |C_r|)) Σ_{d_j ∉ C_r} d_j

  where N is the total number of documents.

  23. Standard Rocchio Method • Since the full set of relevant documents is unknown, use only the known relevant (D_r) and known non-relevant (D_n) sets of documents, and include the initial query q:

  q_m = α q + (β / |D_r|) Σ_{d_j ∈ D_r} d_j − (γ / |D_n|) Σ_{d_j ∈ D_n} d_j

  • α: tunable weight for the initial query. • β: tunable weight for relevant documents. • γ: tunable weight for non-relevant documents.
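The Rocchio update is easy to state directly over term-weight vectors. Below is a minimal NumPy sketch (not from the slides); the function name is our own, and the default weights α = 1.0, β = 0.75, γ = 0.25 are the illustrative values mentioned later in the positive vs. negative feedback slide.

```python
import numpy as np

def rocchio(q, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.25):
    """Standard Rocchio reformulation over term-weight vectors.

    q           -- original query vector (1-D array over the vocabulary)
    relevant    -- list of document vectors judged relevant (D_r)
    nonrelevant -- list of document vectors judged non-relevant (D_n)
    """
    q_m = alpha * np.asarray(q, dtype=float)
    if len(relevant) > 0:                               # centroid of the relevant docs
        q_m = q_m + beta * np.mean(relevant, axis=0)
    if len(nonrelevant) > 0:                            # centroid of the non-relevant docs
        q_m = q_m - gamma * np.mean(nonrelevant, axis=0)
    return np.maximum(q_m, 0.0)   # negative term weights are usually clipped to zero
```

The Ide variants on the following slides differ only in dropping the 1/|D_r| and 1/|D_n| averaging (Ide Regular) or in subtracting only the single highest-ranked non-relevant vector (Ide “Dec Hi”).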

  24. Relevance feedback on initial query • [Figure] The initial query retrieves a mix of x (known non-relevant) and o (known relevant) documents. • After relevance feedback, the revised query moves toward the known relevant documents and away from the known non-relevant ones.

  25. Positive vs. Negative Feedback • Positive feedback is more valuable than negative feedback (so set γ < β; e.g., γ = 0.25, β = 0.75). • Many systems only allow positive feedback (γ = 0). Why?

  26. Ide Regular Method • Since more feedback should perhaps increase the degree of reformulation, do not normalize for the amount of feedback:

  q_m = α q + β Σ_{d_j ∈ D_r} d_j − γ Σ_{d_j ∈ D_n} d_j

  • α: tunable weight for the initial query. • β: tunable weight for relevant documents. • γ: tunable weight for non-relevant documents.

  27. Ide “Dec Hi” Method • Bias towards rejecting just the highest ranked of the non-relevant documents:

  q_m = α q + β Σ_{d_j ∈ D_r} d_j − γ d_s   (where d_s is the highest-ranked non-relevant document)

  • α: tunable weight for the initial query. • β: tunable weight for relevant documents. • γ: tunable weight for the single highest-ranked non-relevant document.

  28. Comparison of Methods • Overall, experimental results indicate no clear preference for any one of the specific methods. • All methods generally improve retrieval performance (recall & precision) with feedback. • Generally just let tunable constants equal 1.

  29. Relevance Feedback: Assumptions • A1: User has sufficient knowledge for initial query. • A2: Relevance prototypes are “well-behaved”. • Term distribution in relevant documents will be similar • Term distribution in non-relevant documents will be different from those in relevant documents • Either: All relevant documents are tightly clustered around a single prototype. • Or: There are different prototypes, but they have significant vocabulary overlap. • Similarities between relevant and irrelevant documents are small

  30. Violation of A1 • User does not have sufficient initial knowledge. • Examples: • Misspellings (Brittany Speers). • Cross-language information retrieval. • Mismatch of searcher’s vocabulary vs. collection vocabulary • Cosmonaut/astronaut

  31. Violation of A2 • There are several relevance prototypes. • Examples: • Burma/Myanmar • Contradictory government policies • Pop stars that worked at Burger King • Often: instances of a general concept • Good editorial content can address problem • Report on contradictory government policies

  32. Relevance Feedback: Problems • Long queries are inefficient for a typical IR engine. • Long response times for the user. • High cost for the retrieval system. • Partial solution: • Only reweight certain prominent terms • Perhaps top 20 by term frequency • Users are often reluctant to provide explicit feedback • It’s often harder to understand why a particular document was retrieved after applying relevance feedback. Why?

  33. Evaluation of relevance feedback strategies • Use q_0 and compute a precision/recall graph • Use q_m and compute a precision/recall graph • Assess on all documents in the collection • Spectacular improvements, but … it’s cheating! • Partly due to known relevant documents ranked higher • Must evaluate with respect to documents not seen by user • Use documents in the residual collection (set of documents minus those assessed relevant) • Measures usually lower than for original query • But a more realistic evaluation • Relative performance can be validly compared • Empirically, one round of relevance feedback is often very useful. Two rounds is sometimes marginally useful.

  34. Evaluation of relevance feedback • Second method – assess only the docs not rated by the user in the first round • Could make relevance feedback look worse than it really is • Can still assess relative performance of algorithms • Most satisfactory – use two collections, each with its own relevance assessments • q_0 and user feedback from the first collection • q_m run on the second collection and measured

  35. Why is Feedback Not Widely Used? • Users sometimes reluctant to provide explicit feedback. • Results in long queries that require more computation to retrieve, and search engines process lots of queries and allow little time for each one. • Makes it harder to understand why a particular document was retrieved.

  36. Evaluation: Caveat • True evaluation of usefulness must compare to other methods taking the same amount of time. • Alternative to relevance feedback: User revises and resubmits query. • Users may prefer revision/resubmission to having to judge relevance of documents. • There is no clear evidence that relevance feedback is the “best use” of the user’s time.

  37. Pseudo relevance feedback • Pseudo-relevance feedback automates the “manual” part of true relevance feedback. • Pseudo-relevance algorithm: • Retrieve a ranked list of hits for the user’s query • Assume that the top k documents are relevant. • Do relevance feedback (e.g., Rocchio) • Works very well on average • But can go horribly wrong for some queries. • Several iterations can cause query drift. • Why?
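A sketch of that loop, assuming the rocchio helper sketched earlier and a hypothetical search(q, k) function that returns the top-k document vectors for a query vector; neither name comes from the slides.

```python
def pseudo_relevance_feedback(q, search, k=10, rounds=1):
    """Blind feedback: pretend the top-k hits are relevant, then apply Rocchio.

    search(q, k) is assumed to return the k highest-ranked document vectors.
    More than one round tends to amplify query drift, so one round is typical.
    """
    for _ in range(rounds):
        top_k = list(search(q, k))                        # assume these are relevant
        q = rocchio(q, relevant=top_k, nonrelevant=[])    # no negative evidence used
    return q
```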

  38. Pseudo-Feedback Results • Found to improve performance on the TREC ad hoc retrieval task. • Works even better if the top documents must also satisfy additional Boolean constraints in order to be used in feedback.

  39. Relevance Feedback on the Web • Some search engines offer a similar/related pages feature (this is a trivial form of relevance feedback) • Google • AltaVista • But some don’t, because it’s hard to explain to the average user: • Yahoo • Excite initially had true relevance feedback, but abandoned it due to lack of use. • α/β/γ ??

  40. Excite Relevance Feedback Spink et al. 2000 • Only about 4% of query sessions from a user used relevance feedback option • Expressed as “More like this” link next to each result • But about 70% of users only looked at first page of results and didn’t pursue things further • So 4% is about 1/8 of people extending search • Relevance feedback improved results about 2/3 of the time

  41. Query Expansion • In relevance feedback, users give additional input (relevant/non-relevant) on documents, which is used to reweight terms in the documents • In query expansion, users give additional input (good/bad search term) on words or phrases

  42. How do we augment the user query? • Manual thesaurus • E.g. MedLine: physician, syn: doc, doctor, MD, medico • Can be query rather than just synonyms • Global Analysis: (static; of all documents in collection) • Automatically derived thesaurus • (co-occurrence statistics) • Refinements based on query log mining • Common on the web • Local Analysis: (dynamic) • Analysis of documents in result set
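As a toy illustration of the manual-thesaurus option on the slide above, the sketch below expands query terms from a hand-built synonym table; the table entries echo the MedLine-style example and are purely illustrative.

```python
# Hand-built synonym table standing in for a manual thesaurus (illustrative only).
THESAURUS = {
    "physician": ["doc", "doctor", "md", "medico"],
}

def expand_query(terms, thesaurus=THESAURUS, max_per_term=3):
    """Append up to max_per_term thesaurus synonyms for each query term."""
    expanded = list(terms)
    for t in terms:
        expanded.extend(thesaurus.get(t.lower(), [])[:max_per_term])
    return expanded

print(expand_query(["physician", "shortage"]))
# -> ['physician', 'shortage', 'doc', 'doctor', 'md']
```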

  43. Local vs. Global Automatic Analysis • Local – Documents retrieved are examined to automatically determine query expansion. No relevance feedback needed. • Global – Thesaurus used to help select terms for expansion.

  44. Automatic Local Analysis • At query time, dynamically determine similar terms based on analysis of top-ranked retrieved documents. • Base correlation analysis on only the “local” set of retrieved documents for a specific query. • Avoids ambiguity by determining similar (correlated) terms only within relevant documents. • “Apple computer” → “Apple computer Powerbook laptop”

  45. Automatic Local Analysis • Expand the query with terms found in local clusters. • D_l – set of documents retrieved for query q. • V_l – set of words used in D_l. • S_l – set of distinct stems in V_l. • f_{si,j} – frequency of stem s_i in document d_j ∈ D_l. • Construct a stem-stem association matrix.

  46. Association Matrix • The association matrix is an n × n matrix over the stems w_1 ... w_n whose entry c_ij is the correlation factor between stems s_i and s_j:

  c_ij = Σ_k f_ik · f_jk

  where f_ik is the frequency of term i in document k.

  47. Normalized Association Matrix • The frequency-based correlation factor favors more frequent terms. • Normalize the association scores:

  s_ij = c_ij / (c_ii + c_jj − c_ij)

  • The normalized score is 1 if two stems have the same frequency in all documents.
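Both matrices fall out of a few lines of NumPy once the local stem-by-document frequency matrix F (F[i, j] = f_{si,j}) has been built; this is a sketch under that assumption, not code from the slides.

```python
import numpy as np

def association_matrices(F):
    """F: |S_l| x |D_l| matrix of stem frequencies over the locally retrieved docs.

    Returns (C, S) where C[i, j] = sum_k F[i, k] * F[j, k] is the raw association
    score and S[i, j] = C[i, j] / (C[i, i] + C[j, j] - C[i, j]) is the normalized
    score (1.0 when two stems have identical frequencies in every document).
    Assumes every stem occurs at least once, so the denominator is non-zero.
    """
    F = np.asarray(F, dtype=float)
    C = F @ F.T                                      # co-occurrence counts per stem pair
    diag = np.diag(C)
    S = C / (diag[:, None] + diag[None, :] - C)      # normalized association scores
    return C, S
```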

  48. Metric Correlation Matrix • Association correlation does not account for the proximity of terms in documents, just co-occurrence frequencies within documents. • Metric correlations account for term proximity:

  c_ij = Σ_{k_u ∈ V_i} Σ_{k_v ∈ V_j} 1 / r(k_u, k_v)

  • V_i: set of all occurrences of term i in any document. • r(k_u, k_v): distance in words between word occurrences k_u and k_v (∞ if k_u and k_v are occurrences in different documents).

  49. Normalized Metric Correlation Matrix • Normalize the scores to account for term frequencies:

  s_ij = c_ij / (|V_i| · |V_j|)
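A small sketch of the metric variant for one pair of stems, assuming per-document position lists for each stem (the data layout and function name are our own):

```python
def metric_correlation(pos_i, pos_j):
    """pos_i, pos_j: dicts mapping doc id -> word positions of stems s_i and s_j.

    The raw score sums 1/r(k_u, k_v) over occurrence pairs within the same
    document (pairs in different documents contribute nothing, i.e. r = infinity);
    the normalized score divides by |V_i| * |V_j|.
    """
    c = 0.0
    n_i = sum(len(p) for p in pos_i.values())
    n_j = sum(len(p) for p in pos_j.values())
    for doc, positions_i in pos_i.items():
        for u in positions_i:
            for v in pos_j.get(doc, []):
                c += 1.0 / max(abs(u - v), 1)    # word distance; guard against r = 0
    s = c / (n_i * n_j) if n_i and n_j else 0.0
    return c, s
```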

  50. Query Expansion with Correlation Matrix • For each term i in the query, expand the query with the n terms j that have the highest values of c_ij (or s_ij). • This adds semantically related terms from the “neighborhood” of the query terms.
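Putting the pieces together, a sketch of this expansion step using the normalized matrix S from the earlier sketch (the helper name and parameter n are illustrative):

```python
import numpy as np

def expand_with_correlations(query_stems, stems, S, n=2):
    """Add, for each query stem, the n stems with the highest correlation score.

    stems -- list giving the stem label for each row/column of S
    S     -- normalized association (or metric correlation) matrix
    """
    index = {s: i for i, s in enumerate(stems)}
    expanded = list(query_stems)
    for q in query_stems:
        i = index.get(q)
        if i is None:
            continue
        scores = np.array(S[i], dtype=float)
        scores[i] = -np.inf                      # never re-add the query stem itself
        for j in np.argsort(scores)[::-1][:n]:   # top-n most correlated stems
            expanded.append(stems[j])
    return expanded
```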
