Final Project of Information Retrieval and Extraction


Presentation Transcript


  1. Final Project of Information Retrieval and Extraction by d93921022 吳蕙如

2. Working Environment

OS: Linux 7.3
CPU: C800MHz
Memory: 128 MB
Tools used: stopper, stemmer, trec_eval, sqlite
Languages used:
  shell script: controls the inverted-file indexing procedures
  AWK: extracts the needed parts from the documents (sketched below)
  SQL: used when adopting the file-format database, sqlite
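
As an illustration of the awk role, here is a minimal sketch of extracting the text portion of one source file; the <TEXT> tag names follow the TREC FBIS SGML format, and the file names are hypothetical:

    # Keep only the lines between <TEXT> and </TEXT> for each document.
    awk '/<TEXT>/  { inside = 1; next }
         /<\/TEXT>/ { inside = 0; next }
         inside' fbis3.src > fbis3.docs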

3. First Indexing Trial

FBIS Source Files
Documents Separation: 18'51" + 55'13"
Documents Pass Stemmer: 33'52" + 1:00'58"
Documents Pass Stopper: 33'23" + 1:09'29"
Words Sort by AWK: 44'07" + 1:19'09"
Term Frequency Count and Inverted File Indexing (one file per word): > 9 hours, never finished

When considering the indexing procedure, the most direct way is to do it step by step. So in the first trial, I performed each step and saved its result as the input of the next step. However, as the directory size grew, the time needed to write a file increased out of control. The time cost of generating the index files was unacceptable, and the run was stopped after 9 hours. (The per-word writing loop is sketched below.)
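
A rough sketch of what the one-file-per-word step looked like in this trial; the posting format (one "term docno tf" line per posting) and the file names are assumptions:

    #!/bin/sh
    # Append every posting to a file named after its term. One open,
    # seek and write per posting is why this never finished: the
    # directory grows to one file per distinct term.
    mkdir -p index
    while read term docno tf; do
        echo "$docno $tf" >> "index/$term"
    done < fbis3.counts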

4. Second Indexing Trial

FBIS Source Files
Documents Separation: 23'29" + 58'36"
Documents Pass Stemmer: 30'05" + 1:07'26"
Documents Pass Stopper: 22'34" + 52'29"
Words Sort by AWK: 22'44" + 48'27"
Words Count and Indexing:
  Two-Character Prefix Directory Separating: 5"
  Word Files Indexing: 12:41'00" + break

The index generation took too much time, which seemed to be caused by the number of files in a single directory. So I set up 26*26 subdirectories based on the first two characters of each word and spread the index files across them (see the sketch below). It still took too long, and this trial was stopped after almost 13 hours, having finished only FBIS3.
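
A sketch of the two-character directory split, assuming stemmed and stopped terms are lowercase alphabetic; ${term:0:2} is a bashism, and the file names are hypothetical:

    #!/bin/bash
    # Pre-create the 26*26 two-letter subdirectories (the 5" step above).
    for a in {a..z}; do
        for b in {a..z}; do
            mkdir -p "index/$a$b"
        done
    done
    # Route each term's index file into its two-letter subdirectory.
    while read term docno tf; do
        echo "$docno $tf" >> "index/${term:0:2}/$term"
    done < fbis3.counts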

5. Third Indexing Trial

FBIS Source Files
Documents Separation: 20'15" + 1:09'38"
Documents Pass Stemmer: 29'25" + 55'42"
Documents Pass Stopper and Sort: 34'17" + 1:05'48"
Words Count and Indexing:
  Prefix Directory Separating: 6"
  Word Files Indexing: (stopped after 11 hours)

Before finding a way to solve the time consumption of the indexing itself, the steps before it also cost a lot of time. I tried to combine those steps with pipeline commands, which only worked when using the system sort command. Piping the stopper output straight into sort saved at least one hour per run (see the sketch below), but the total time cost was still far from acceptable.
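
A sketch of the combined pipeline; separate_docs stands for a hypothetical document-separation script, and stemmer/stopper are the course-provided filters:

    #!/bin/sh
    # One pipeline per source file: the term stream flows straight from
    # separation through stemming and stopping into the system sort,
    # so no intermediate file is written before the sorted output.
    ./separate_docs fbis3.src | stemmer | stopper | sort > fbis3.sorted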

6. Fourth Indexing Trial

FBIS Source Files: 33'51" + 1:00'38"
Documents Separation
Documents Pass Stemmer
Documents Pass Stopper and Sort
Words Count and Indexing:
  Prefix Directory Separating: 2"
  Word Files Indexing: 13:14'23" + 14:15'12"

I finally found that the time was mostly spent searching for the location of the next write, a space-allocation characteristic of Linux file systems. So I combined the former steps into one run per source file, from raw source to sorted terms, and removed every intermediate file as soon as the next stage had consumed it (sketched below). The time consumption decreased amazingly, to about one-third of the previous trial, and indexing finished for the first time, after 29 hours.
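
A sketch of the per-source-file run, deleting each intermediate as soon as the next stage has used it; the paths and script names are hypothetical:

    #!/bin/sh
    # One run per source file; the working directory never accumulates
    # intermediate files, which keeps write-location lookups cheap.
    for f in FBIS3/* FBIS4/*; do
        b=$(basename "$f")
        awk -f separate.awk "$f" > "$b.docs"
        stemmer < "$b.docs" > "$b.stem" && rm "$b.docs"
        stopper < "$b.stem" | sort > "$b.sorted" && rm "$b.stem"
    done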

7. Fifth Indexing Trial

For Each FBIS Source File: 1:10'26" + 1:19'29"
Documents Separation
Documents Pass Stemmer
Documents Pass Stopper and Sort
Words Count and Database Indexing

The indexing still took so long, and I really wanted a way to cut the time cost. A file-format database looked like a solution, so I adopted sqlite and wrote all my index lines as table rows into one database file (sketched below). The total time cost immediately dropped to about two and a half hours. Amazing.
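
A minimal sketch of the sqlite step; the table name, the column layout ("term docno tf" per line) and the file names are assumptions, not the exact schema used:

    #!/bin/sh
    # Load all postings as rows of one table in one database file,
    # replacing hundreds of thousands of small index files.
    sqlite3 index.db <<'EOF'
    CREATE TABLE IF NOT EXISTS inv (term TEXT, docno TEXT, tf INTEGER);
    .separator " "
    .import fbis3.counts inv
    EOF

One sequential write into a single file is exactly what avoids the per-file allocation cost seen in the earlier trials.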

8. Indexing - Level Analysis

For Each FBIS Source File: 1:08'53" + 1:16'39" vs. 2:22'57"
document count: 61578 → 130417 vs. 130417 (same)
file size: 262877184 → 542937088 bytes vs. same
Documents Separation
Documents Pass Stemmer
Documents Pass Stopper and Sort
Words Count and Database Indexing

Since the whole indexing could now be done in 2.5 hours, I tried to measure the influence of the collection level: I indexed FBIS3 and then FBIS4 separately, then combined them into one set and indexed again. The time costs were nearly the same, and the document counts and file sizes were equal. This is not at all surprising, because the working procedure adds no outside information.

9. Sixth Indexing Trial

For Each FBIS Source File:
  variant 1: 35'49" + 39'47"
  variant 2: 33'04" + 35'43"
file size: 176340992 → 365469696 bytes
Documents Separation
Documents Pass Stemmer
Documents Pass Stopper and Sort
Words Count and Write into a Single Index File

Revisiting the fourth and fifth trials, I figured the problem might be the sheer number of index files, so I tried writing all the index lines into a single file. Two variants were tried (the first is sketched below):
  1. Write right after counting the term frequency of each word.
  2. Append after computing all frequencies of a document.
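
A sketch of the first variant, counting term frequencies for one document and appending "term docno tf" lines to the single index file; the document id and file names are illustrative:

    #!/bin/sh
    # Count each term's frequency in one document's sorted term list,
    # then append the postings to the single index file.
    uniq -c fbis3-0001.sorted \
        | awk -v doc="FBIS3-0001" '{ print $2, doc, $1 }' >> index.all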

10. Seventh Indexing Trial

For Each FBIS Source File: 44'38" + 50'32"
file number: 646 → 655
total file size: 178606080 → 367759360 bytes
Documents Separation
Documents Pass Stemmer
Documents Pass Stopper and Sort
Words Count and Write into 26*26 Index Files

When considering both query and indexing, a single index file is just too large and would take a long time to search for the wanted terms. So I modified the final step to write the index lines into different files based on each word's two-character prefix (sketched below).
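
A sketch of the final routing step; gawk keeps its output files open, so all ~650 prefix files can be appended to in one pass (file names assumed):

    # Send each "term docno tf" line to a file named after the term's
    # first two characters; a query then scans only one small file.
    mkdir -p index
    awk '{ print >> ("index/" substr($1, 1, 2)) }' index.all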

11. Indexing Time

trial 1: FBIS3 18'51" + 33'52" + 33'23" + 44'07" + ? >> 2:10'13"; FBIS4 55'13" + 1:00'58" + 1:09'29" + 1:19'09" + ? >> 4:24'49"; total >> 6:35'02"
trial 2: FBIS3 23'29" + 30'05" + 22'34" + 22'44" + 5" + 12:41'00" = 14:19'57"; FBIS4 58'36" + 1:07'26" + 52'29" + 48'27" + ? >> 3:46'58"; total >> 18:06'55"
trial 3: FBIS3 20'15" + 29'25" + 34'17" + 6" + ? >> 1:24'03"; FBIS4 1:09'38" + 55'42" + 1:05'48" + ? >> 3:11'08"; total >> 4:35'11"
trial 4: FBIS3 33'51" + 13:14'23" = 13:48'14"; FBIS4 1:00'38" + 14:15'12" = 15:15'50"; total 29:04'04"
trial 5: FBIS3 1:10'26"; FBIS4 1:19'29"; total 2:29'55"
trial 6-1: FBIS3 35'49"; FBIS4 39'47"; total 1:15'36"
trial 6-2: FBIS3 33'04"; FBIS4 35'43"; total 1:08'47"
trial 7: FBIS3 44'38"; FBIS4 50'32"; total 1:35'10"

12. First Topic Query

Extract Topics from Source Files and Pass Stemmer and Stopper: 1"
Select Per-Keyword Data from Index Database or Index File
Weight Computing
Ranking and Filtering
Evaluation

Five query topics, 15 keywords in total. Total time to query:
  index database: 13'38" → 31'27"
  single index file: 9'00" → 18'39"
  separated index files: 2'04"

This does not seem efficient enough: if several terms are examined together, more time should be saved. (The per-keyword lookup is sketched below.)
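
A sketch of this trial's one-lookup-per-keyword loop against the database variant; the table layout follows the earlier assumed schema, and topic1.terms is a hypothetical one-term-per-line file:

    #!/bin/sh
    # One SELECT per keyword: the table is searched once per term,
    # which is what makes this version slow.
    while read term; do
        sqlite3 index.db "SELECT docno, tf FROM inv WHERE term = '$term';"
    done < topic1.terms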

13. Second Topic Query

Extract Topics from Source Files and Pass Stemmer and Stopper
Generate One Query String for Each Topic
Select Data from Index Database or Index File
Weight Computing
Ranking and Filtering
Evaluation

Total time to query:
  index database: 2'30" → 5'19"
  single index file: 2'26" → 4'55"
  separated index files: not much progress expected, since each queried file still has to be checked separately. But as the number of query terms increases, using separated index files would save a lot more search time. (The batched SELECT is sketched below.)
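
A sketch of batching one topic's keywords into a single query string; the quoting is done in awk with the octal escape \047 for the single quote, and the file names are hypothetical:

    #!/bin/sh
    # Build "'t1','t2',..." from the term list, then run one SELECT
    # with IN (...) so the table is searched once per topic.
    terms=$(awk '{ printf "%s\047%s\047", sep, $0; sep = "," }' topic1.terms)
    sqlite3 index.db "SELECT term, docno, tf FROM inv WHERE term IN ($terms);"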

14. Updated Topic Query

Extract Topics from Source Files and Pass Stemmer and Stopper
Generate Query Strings Based on the Frequency of Each Term
Select Data from Index Database or Index File
Weight Computing
Ranking and Filtering
Evaluation

Some of the terms in the topics returned far too many documents and seemed not to work at all. I checked the document frequency of each term and removed the high-frequency (>10%) terms (the filter is sketched below). This alone did not help; more related terms are needed for better precision.
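
A sketch of the document-frequency filter, assuming a hypothetical df.txt of "term df" lines and the 130417-document collection size from the level analysis:

    # Drop query terms whose document frequency exceeds 10% of the
    # collection; the rest pass through unchanged.
    awk -v N=130417 'NR == FNR { df[$1] = $2; next }
                     df[$1] <= 0.10 * N' df.txt topic1.terms > topic1.filtered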

15. Frequency Term Query

Select Some Terms Based on the Descriptions, Narratives and Web Queries for Each Topic
Order These Terms by the Document Frequency of Each Word
Decide the Number of Terms to Use and Generate the Query Strings
The Following Steps Are the Same as Before

The number of terms was varied from five to 100. Precision increases only at the beginning of adding terms, while the query time rises proportionally with the number of query terms. High-frequency terms were removed with thresholds of 10% and 20%; the stricter limit (10%) seems to help. (The term-selection step is sketched below.)
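
A sketch of the term-selection step; candidates.df is a hypothetical "term df" file, and keeping 20 terms is just one of the tried sizes between five and 100:

    #!/bin/sh
    # Keep the candidate terms with the lowest document frequency,
    # after cutting those above the 10% threshold.
    awk -v N=130417 '$2 <= 0.10 * N' candidates.df \
        | sort -k2,2n | head -20 \
        | awk '{ print $1 }' > topic1.query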

16. Query: Topic

17. Query: Updated Topic

18. Query: Terms

  19. Query Time

20. Conclusion

As I examined the index file and the term frequencies I generated, I found that many terms seem to be useless. They may be meaningless, like "aaaf", or misspelled, like "internacion". Some terms have a frequency count of less than three. If these terms were removed, querying would run even faster, I suppose (a pruning filter is sketched below). I could have spent more time sorting and indexing the inverted file, but when I tried part of it, the time cost made me wonder whether it was worthwhile. Maybe a cache of recent queries is better than a full sort process. This is the end of my project report.
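
A sketch of the pruning idea, assuming a hypothetical totals.txt of "term collection-frequency" lines and the single-index-file layout from the sixth trial:

    # Drop postings for terms that occur fewer than three times in the
    # whole collection; garbage like "aaaf" disappears with them.
    awk 'NR == FNR { tot[$1] = $2; next }
         tot[$1] >= 3' totals.txt index.all > index.pruned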
