
Information Distance






Presentation Transcript


  1. Information Distance: More Applications

  2. 1. Information Distance from a Question to an Answer

  3. Question & Answer • Practical concerns: • Partial matching does not satisfy the triangle inequality. • When x is very popular and y is not, x contains a lot of information that is irrelevant w.r.t. y; then C(x|y) << C(y|x), and d(x,y) prefers y. • dmax does not satisfy universality. • Neighborhood density -- some answers are much more popular than others. • Nothing to compress: a question and an answer are both too short.

  4. Partial matching • The triangle inequality does not hold: d(man, horse) ≥ d(man, centaur) + d(centaur, horse), since the centaur partially matches both the man and the horse, making the two right-hand distances small.

  5. Separate Irrelevant Information • In the max theory, we wanted the smallest p converting between x and y. • Now let's remove the redundant information from p. • We now wish to minimize q + s + t. [Diagram: x converted to y by a single program p; then by a smaller program q together with side information s and t.]

  6. The Min Theory (Li, Int'l J. TCS, 2007; Zhang et al., KDD'2007) • Emin(x,y) = the smallest program p needed to convert between x and y, keeping irrelevant information out of p. • Formally: Emin(x,y) = min{ |p| : U(x,p,r) = y, U(y,p,q) = x, |p| + |q| + |r| ≤ E(x,y) }. • All other development is similar to E(x,y). • Fundamental Theorem II: Emin(x,y) = min{ C(x|y), C(y|x) }. • Define: dmin(x,y) = min{ C(x|y), C(y|x) } / min{ C(x), C(y) }.

  7. Other properties Theorem 1. dmin(x,y) ≤ dmax(x,y) Theorem 2. dmin(x,y) • is universal, • does not satisfy triangle inequality • is symmetric • has required density properties: good guys have more neighbors.

  8. How to approximate dmax(x,y), dmin(x,y) • Each term C(x|y) may be approximated by one of the following: • (1) Compression. • (2) Shannon-Fano code (Cilibrasi, Vitanyi): an object with probability p may be encoded in -log p + 1 bits. • (3) Mixed usage of (1) and (2), as in the question-and-answer application. This is especially useful for Q&A systems.

  9. Shannon-Fano Code • Consider N symbols 1, 2, …, N, with decreasing probabilities p1 ≥ p2 ≥ … ≥ pN. Let Pr = ∑i=1..r pi. The binary code E(r) for r is obtained by truncating the binary expansion of Pr at length |E(r)| such that -log pr ≤ |E(r)| < -log pr + 1. • Highly probable symbols are mapped to shorter codes, and 2^-|E(r)| ≤ pr < 2^(-|E(r)|+1). • Near optimal: let H = -∑r pr log pr be the entropy. The average code length satisfies H ≤ ∑r pr |E(r)| < ∑r (-log pr + 1) pr = H + 1.
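A small sketch of this construction. (The slide truncates the cumulative probability Pr; the classical variant, used here, truncates the cumulative probability of the symbols *before* r, which keeps the code prefix-free; the code lengths are identical.)

```python
import math

def shannon_code(probs):
    """Shannon-Fano-style code: sort probabilities in decreasing order;
    the codeword for symbol r is the binary expansion of the cumulative
    probability of the preceding symbols, truncated to
    |E(r)| = ceil(-log2 p_r) bits, so -log p_r <= |E(r)| < -log p_r + 1."""
    probs = sorted(probs, reverse=True)
    codes, cum = [], 0.0
    for p in probs:
        length = math.ceil(-math.log2(p))
        bits, frac = [], cum
        for _ in range(length):          # truncate the binary expansion
            frac *= 2
            bits.append("1" if frac >= 1 else "0")
            frac -= int(frac)
        codes.append("".join(bits))
        cum += p
    return codes
```

For probabilities (1/2, 1/4, 1/8, 1/8) this yields codewords of lengths 1, 2, 3, 3, matching -log2 p exactly since the probabilities are dyadic.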

  10. Query-Answer System X. Zhang, Y. Hao, X. Zhu, M. Li, KDD’2007 • Adding conditions to normalized information distance, we built a Query-Answer system. • The information distance naturally measures • Good pattern matches – via compression • Frequently occurring items – via Shannon-Fano code • Mixed usage of the above two.

  11. Some comparisons • The benchmark is a common factoid QA test set of 109 questions provided by the Text Retrieval Conference (TREC), sponsored by the National Institute of Standards and Technology (NIST). • MRAR = (∑i C(i)/i)/N = (C1 + C2·0.5 + C3·0.33 + C4·0.25 + C5·0.2)/109, where C(i) is the number of questions whose first correct answer appears at rank i.
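Reading C(i) as the number of questions whose first correct answer appears at rank i (the usual interpretation of this mean-reciprocal-rank formula), the metric can be computed as:

```python
def mrar(rank_counts, n_questions):
    """Mean reciprocal answer rank: a question answered correctly at
    rank i contributes 1/i; rank_counts[i-1] holds C(i), the number of
    questions first answered correctly at rank i (ranks 1..5 here)."""
    return sum(c / i for i, c in enumerate(rank_counts, start=1)) / n_questions
```

For example, two questions answered at ranks 1 and 2 give (1 + 0.5)/2 = 0.75, and a system answering all 109 questions at rank 1 scores 1.0.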

  12. 2. Parameter-Free Data Mining (Keogh, Lonardi, Ratanamahatana, KDD'04) • Most data mining algorithms require setting many input parameters. Parameter-laden methods carry two dangers: • Wrong parameter settings lead to errors. • Overfitting causes further problems. • Data mining algorithms should ideally have no parameters – thus no prejudices, expectations, or presumptions. • They compared (a variant of) information distance with every time-series distance measure (51 of them) that appeared in SIGKDD, SIGMOD, ICDM, ICDE, VLDB, ICML, SSDB, PKDD, PAKDD during the previous decade.

  13. Experiment • 18 pairs of time series (length 1000 each). • Q = correct / total. • Information distance: Q = 1. • For ¾ of the other measures, Q = 0; the best of them, HMM, achieved Q = 0.33. • Data cover finance, science, medicine, industry. • All data in SAX format.

  14. Anomaly detection (KLR, KDD'04) – algorithm: use divide and conquer to find a region that has a large information distance from the other parts. * All other methods produced wrong results – different from the cardiologists'.

  15. Anomaly Detection (KLR KDD’04)

  16. 3. Identifying Multiword Expressions (Fan Bu, Tsinghua University) • Multiword expressions (MWEs) appear frequently, and often ungrammatically, in English. • An MWE is a sequence of neighboring words whose meaning cannot be derived from the meanings of its components. Example: "Kolmogorov complexity". • Automatically identifying MWEs is a major challenge in computational linguistics.

  17. Distance from an n-gram to its semantics • Given an n-gram g, define the "semantics" of g to be the set of all web pages containing all the words of g. • The (plain) information distance then simplifies to: Dmax(g, its semantics) = log(#pages(g's semantics) / #pages(g)). (Every page containing g also contains all its words, so the ratio is ≥ 1 and the distance is nonnegative.)
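A sketch of this estimate, assuming the page counts come from some external source (e.g. search-engine hit counts) and with the ratio oriented so the distance is nonnegative — every page containing the exact phrase g also contains all of g's words:

```python
import math

def dmax_ngram(pages_phrase: int, pages_all_words: int) -> float:
    """Distance between an n-gram g and its 'semantics' (the pages
    containing all of g's words). pages_phrase <= pages_all_words, so
    the log-ratio is >= 0. Presumably it is large for compositional
    phrases (the words often co-occur without forming the phrase) and
    near 0 for non-compositional MWEs like "Kolmogorov complexity"."""
    return math.log2(pages_all_words / pages_phrase)
```

The base of the logarithm only rescales the distance and does not affect rankings.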

  18. Experiment on 1529 idioms and 1798 compositional phrases

  19. Complex named-entity extraction:

  20. 4. Texture classification (Campana & Keogh, 2010) • Why are we interested in this?

  21. Their method • Used, as information distance: d(x,y) = [K(x|y) + K(y|x)] / K(xy). • Images x, y are in MPEG-1 format. • To approximate K(x|y), they created a movie from the pair of frames x and y and applied (lossy) MPEG video compression.

  22. Note that the algorithm has no access to color or shape information; this clustering is based only on texture. Dictionnaire d'Histoire Naturelle by Charles d'Orbigny, 1849.

  23. The algorithm can handle very subtle differences. Ornaments from the hand-press period, when blocks of wood were used to print the ornaments in books. Specialist historians want to record the individual instances of ornament occurrence and to identify the individual blocks. This identification work is very useful for dating books and for authenticating the output of certain printing-houses and authors: numerous editions published in past centuries do not reveal their true origin on the title page. Blocks could be re-used to print several books, exchanged between printing-houses, or duplicated when damaged. (Mathieu Delalandre)

  24. Egyptian  Clovis 

  25. 5. Gene Expression Dynamics (Nykter et al., PNAS, 2008) • Macrophages (white blood cells). • 94 microarrays, 9941 differentially expressed genes. • Computed the normalized information distance between each pair of time-point measurements. • Observed that the underlying dynamical network of the macrophage exhibits criticality. • Somebody figure out what this is about – and present it in class.

  26. 6. Tree from metabolic networks (Nykter et al., Phys. Rev. Lett. 2008) • Metabolic networks of 107 organisms from KEGG. • The normalized information distance was computed between each pair, and a tree was generated.

  27. Red – bacteria • Blue – archaea • Green – eukaryotes

  28. Phylogenetic Compression (Ané & Sanderson, Syst. Biol. 2005) • Several interesting questions from bioinformatics: • Is the parsimony tree the best tree? • Can DNA sequences be optimally compressed from the sequence alone? • The authors propose a scheme that encodes the tree and the data simultaneously, minimizing the descriptive complexity of the tree(s) plus the data. • It achieves better compression and is more economical than the parsimony tree.
