
Statistical Methods for Mining Big Text Data


Presentation Transcript


  1. Statistical Methods for Mining Big Text Data ChengXiang Zhai Department of Computer Science Graduate School of Library & Information Science Institute for Genomic Biology Department of Statistics University of Illinois, Urbana-Champaign http://www.cs.illinois.edu/homes/czhai czhai@illinois.edu 2014 ADC PhD School in Big Data, The University of Queensland, Brisbane, Australia, July 14, 2014

  2. Rapid Growth of Text Information • WWW • Desktop • Intranet • Email • Literature • Blogs/Tweets • … How can we help people manage and exploit all of this information?

  3. Text Information Systems Applications • Access: connect users with the right information at the right time (select information) • Mining: discover patterns in text and turn text data into actionable knowledge (create knowledge; the focus of this tutorial) • Organization: add structure/annotations to text

  4. Goal of the Tutorial • Brief introduction to the emerging area of applying statistical topic models to text mining (TM) • Targeted audience: • Practitioners working on developing intelligent text information systems who are interested in learning about cutting-edge text mining techniques • Researchers who are looking for new research problems in text data mining, information retrieval, and natural language processing • Emphasis is on basic concepts, principles, and major application ideas • Accessible to anyone with basic knowledge of probability and statistics Check out David Blei’s tutorials on this topic for a more complete coverage of advanced topic models: http://www.cs.princeton.edu/~blei/topicmodeling.html

  5. Outline • Background • Text Mining (TM) • Statistical Language Models • Basic Topic Models • Probabilistic Latent Semantic Analysis (PLSA) • Latent Dirichlet Allocation (LDA) • Applications of Basic Topic Models to Text Mining • Advanced Topic Models • Capturing Topic Structures • Contextualized Topic Models • Supervised Topic Models • Summary We are here

  6. What is Text Mining? • Data Mining View: Explore patterns in textual data • Find latent topics • Find topical trends • Find outliers and other hidden patterns • Natural Language Processing View: Make inferences based on partial understanding of natural language text • Information extraction • Question answering

  7. Applications of Text Mining • Direct applications • Discovery-driven (Bioinformatics, Business Intelligence, etc): We have specific questions; how can we exploit data mining to answer the questions? • Data-driven (WWW, literature, email, customer reviews, etc): We have a lot of data; what can we do with it? • Indirect applications • Assist information access (e.g., discover major latent topics to better summarize search results) • Assist information organization (e.g., discover hidden structures to link scattered information)

  8. Text Mining Methods • Data Mining Style: View text as high dimensional data • Frequent pattern finding • Association analysis • Outlier detection • Information Retrieval Style: Fine granularity topical analysis • Topic extraction • Exploit term weighting and text similarity measures • Natural Language Processing Style: Information Extraction • Entity extraction • Relation extraction • Sentiment analysis • Machine Learning Style: Unsupervised or semi-supervised learning • Mixture models • Dimension reduction This tutorial

  9. Outline • Background • Text Mining (TM) • Statistical Language Models • Basic Topic Models • Probabilistic Latent Semantic Analysis (PLSA) • Latent Dirichlet Allocation (LDA) • Applications of Basic Topic Models to Text Mining • Advanced Topic Models • Capturing Topic Structures • Contextualized Topic Models • Supervised Topic Models • Summary We are here

  10. What is a Statistical Language Model? • A probability distribution over word sequences • p(“Today is Wednesday”) ≈ 0.001 • p(“Today Wednesday is”) ≈ 0.0000000000001 • p(“The eigenvalue is positive”) ≈ 0.00001 • Context-dependent! • Can also be regarded as a probabilistic mechanism for “generating” text, thus also called a “generative” model

  11. Why is a LM Useful? • Provides a principled way to quantify the uncertainties associated with natural language • Allows us to answer questions like: • Given that we see “John” and “feels”, how likely will we see “happy” as opposed to “habit” as the next word? (speech recognition) • Given that we observe “baseball” three times and “game” once in a news article, how likely is it about “sports”? (text categorization, information retrieval) • Given that a user is interested in sports news, how likely would the user use “baseball” in a query? (information retrieval)

  12. Source-Channel Framework for “Traditional” Applications of SLMs • The source emits X, the transmitter (encoder) sends it through a noisy channel, and the receiver (decoder) recovers X’ at the destination: the source is modeled by P(X), the channel by P(Y|X), and decoding picks X’ = argmax P(X|Y) = argmax P(Y|X)P(X) (Bayes rule) • When X is text, p(X) is a language model • Many examples: Speech recognition: X = word sequence, Y = speech signal; Machine translation: X = English sentence, Y = Chinese sentence; OCR error correction: X = correct word, Y = erroneous word; Information retrieval: X = document, Y = query; Summarization: X = summary, Y = document • This tutorial is about another type of application of SLMs (i.e., topic mining)

  13. The Simplest Language Model (Unigram Model) • Generate a piece of text by generating each word INDEPENDENTLY • Thus, p(w1 w2 ... wn) = p(w1)p(w2)…p(wn) • Parameters: {p(wi)}, with p(w1)+…+p(wN) = 1 (N is the vocabulary size) • A piece of text can be regarded as a sample drawn according to this word distribution • Example: P(“today is Wed”) = P(“today”)p(“is”)p(“Wed”) = 0.0002 × 0.001 × 0.000015
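A minimal sketch of the unigram independence assumption in Python (the word probabilities are the illustrative numbers from the slide, not estimates from real data):

```python
import math

# Toy unigram LM: each word has a fixed probability, independent of position/context.
unigram_lm = {"today": 0.0002, "is": 0.001, "wed": 0.000015}

def log_prob(text, lm, unseen=1e-12):
    # log p(w1 ... wn) = sum_i log p(wi) under the unigram assumption
    return sum(math.log(lm.get(w, unseen)) for w in text.lower().split())

print(math.exp(log_prob("today is Wed", unigram_lm)))  # 0.0002 * 0.001 * 0.000015
```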

  14. Text Generation with Unigram LM • A (unigram) language model θ with word distribution p(w|θ) generates a document d by sampling words • Topic 1 (Text mining): text 0.2, mining 0.1, association 0.01, clustering 0.02, …, food 0.00001, … • Topic 2 (Health): food 0.25, nutrition 0.1, healthy 0.05, diet 0.02, … • Given θ, p(d|θ) varies according to d; given d, p(d|θ) varies according to θ

  15. Estimation of Unigram LM • Given a document with word counts text 10, mining 5, association 3, database 3, algorithm 2, …, query 1, efficient 1 (total #words = 100), estimate the (unigram) language model p(w|θ) = ? • Maximum Likelihood (ML) Estimator (maximizing the probability of observing document d): p(w|θ) = c(w,d)/|d|, e.g., text 10/100, mining 5/100, association 3/100, database 3/100, …, query 1/100, …

  16. Maximum Likelihood vs. Bayesian • Maximum likelihood estimation • “Best” means “data likelihood reaches maximum” • Problem: small sample • Bayesian estimation • “Best” means being consistent with our “prior” knowledge and explaining data well • Problem: how to define the prior? • In general, we consider the posterior distribution of θ, so a point estimate can be obtained in multiple ways (e.g., posterior mean vs. posterior mode)

  17. Illustration of Bayesian Estimation • Likelihood: p(X|θ), with X = (x1,…,xN) • Prior: p(θ) • Posterior: p(θ|X) ∝ p(X|θ)p(θ) • θ0: prior mode; θ: posterior mode; θml: ML estimate

  18. Computation of Maximum Likelihood Estimate • Data: a document d with counts c(w1), …, c(wN), and length |d| • Model: unigram LM with parameters θ = {θi}, θi = p(wi|θ) • Maximize the log-likelihood log p(d|θ) = Σi c(wi) log θi subject to Σi θi = 1, using the Lagrange multiplier approach • Setting the partial derivatives to zero and using the constraint gives the ML estimate θi = c(wi)/|d|, i.e., the normalized counts
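A sketch of the ML estimate as normalized counts (the toy document below mirrors the example counts above):

```python
from collections import Counter

def mle_unigram(doc_words):
    # ML estimate of a unigram LM: p(w|theta) = c(w, d) / |d|
    counts = Counter(doc_words)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# A 100-word toy document with counts text=10, mining=5, association=3, ...
doc = ["text"] * 10 + ["mining"] * 5 + ["association"] * 3 + ["other"] * 82
theta = mle_unigram(doc)
print(theta["text"], theta["mining"])  # 0.1 0.05
```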

  19. Computation of Bayesian Estimate • ML estimator: θ̂ = argmaxθ p(d|θ) • Bayesian estimator: first consider the posterior p(θ|d) ∝ p(d|θ)p(θ), then take the mean or mode of the posterior distribution • p(d|θ): sampling distribution (of the data) • p(θ) = p(θ1,…,θN): our prior on the model parameters • Conjugate prior = the prior can be interpreted as “extra”/“pseudo” data • The Dirichlet distribution is a conjugate prior for the multinomial sampling distribution; its parameters act as “extra”/“pseudo” word counts

  20. Computation of Bayesian Estimate (cont.) • Posterior distribution of the parameters: a Dirichlet with updated counts, p(θ|d) ∝ Πi θi^(c(wi,d)+αi−1) • Thus the posterior mean estimate is p(wi|θ) = (c(wi,d) + αi) / (|d| + Σj αj) • Compare this with the ML estimate c(wi,d)/|d|: each word gets unequal extra “pseudo counts” αi based on the prior, and Σj αj is the total “pseudo counts” for all words
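A sketch of this posterior-mean (Dirichlet-smoothed) estimate; the pseudo-count values below are illustrative:

```python
def dirichlet_posterior_mean(counts, pseudo_counts):
    # p(w|theta) = (c(w,d) + alpha_w) / (|d| + sum_w' alpha_w')
    vocab = set(counts) | set(pseudo_counts)
    total = sum(counts.values()) + sum(pseudo_counts.values())
    return {w: (counts.get(w, 0) + pseudo_counts.get(w, 0.0)) / total for w in vocab}

counts = {"text": 10, "mining": 5, "query": 1}
alpha = {"text": 1.0, "mining": 1.0, "query": 1.0, "retrieval": 1.0}  # pseudo counts
estimate = dirichlet_posterior_mean(counts, alpha)
print(estimate["retrieval"])  # nonzero even though "retrieval" was never observed
```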

  21. Unigram LMs for Topic Analysis • Document LM p(w|θd), estimated from a text mining paper d: the 0.031, a 0.018, …, text 0.04, mining 0.035, association 0.03, clustering 0.005, computer 0.0009, …, food 0.000001, … • Collection LM p(w|θC), estimated from a collection C of computer science papers: the 0.032, a 0.019, is 0.014, we 0.011, …, computer 0.004, software 0.0001, …, text 0.00006, … • Background LM p(w|θB), estimated from general background English text: the 0.03, a 0.02, is 0.015, we 0.01, …, food 0.003, computer 0.00001, …, text 0.000006, …

  22. Unigram LMs for Association Analysis • What words are semantically related to “computer”? • Topic LM p(w|“computer”), estimated from all the documents containing the word “computer”: the 0.032, a 0.019, is 0.014, we 0.008, computer 0.004, software 0.0001, …, text 0.00006, … • Background LM p(w|θB), general background English text: the 0.03, a 0.02, is 0.015, we 0.01, …, computer 0.00001, … • Normalized Topic LM p(w|“computer”)/p(w|θB): computer 400, software 150, program 104, …, text 3.0, …, the 1.1, a 0.99, is 0.9, we 0.8; words with high normalized values are the ones associated with “computer”
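The normalization is just an element-wise ratio of the two distributions; a minimal sketch (values loosely follow the slide, with the background probabilities for “software” and “text” filled in purely for illustration):

```python
topic_lm = {"the": 0.032, "a": 0.019, "computer": 0.004, "software": 0.0001, "text": 0.00006}
background_lm = {"the": 0.03, "a": 0.02, "computer": 0.00001, "software": 6.7e-7, "text": 0.00002}

# Words whose probability is boosted relative to the background are the
# ones semantically associated with "computer"; common words score near 1.
normalized = {w: topic_lm[w] / background_lm[w] for w in topic_lm if w in background_lm}
for w, ratio in sorted(normalized.items(), key=lambda kv: -kv[1]):
    print(w, round(ratio, 2))
```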

  23. More Sophisticated LMs • Mixture of unigram language models • Assume multiple unigram LMs are involved in generating text data • Estimation of multiple unigram LMs “discovers” (recovers) latent topics in text • Other sophisticated LMs (see [Jelinek 98, Manning & Schutze 99, Rosenfeld 00]) • N-gram language models: p(w1 w2 ... wn)=p(w1)p(w2|w1)…p(wn|w1 …wn-1) • Remote-dependence language models (e.g., Maximum Entropy model) • Structured language models (e.g., probabilistic context-free grammar) Focus of this tutorial

  24. Evaluation of SLMs • Direct evaluation criterion: How well does the model fit the data to be modeled? • Example measures: Data likelihood, perplexity, cross entropy, Kullback-Leibler divergence (mostly equivalent) • Indirect evaluation criterion: Does the model help improve the performance of the task? • Specific measure is task dependent; for text mining, we look at whether a model is effective for the text mining task at hand • We hope an “improvement” of a LM would lead to better task performance

  25. Outline • Background • Text Mining (TM) • Statistical Language Models • Basic Topic Models • Probabilistic Latent Semantic Analysis (PLSA) • Latent Dirichlet Allocation (LDA) • Applications of Basic Topic Models to Text Mining • Advanced Topic Models • Capturing Topic Structures • Contextualized Topic Models • Supervised Topic Models • Summary We are here

  26. Document as a Sample of Mixed Topics [ Criticism of government response to the hurricane primarily consisted of criticism of its response to the approach of the storm and its aftermath, specifically in the delayed response ] to the [ flooding of New Orleans. … 80% of the 1.3 million residents of the greater New Orleans metropolitan area evacuated ] … [ Over seventy countries pledged monetary donations or other assistance ]. … • Example topic word distributions: Topic 1: government 0.3, response 0.2, …; Topic 2: city 0.2, new 0.1, orleans 0.05, …; Topic k: donate 0.1, relief 0.05, help 0.02, …; Background: is 0.05, the 0.04, a 0.03, … • How can we discover these topic word distributions? • Many applications would be enabled by discovering such topics • Summarize themes/aspects • Facilitate navigation/browsing • Retrieve documents • Segment documents • Many other text mining tasks

  27. Simplest Case: 1 topic + 1 “background” • Assume words in d (a text mining paper) are drawn from two distributions: 1 topic θd + 1 background θB (rather than just one) • Document LM p(w|θd): the 0.031, a 0.018, …, text 0.04, mining 0.035, association 0.03, clustering 0.005, computer 0.0009, …, food 0.000001, … • Background LM p(w|θB), general background English text: the 0.03, a 0.02, is 0.015, we 0.01, …, food 0.003, computer 0.00001, …, text 0.000006, … • How can we “get rid of” the common words from the topic to make it more discriminative?

  28. The Simplest Case: One Topic + One Background Model • Topic choice: with probability λ a word w in document d is a background word drawn from p(w|θB), and with probability 1 − λ it is a topic word drawn from p(w|θ) • Assume p(w|θB) and λ are known; λ = assumed percentage of background words in d • Estimate θ by Maximum Likelihood: the likelihood of d is Πw [λ p(w|θB) + (1 − λ) p(w|θ)]^c(w,d)

  29. Understanding a Mixture Model • Known background p(w|θB): the 0.2, a 0.1, we 0.01, to 0.02, …, text 0.0001, mining 0.00005, … • Unknown query topic p(w|θ) = ? (“Text mining”): text = ?, mining = ?, association = ?, word = ?, … • Suppose each model would be selected with equal probability, λ = 0.5 • The probability of observing word “text”: λ p(“text”|θB) + (1 − λ) p(“text”|θ) = 0.5 * 0.0001 + 0.5 * p(“text”|θ) • The probability of observing word “the”: λ p(“the”|θB) + (1 − λ) p(“the”|θ) = 0.5 * 0.2 + 0.5 * p(“the”|θ) • The probability of observing “the” & “text” (likelihood): [0.5 * 0.0001 + 0.5 * p(“text”|θ)] × [0.5 * 0.2 + 0.5 * p(“the”|θ)] • How to set p(“the”|θ) and p(“text”|θ) so as to maximize this likelihood? Assume p(“the”|θ) + p(“text”|θ) = constant; then give p(“text”|θ) a higher probability than p(“the”|θ) (why?) • θB and θ are competing to explain the words in document d!

  30. Simplest Case Continued: How to Estimate θ? • Known background p(w|θB), selected with probability λ = 0.7: the 0.2, a 0.1, we 0.01, to 0.02, …, text 0.0001, mining 0.00005, … • Unknown query topic p(w|θ) = ? (“Text mining”), selected with probability 1 − λ = 0.3: text = ?, mining = ?, association = ?, word = ?, … • Suppose we know the identity/label of each observed word; then the ML estimator of θ simply normalizes the counts of the words labeled as topic words

  31. Can We Guess the Identity? • Identity (“hidden”) variable: zi ∈ {1 (background), 0 (topic)} • Example text: “the paper presents a text mining algorithm the paper ...”, with guessed labels zi = 1 1 1 1 0 0 0 1 0 ... • Suppose the parameters are all known, what’s a reasonable guess of zi? It depends on λ (why?) and on p(w|θB) and p(w|θ) (how?) • E-step: guess zi for each word using the current parameter values; M-step: re-estimate p(w|θ) from the words (fractionally) assigned to the topic • Initially, set p(w|θ) to some random values, then iterate …

  32. An Example of EM Computation • Expectation-Step: augment the data by guessing the hidden variables, i.e., compute p(zw = B) for each word w given the current parameter values • Maximization-Step: with the “augmented data”, re-estimate the parameters using maximum likelihood on the fractional counts • Assume λ = 0.5
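A compact sketch of this E-step/M-step loop for the one-topic-plus-background model (the vocabulary, counts, and background probabilities below are illustrative; λ = 0.5 as on the slide):

```python
def em_topic_plus_background(counts, background, lam=0.5, iters=50):
    # Estimate p(w|theta) when each word occurrence comes from the known
    # background (with prob. lam) or from the unknown topic (with prob. 1 - lam).
    vocab = list(counts)
    theta = {w: 1.0 / len(vocab) for w in vocab}  # uniform initialization
    for _ in range(iters):
        # E-step: probability that each word occurrence came from the topic
        p_topic = {w: (1 - lam) * theta[w] /
                      ((1 - lam) * theta[w] + lam * background[w]) for w in vocab}
        # M-step: re-estimate theta from the fractional topic counts c(w) * p_topic(w)
        frac = {w: counts[w] * p_topic[w] for w in vocab}
        total = sum(frac.values())
        theta = {w: frac[w] / total for w in vocab}
    return theta

counts = {"the": 8, "text": 4, "mining": 3}
background = {"the": 0.2, "text": 0.0001, "mining": 0.00005}
print(em_topic_plus_background(counts, background))  # "text"/"mining" dominate theta
```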

  33. Discover Multiple Topics in a Collection • “Generating” word w in doc d in the collection: with probability λB (the percentage of background words) draw w from the background θB; otherwise pick topic θj with probability πd,j (the coverage of topic θj in doc d) and draw w from p(w|θj) • Example topics: Topic 1: warning 0.3, system 0.2, …; Topic 2: aid 0.1, donation 0.05, support 0.02, …; Topic k: statistics 0.2, loss 0.1, dead 0.05, …; Background θB: is 0.05, the 0.04, a 0.03, … • Parameters: Λ = (λB, {πd,j}, {θj}), which can be estimated using the ML estimator

  34. Probabilistic Latent Semantic Analysis/Indexing (PLSA/PLSI) [Hofmann 99a, 99b] • Mix k multinomial distributions to generate a document • Each document has a potentially different set of mixing weights which captures the topic coverage • When generating words in a document, each word may be generated using a DIFFERENT multinomial distribution (this is in contrast with the document clustering model where, once a multinomial distribution is chosen, all the words in a document would be generated using the same multinomial distribution) • By fitting the model to text data, we can estimate (1) the topic coverage in each document, and (2) word distribution for each topic, thus achieving “topic mining”

  35. How to Estimate Multiple Topics? (Expectation Maximization) • Known background p(w|θB): the 0.2, a 0.1, we 0.01, to 0.02, … • Unknown topic model p(w|θ1) = ? (“Text mining”): text = ?, mining = ?, association = ?, word = ?, … • Unknown topic model p(w|θ2) = ? (“Information retrieval”): information = ?, retrieval = ?, query = ?, document = ?, … • E-Step: predict the topic label of each observed word using Bayes rule • M-Step: Maximum Likelihood estimation based on the “fractional counts”

  36. Parameter Estimation • E-Step: apply Bayes rule to compute the probability that word w in doc d is generated from cluster j, p(zd,w = j) ∝ πd,j p(w|θj), and from the background, p(zd,w = B) = λB p(w|θB) / [λB p(w|θB) + (1 − λB) Σj πd,j p(w|θj)] • M-Step: re-estimate the mixing weights and topic LMs from the fractional counts: πd,j ∝ Σw c(w,d)(1 − p(zd,w = B)) p(zd,w = j) (fractional counts contributing to using cluster j in generating d), and p(w|θj) ∝ Σd c(w,d)(1 − p(zd,w = B)) p(zd,w = j) (fractional counts contributing to generating w from cluster j; the sum is over all docs in the collection)

  37. How the Algorithm Works • Toy example with two documents and two topics; word counts c(w, d): d1: aid 7, price 5, oil 6; d2: aid 8, price 7, oil 5 • Initialize the topic coverage πd,j ( = P(θj|d) ) and the topic word distributions P(w|θj) with random values • Iteration 1: E-step: split the word counts between topics (by computing the z’s); M-step: re-estimate πd,j and P(w|θj) by adding and normalizing the split counts c(w,d)(1 − p(zd,w = B)) p(zd,w = j) • Iteration 2: repeat the E-step and M-step with the updated estimates • Iterations 3, 4, 5, … until convergence
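A sketch of this E/M loop for plain PLSA in numpy, omitting the background component for brevity (the matrix layout, initialization, and toy counts are assumptions for illustration):

```python
import numpy as np

def plsa(counts, k, iters=100, seed=0):
    # counts: (n_docs, vocab) term-count matrix.
    # Returns pi (n_docs, k) topic coverage and phi (k, vocab) topic word distributions.
    rng = np.random.default_rng(seed)
    n_docs, vocab_size = counts.shape
    pi = rng.random((n_docs, k))
    pi /= pi.sum(axis=1, keepdims=True)
    phi = rng.random((k, vocab_size))
    phi /= phi.sum(axis=1, keepdims=True)
    for _ in range(iters):
        # E-step: p(z = j | d, w) for every doc/word pair -> shape (n_docs, k, vocab)
        joint = pi[:, :, None] * phi[None, :, :]
        post = joint / joint.sum(axis=1, keepdims=True).clip(min=1e-12)
        # M-step: re-estimate from fractional counts c(w,d) * p(z = j | d, w)
        frac = counts[:, None, :] * post
        pi = frac.sum(axis=2)
        pi /= pi.sum(axis=1, keepdims=True)
        phi = frac.sum(axis=0)
        phi /= phi.sum(axis=1, keepdims=True)
    return pi, phi

# Toy example: two docs over the vocabulary [aid, price, oil], as on the slide.
counts = np.array([[7.0, 5.0, 6.0], [8.0, 7.0, 5.0]])
pi, phi = plsa(counts, k=2)
print(pi.round(2))
print(phi.round(2))
```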

  38. PLSA with Prior Knowledge • Users have some domain knowledge in mind, e.g., • We expect to see “retrieval models” as a topic in IR literature • We want to see aspects such as “battery” and “memory” for opinions about a laptop • One topic should be fixed to model background words (infinitely strong prior!) • We can easily incorporate such knowledge as priors of PLSA model

  39. Adding Prior: Maximum A Posteriori (MAP) Estimation • Same generative process as before: topic coverage πd,j in document d; topic word distributions θ1 (warning 0.3, system 0.2, …), θ2 (aid 0.1, donation 0.05, support 0.02, …), …, θk (statistics 0.2, loss 0.1, dead 0.05, …); background θB (is 0.05, the 0.04, a 0.03, …) with λB = noise level (manually set) • Now the π’s and θ’s are estimated with Maximum A Posteriori (MAP) estimation, i.e., the most likely parameters given the prior • A prior can be placed on the coverage π as well (more about this later)

  40. MAP Estimator: Adding the Prior as Pseudo Counts • Observed doc(s) plus a “pseudo doc” of size μ whose words (e.g., “text mining”) encode the prior on topic θ1 • Known background p(w|θB): the 0.2, a 0.1, we 0.01, to 0.02, … • Unknown topic model p(w|θ1) = ? (“Text mining”): text = ?, mining = ?, association = ?, word = ?, … • Unknown topic model p(w|θ2) = ? (“Information retrieval”): information = ?, retrieval = ?, query = ?, document = ?, … • Suppose we know the identity of each word; the pseudo counts are then simply added to the real counts when estimating θ1

  41. Maximum A Posteriori (MAP) Estimation • MAP estimate of the topic word distribution: p(w|θj) = [ Σd c(w,d)(1 − p(zd,w = B)) p(zd,w = j) + μ p(w|θ’j) ] / [ Σw’ Σd c(w’,d)(1 − p(zd,w’ = B)) p(zd,w’ = j) + μ ], where μ p(w|θ’j) is the pseudo count of w from the prior θ’j and μ is the sum of all pseudo counts • What if μ = 0? What if μ = +∞? • A consequence of using a conjugate prior is that the prior can be converted into “pseudo data”, which can then be “merged” with the actual data for parameter estimation
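A minimal sketch of this pseudo-count merge for one topic (the function and variable names are illustrative; fractional_counts stands for the E-step output Σd c(w,d)(1 − p(zd,w = B)) p(zd,w = j)):

```python
def map_update(fractional_counts, prior, mu):
    # Merge real fractional counts with mu * prior pseudo counts, then normalize.
    vocab = set(fractional_counts) | set(prior)
    total = sum(fractional_counts.values()) + mu
    return {w: (fractional_counts.get(w, 0.0) + mu * prior.get(w, 0.0)) / total
            for w in vocab}

frac = {"text": 12.3, "mining": 7.1, "the": 0.4}
prior = {"text": 0.5, "mining": 0.5}          # prior topic theta'_j (sums to 1)
print(map_update(frac, prior, mu=0.0))        # mu = 0: reduces to the ML estimate
print(map_update(frac, prior, mu=1e6))        # large mu: estimate pinned to the prior
```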

  42. A General Introduction to EM • Data: X (observed) + H (hidden); Parameter: θ • “Incomplete” likelihood: L(θ) = log p(X|θ) • “Complete” likelihood: Lc(θ) = log p(X, H|θ) • EM tries to iteratively maximize the incomplete likelihood: starting with an initial guess θ(0), 1. E-step: compute the expectation of the complete likelihood (the Q-function) 2. M-step: compute θ(n) by maximizing the Q-function
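In standard notation, consistent with the definitions above, the two steps can be written as:

\[ Q(\theta; \theta^{(n-1)}) \;=\; \mathbb{E}_{H \sim p(H \mid X,\, \theta^{(n-1)})}\!\left[\log p(X, H \mid \theta)\right], \qquad \theta^{(n)} \;=\; \arg\max_{\theta}\, Q(\theta; \theta^{(n-1)}) \]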

  43. Convergence Guarantee • Goal: maximize the “incomplete” likelihood L(θ) = log p(X|θ), i.e., choose θ(n) so that L(θ(n)) − L(θ(n−1)) ≥ 0 • Note that, since p(X, H|θ) = p(H|X, θ) p(X|θ), we have L(θ) = Lc(θ) − log p(H|X, θ) • Hence L(θ(n)) − L(θ(n−1)) = Lc(θ(n)) − Lc(θ(n−1)) + log [ p(H|X, θ(n−1)) / p(H|X, θ(n)) ] • Taking the expectation w.r.t. p(H|X, θ(n−1)): L(θ(n)) − L(θ(n−1)) = Q(θ(n); θ(n−1)) − Q(θ(n−1); θ(n−1)) + D( p(H|X, θ(n−1)) || p(H|X, θ(n)) ); the left-hand side doesn’t contain H, EM chooses θ(n) to maximize Q, and the KL-divergence is always non-negative • Therefore, L(θ(n)) ≥ L(θ(n−1))!

  44. EM as Hill-Climbing: converging to a local maximum • L(θ) = L(θ(n−1)) + Q(θ; θ(n−1)) − Q(θ(n−1); θ(n−1)) + D( p(H|X, θ(n−1)) || p(H|X, θ) ) ≥ L(θ(n−1)) + Q(θ; θ(n−1)) − Q(θ(n−1); θ(n−1)), which is a lower bound on the likelihood (the Q function, up to constants) that touches L(θ) at the current guess θ(n−1) • E-step = computing the lower bound; M-step = maximizing the lower bound to obtain the next guess θ(n)

  45. Outline • Background • Text Mining (TM) • Statistical Language Models • Basic Topic Models • Probabilistic Latent Semantic Analysis (PLSA) • Latent Dirichlet Allocation (LDA) • Applications of Basic Topic Models to Text Mining • Advanced Topic Models • Capturing Topic Structures • Contextualized Topic Models • Supervised Topic Models • Summary We are here

  46. Deficiency of PLSA • Not a generative model • Can’t compute the probability of a new document • Heuristic workaround is possible, though • Many parameters → high complexity of the model • Many local maxima • Prone to overfitting • Not necessarily a problem for text mining (we are only interested in fitting the “training” documents)

  47. Latent Dirichlet Allocation (LDA) [Blei et al. 02] • Make PLSA a generative model by imposing a Dirichlet prior on the model parameters θ • LDA = Bayesian version of PLSA • Parameters are regularized • Can achieve the same goal as PLSA for text mining purposes • Topic coverage and topic word distributions can be inferred using Bayesian inference

  48. LDA = Imposing a Prior on PLSA • PLSA: the topic coverage πd,j is specific to the “training documents”, thus can’t be used to generate a new document; {πd,j} are free parameters to be tuned • LDA: the topic coverage distribution {πd,j} for any document is sampled from a Dirichlet distribution (with parameters α), allowing the model to generate a new doc; {πd,j} are regularized • In addition, the topic word distributions {θj} are also drawn from another Dirichlet prior (with parameters β) • The magnitudes of α and β determine the variances of the priors, and thus also the strength of the priors (larger α and β → stronger prior)

  49. Equations for PLSA vs. LDA • Both share the core assumption in all topic models: a document’s words are drawn from a mixture of topic word distributions, pd(w) = Σj πd,j p(w|θj) • PLSA: the collection likelihood is the product of this mixture over all word occurrences, with {πd,j} and {θj} as free parameters • LDA: keeps the PLSA component but adds Dirichlet priors p(πd|α) and p(θj|β) and integrates them out • See the equations sketched below
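A standard way to write these likelihoods in the notation used above (a reconstruction consistent with the slide’s labels; the background component λB p(w|θB) used earlier is omitted for brevity):

\[ \text{Core assumption:}\quad p_d(w \mid \{\theta_j\}, \{\pi_{d,j}\}) \;=\; \sum_{j=1}^{k} \pi_{d,j}\, p(w \mid \theta_j) \]
\[ \text{PLSA:}\quad \log p(C \mid \{\theta_j\}, \{\pi_{d,j}\}) \;=\; \sum_{d \in C} \sum_{w} c(w,d)\, \log \sum_{j=1}^{k} \pi_{d,j}\, p(w \mid \theta_j) \]
\[ \text{LDA:}\quad p(C \mid \alpha, \beta) \;=\; \int \prod_{d \in C} \left[ \int \prod_{w} \Big( \sum_{j=1}^{k} \pi_{d,j}\, p(w \mid \theta_j) \Big)^{c(w,d)} p(\pi_d \mid \alpha)\, d\pi_d \right] \prod_{j=1}^{k} p(\theta_j \mid \beta)\, d\theta_j \]

Added by LDA: the Dirichlet priors p(πd|α) and p(θj|β), and the integration over πd and θj.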

  50. Parameter Estimation & Inferences in LDA • Parameter estimation can be done in the same way as in PLSA, with a Maximum Likelihood estimator of the collection likelihood (now a function of the Dirichlet hyperparameters) • However, the topic coverage and topic word distributions must now be computed using posterior inference • This is computationally intractable, so we must resort to approximate inference!
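In practice, approximate inference for LDA is available in off-the-shelf libraries; a minimal sketch using gensim’s variational-Bayes LDA (assuming gensim is installed; the toy corpus and parameter choices are purely illustrative):

```python
from gensim import corpora, models

# Toy corpus: each document is a list of tokens (real text would be preprocessed).
texts = [["text", "mining", "clustering", "text"],
         ["food", "nutrition", "diet", "healthy"],
         ["text", "mining", "association", "algorithm"]]

dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(doc) for doc in texts]

# Fit LDA with k = 2 topics; alpha and eta are the Dirichlet priors on
# topic coverage and topic word distributions, respectively.
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary,
                      passes=20, alpha="auto", random_state=0)

print(lda.print_topics())                  # word distribution of each topic
print(lda.get_document_topics(corpus[0]))  # inferred topic coverage of doc 0
```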
