
Presentation Transcript


  1. Review: Markov Logic Networks (Matthew Richardson and Pedro Domingos) Xinran (Sean) Luo, u0866707

  2. Overview • Markov Networks • First-order Logic • Markov Logic Networks • Inference • Learning • Experiments

  3. Markov Networks • Also known as Markov random fields. • Composed of: • An undirected graph G • A set of potential functions φk, one for each clique of G • Joint distribution: P(X = x) = (1/Z) ∏k φk(x{k}), where x{k} is the state of the kth clique • Z is the partition function: Z = Σx ∏k φk(x{k})
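
To make the definition above concrete, here is a minimal Python sketch (not part of the presentation); the two-variable network, its single clique, and the potential values are invented for illustration.

    import itertools

    # Hypothetical two-variable Markov network with a single clique {X1, X2}.
    # phi maps the clique's state to a non-negative potential value.
    phi = {(0, 0): 4.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 4.0}

    # Partition function Z: sum of the clique potential over all joint states.
    Z = sum(phi[state] for state in itertools.product([0, 1], repeat=2))

    def prob(state):
        """P(X = x) = (1/Z) * product of clique potentials (only one clique here)."""
        return phi[state] / Z

    print(prob((1, 1)))  # 4.0 / 10.0 = 0.4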

  4. Markov Networks • Log-linear models: each clique potential function is replaced by an exponentiated weighted sum of features of the state: P(X = x) = (1/Z) exp(Σj wj fj(x))
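
A hedged sketch of the same tiny distribution rewritten in log-linear form; the single binary feature f (the two variables agree) and its weight w are illustrative choices, not taken from the paper.

    import itertools
    import math

    # Illustrative log-linear version of the tiny network above: one binary
    # feature f(x) = 1 if X1 == X2, with weight w.
    w = math.log(4.0)          # exp(w) = 4, matching the potential value above

    def f(state):
        x1, x2 = state
        return 1 if x1 == x2 else 0

    states = list(itertools.product([0, 1], repeat=2))
    Z = sum(math.exp(w * f(s)) for s in states)

    def prob(state):
        """P(X = x) = (1/Z) * exp(sum_j w_j * f_j(x)); a single feature here."""
        return math.exp(w * f(state)) / Z

    print(prob((1, 1)))  # again 0.4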

  5. Overview • Markov Networks • First-order Logic • Markov Logic Networks • Inference • Learning • Experiments

  6. First-order Logic • A first-order knowledge base is a set of sentences or formulas in first-order logic. • Formulas are constructed from symbols: connectives, quantifiers, constants, variables, functions, predicates, etc.

  7. Syntax for First-Order Logic • Connective → ∨ | ∧ | ⇒ | ⇔ • Quantifier → ∃ | ∀ • Constant → A | John | Car1 • Variable → x | y | z | ... • Predicate → Brother | Owns | ... • Function → father-of | plus | ...
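
As an illustration of how these symbols can be handled programmatically (an assumption of this review, not something in the paper), a ground atom can be encoded as a (predicate, arguments) tuple and a possible world as a mapping from ground atoms to truth values:

    # Illustrative encoding: a ground atom is a (predicate, arguments) tuple and a
    # possible world is a dict mapping ground atoms to True/False.
    world = {
        ("Smokes", ("Anna",)): True,
        ("Cancer", ("Anna",)): False,
    }

    def implies(p, q):
        # Material implication: p => q  is  (not p) or q.
        return (not p) or q

    # The ground formula Smokes(Anna) => Cancer(Anna), evaluated in this world.
    value = implies(world[("Smokes", ("Anna",))], world[("Cancer", ("Anna",))])
    print(value)  # False: Anna smokes but does not have cancer in this world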

  8. Overview • Markov Networks • First-order Logic • Markov Logic Networks • Inference • Learning • Experiments

  9. Markov Logic Networks • A Markov Logic Network (MLN) L is a set of pairs (Fi, wi), where • Fi is a formula in first-order logic • wi is a real number
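
A minimal sketch of this definition in Python, assuming formulas are represented as Python functions over a possible world; the two formulas are the smoker/friends formulas used in the example that follows, and the weights are illustrative values, not learned ones.

    # A Markov logic network L as a list of (formula, weight) pairs (sketch).

    def smoking_causes_cancer(world, x):
        # Smokes(x) => Cancer(x)
        return (not world[("Smokes", (x,))]) or world[("Cancer", (x,))]

    def friends_smoke_alike(world, x, y):
        # Friends(x, y) => (Smokes(x) <=> Smokes(y))
        return (not world[("Friends", (x, y))]) or (
            world[("Smokes", (x,))] == world[("Smokes", (y,))])

    L = [
        (smoking_causes_cancer, 1.5),   # illustrative weight
        (friends_smoke_alike, 1.1),     # illustrative weight
    ]
    print([w for _, w in L])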

  10. Features of a Markov Logic Network • Together with a finite set of constants C, an MLN L defines a ground Markov network ML,C with: • For each possible grounding of each predicate in L, there is a binary node in ML,C. If the ground atom is true, the node is 1. Otherwise, 0. • For each possible grounding of each formula in L, there is a feature in ML,C with the weight wi of that formula. If the ground formula is true, the feature is 1. Otherwise, 0.
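
A small sketch of the first bullet (one binary node per ground atom), assuming the constants and predicates of the Anna/Bob example that follows; in the paper, the resulting ground network defines P(X = x) = (1/Z) exp(Σi wi ni(x)), where ni(x) is the number of true groundings of Fi in the world x.

    import itertools

    # Assumed constants and predicates, matching the Anna/Bob example below.
    constants = ["A", "B"]
    predicates = {"Smokes": 1, "Cancer": 1, "Friends": 2}   # name -> arity

    # One binary node per possible grounding of each predicate.
    ground_atoms = [
        (name, args)
        for name, arity in predicates.items()
        for args in itertools.product(constants, repeat=arity)
    ]
    print(ground_atoms)   # 2 Smokes + 2 Cancer + 4 Friends = 8 ground atoms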

  11. Ground Term • A ground term is a term containing no variables. • Ground Markov networks built from the same MLN share regularities in structure and parameters: all groundings of the same formula have the same weight. • An MLN is a template for ground Markov networks.

  12. Example of an MLN • Suppose we have two constants: Anna (A) and Bob (B). • [Figure: ground atoms of the unary predicates as network nodes: Smokes(A), Smokes(B), Cancer(A), Cancer(B)]

  13. Example of an MLN • Suppose we have two constants: Anna (A) and Bob (B). • [Figure: ground atoms of the binary predicate added as nodes: Friends(A,B), Friends(A,A), Friends(B,B), Friends(B,A)]

  14. Example of an MLN • Suppose we have two constants: Anna (A) and Bob (B). • [Figure: the full ground Markov network over the nodes Friends(A,B), Friends(A,A), Smokes(A), Smokes(B), Friends(B,B), Cancer(A), Cancer(B), Friends(B,A)]
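
To tie the example together, here is a hedged Python sketch (not from the presentation) that grounds the paper's two example formulas, Smokes(x) ⇒ Cancer(x) and Friends(x,y) ⇒ (Smokes(x) ⇔ Smokes(y)), over the constants A and B and evaluates the unnormalized weight exp(Σi wi ni(x)) of one possible world; the world and the weights are illustrative.

    import itertools
    import math

    constants = ["A", "B"]

    # One illustrative possible world: truth values for all 8 ground atoms.
    world = {("Smokes", ("A",)): True,  ("Smokes", ("B",)): False,
             ("Cancer", ("A",)): True,  ("Cancer", ("B",)): False}
    for x, y in itertools.product(constants, repeat=2):
        world[("Friends", (x, y))] = (x == y)   # only self-friendship here

    def f1(x):
        # Ground formula: Smokes(x) => Cancer(x)
        return (not world[("Smokes", (x,))]) or world[("Cancer", (x,))]

    def f2(x, y):
        # Ground formula: Friends(x, y) => (Smokes(x) <=> Smokes(y))
        return (not world[("Friends", (x, y))]) or (
            world[("Smokes", (x,))] == world[("Smokes", (y,))])

    # n_i(x): number of true groundings of each formula in this world.
    n1 = sum(f1(x) for x in constants)
    n2 = sum(f2(x, y) for x, y in itertools.product(constants, repeat=2))

    w1, w2 = 1.5, 1.1                                 # illustrative weights
    print(n1, n2, math.exp(w1 * n1 + w2 * n2))        # unnormalized world weight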

  15. MLNs and First-Order Logic • First-order KB → assign a weight to each formula → MLN. • Satisfiable KB + equal, infinite weights on each formula → the MLN represents a uniform distribution over the worlds that satisfy the KB. • An MLN can produce useful results even when the KB contains contradictions.

  16. Overview • Markov Networks • First-order Logic • Markov Logic Networks • Inference • Learning • Experiments

  17. Inference • Query: given that formula F1 holds, what is the probability that formula F2 holds? • Two steps (approximate): • Find the minimal subset of the ground network required to answer the query. • Run MCMC (Gibbs sampling) over that subnetwork: sample one ground atom at a time given its Markov blanket (the set of ground atoms that appear in some grounding of a formula with it).
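
A sketch of the first step under simplifying assumptions: query_atoms is the set of ground atoms appearing in the query formula, evidence holds the ground atoms whose truth values are given, and markov_blanket is a hypothetical helper returning the ground atoms that co-occur with an atom in some ground formula.

    from collections import deque

    def minimal_subnetwork(query_atoms, evidence, markov_blanket):
        """Collect the ground atoms needed to answer the query (sketch).

        query_atoms: iterable of ground atoms whose probability we want.
        evidence: set of ground atoms whose truth values are known.
        markov_blanket: hypothetical helper mapping a ground atom to the set of
            ground atoms that appear with it in some ground formula.
        """
        needed = set(query_atoms)
        frontier = deque(query_atoms)
        while frontier:
            atom = frontier.popleft()
            if atom in evidence:
                continue                 # known atoms need no further expansion
            for neighbor in markov_blanket(atom):
                if neighbor not in needed:
                    needed.add(neighbor)
                    frontier.append(neighbor)
        return needed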

  18. Inference • The probability of a ground atom Xl when its Markov blanket Bl is in state bl is: P(Xl = xl | Bl = bl) = exp(Σ fi∈Fl wi fi(Xl = xl, Bl = bl)) / [ exp(Σ fi∈Fl wi fi(Xl = 0, Bl = bl)) + exp(Σ fi∈Fl wi fi(Xl = 1, Bl = bl)) ] • Fl is the set of ground formulas containing Xl, and fi(Xl = xl, Bl = bl) is the value (0 or 1) of the feature of the ith ground formula under those truth assignments.
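
A hedged sketch of the Gibbs update this equation describes; ground_formulas is an assumed list of (feature, weight) pairs for the ground formulas containing the atom being resampled, where feature(world) returns 0 or 1.

    import math
    import random

    def gibbs_probability(ground_formulas, world, atom):
        """P(X_l = 1 | Markov blanket), following the equation above (sketch)."""
        original = world.get(atom)
        scores = {}
        for value in (0, 1):
            world[atom] = value
            # exp of the weighted sum of features of ground formulas containing X_l
            scores[value] = math.exp(sum(w * f(world) for f, w in ground_formulas))
        world[atom] = original            # leave the world unchanged
        return scores[1] / (scores[0] + scores[1])

    def gibbs_step(ground_formulas, world, atom):
        """Resample one ground atom given its Markov blanket."""
        p1 = gibbs_probability(ground_formulas, world, atom)
        world[atom] = 1 if random.random() < p1 else 0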

  19. Overview • Markov Networks • First-order Logic • Markov Logic Networks • Inference • Learning • Experiments

  20. Learning • Data is from a relational database. • Strategy: • Count the number of true groundings of each formula in the DB. • Maximize the pseudo-log-likelihood log P*w(X = x) = Σl log Pw(Xl = xl | MBx(Xl)) by gradient ascent, with gradient ∂/∂wi log P*w(X = x) = Σl [ ni(x) − Pw(Xl = 0 | MBx(Xl)) ni(x[Xl=0]) − Pw(Xl = 1 | MBx(Xl)) ni(x[Xl=1]) ] • ni(x[Xl=0]) is the number of true groundings of the ith formula when we force Xl = 0 and leave the remaining data unchanged, and similarly for ni(x[Xl=1]).
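
A sketch of this gradient with assumed helpers (not from the paper's code): count_true(i, world) returns ni(world), the number of true groundings of formula Fi, and conditional(atom, value, world) returns Pw(Xl = value | MBx(Xl)).

    def pll_gradient_wi(i, world, ground_atoms, count_true, conditional):
        """d/dw_i of the pseudo-log-likelihood (sketch with assumed helpers)."""
        grad = 0.0
        n_i = count_true(i, world)                 # true groundings in the data
        for atom in ground_atoms:
            original = world[atom]
            # n_i with X_l forced to 0, then forced to 1, data otherwise unchanged.
            world[atom] = 0
            n_i_0 = count_true(i, world)
            world[atom] = 1
            n_i_1 = count_true(i, world)
            world[atom] = original                 # restore the data
            grad += (n_i
                     - conditional(atom, 0, world) * n_i_0
                     - conditional(atom, 1, world) * n_i_1)
        return grad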

  21. Overview • Markov Networks • First-order Logic • Markov Logic Networks • Inference • Learning • Experiments

  22. Experiments • Hand-built knowledge base (KB) • Inductive logic programming (ILP): CLAUDIEN • Markov logic networks (MLNs) • Using the KB • Using CLAUDIEN • Using KB + CLAUDIEN • Bayesian network learner • Naïve Bayes

  23. Results

  24. Summary • Markov logic networks combine first-order logic and Markov networks • Syntax: first-order logic formulas with real-valued weights • Semantics: templates for ground Markov networks • Inference: minimal ground subnetwork + Gibbs sampling • Learning: pseudo-likelihood
