
Data Mining with Naïve Bayesian Methods


Presentation Transcript


  1. Data Mining with Naïve Bayesian Methods Instructor: Qiang Yang Hong Kong University of Science and Technology Qyang@cs.ust.hk Thanks: Dan Weld, Eibe Frank

  2. How to Predict? • Given a new day’s attribute values, we wish to predict the decision (e.g. play = yes or no) • Applications • Text analysis • Spam email classification • Gene analysis • Network intrusion detection

  3. Naïve Bayesian Models • Two assumptions: Attributes are • equally important • statistically independent (given the class value) • This means that knowledge about the value of a particular attribute doesn’t tell us anything about the value of another attribute (if the class is known) • Although based on assumptions that are almost never correct, this scheme works well in practice!

  4. Why Naïve? • Assume the attributes are independent, given the class • What does that mean? • [Diagram: a naïve Bayes network with class node play pointing to attribute nodes outlook, temp, humidity and windy] • Pr(outlook=sunny | windy=true, play=yes) = Pr(outlook=sunny | play=yes)

  5. Weather data set (the 14-day data set used throughout these slides; it ships with Weka as weather.nominal.arff)

  Outlook   Temp  Humidity  Windy  Play
  sunny     hot   high      false  no
  sunny     hot   high      true   no
  overcast  hot   high      false  yes
  rainy     mild  high      false  yes
  rainy     cool  normal    false  yes
  rainy     cool  normal    true   no
  overcast  cool  normal    true   yes
  sunny     mild  high      false  no
  sunny     cool  normal    false  yes
  rainy     mild  normal    false  yes
  sunny     mild  normal    true   yes
  overcast  mild  high      true   yes
  overcast  hot   normal    false  yes
  rainy     mild  high      true   no

  6. Is the assumption satisfied? • Counts from the training data: • #(play=yes) = 9 • #(sunny, yes) = 2 • #(windy=true, yes) = 3 • #(sunny, windy=true, yes) = 1 • So: Pr(outlook=sunny | windy=true, play=yes) = 1/3, but Pr(outlook=sunny | play=yes) = 2/9 • Likewise: Pr(windy=true | outlook=sunny, play=yes) = 1/2, but Pr(windy=true | play=yes) = 3/9 • Thus the assumption is NOT satisfied • But we can tolerate some error (see later slides)
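
These counts can be reproduced directly from the table above. Below is a minimal Python sketch (the variable names and the inlined data literal are ours, not the slides’):

```python
# Minimal sketch: recompute the two probabilities from the 14-day table.
ROWS = [line.split() for line in """\
sunny hot high false no
sunny hot high true no
overcast hot high false yes
rainy mild high false yes
rainy cool normal false yes
rainy cool normal true no
overcast cool normal true yes
sunny mild high false no
sunny cool normal false yes
rainy mild normal false yes
sunny mild normal true yes
overcast mild high true yes
overcast hot normal false yes
rainy mild high true no""".splitlines()]

yes_days = [r for r in ROWS if r[4] == "yes"]        # the 9 play=yes days
windy_yes = [r for r in yes_days if r[3] == "true"]  # 3 of them are windy

# 1/3 vs 2/9: the two values differ, so independence does not hold exactly.
print(sum(r[0] == "sunny" for r in windy_yes) / len(windy_yes))  # 0.333...
print(sum(r[0] == "sunny" for r in yes_days) / len(yes_days))    # 0.222...
```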

  7. Probabilities for the weather data • Relative frequencies from the 14 training days (play=yes | play=no):

  Outlook:   sunny 2/9 | 3/5,  overcast 4/9 | 0/5,  rainy 3/9 | 2/5
  Temp:      hot 2/9 | 2/5,    mild 4/9 | 2/5,      cool 3/9 | 1/5
  Humidity:  high 3/9 | 4/5,   normal 6/9 | 1/5
  Windy:     false 6/9 | 2/5,  true 3/9 | 3/5
  Play:      yes 9/14,  no 5/14

  • A new day: outlook=sunny, temp=cool, humidity=high, windy=true, play=?

  8. Bayes’ rule • Probability of event H given evidence E: Pr(H | E) = Pr(E | H) Pr(H) / Pr(E) • A priori probability of H: Pr(H) • Probability of the event before evidence has been seen • A posteriori probability of H: Pr(H | E) • Probability of the event after evidence has been seen

  9. Naïve Bayes for classification • Classification learning: what’s the probability of the class given an instance? • Evidence E = an instance • Event H = class value for the instance (Play=yes, Play=no) • Naïve Bayes assumption: the evidence can be split into independent parts (i.e. the attributes of the instance are independent given the class), so Pr(H | E) = Pr(E1 | H) × Pr(E2 | H) × … × Pr(En | H) × Pr(H) / Pr(E)

  10. The weather data example • Evidence E: outlook=sunny, temp=cool, humidity=high, windy=true • Probability for class “yes”: Pr(yes | E) = Pr(outlook=sunny | yes) × Pr(temp=cool | yes) × Pr(humidity=high | yes) × Pr(windy=true | yes) × Pr(yes) / Pr(E) = (2/9 × 3/9 × 3/9 × 3/9 × 9/14) / Pr(E)
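
As a sanity check, the posterior above can be reproduced in code. Below is a minimal Python sketch (the dictionary layout and names such as score and new_day are ours, not from the slides) that encodes the slide-7 tables and normalises the two class scores:

```python
# Minimal sketch: encode the slide-7 tables and normalise the class scores.
PRIOR = {"yes": 9/14, "no": 5/14}
COND = {
    "yes": {"outlook":  {"sunny": 2/9, "overcast": 4/9, "rainy": 3/9},
            "temp":     {"hot": 2/9, "mild": 4/9, "cool": 3/9},
            "humidity": {"high": 3/9, "normal": 6/9},
            "windy":    {"false": 6/9, "true": 3/9}},
    "no":  {"outlook":  {"sunny": 3/5, "overcast": 0/5, "rainy": 2/5},
            "temp":     {"hot": 2/5, "mild": 2/5, "cool": 1/5},
            "humidity": {"high": 4/5, "normal": 1/5},
            "windy":    {"false": 2/5, "true": 3/5}},
}

def score(cls, instance):
    s = PRIOR[cls]                      # Pr(class)
    for attr, value in instance.items():
        s *= COND[cls][attr][value]     # times Pr(attribute value | class)
    return s

new_day = {"outlook": "sunny", "temp": "cool", "humidity": "high", "windy": "true"}
scores = {c: score(c, new_day) for c in ("yes", "no")}
total = sum(scores.values())            # stands in for the shared Pr(E)
print({c: round(s / total, 3) for c, s in scores.items()})
# -> {'yes': 0.205, 'no': 0.795}
```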

  11. The “zero-frequency problem” • What if an attribute value never occurs together with a class value in the training data (e.g. outlook=overcast never occurs with play=no)? • Its conditional probability will be zero! • The a posteriori probability will then also be zero, no matter how likely the other values are! • Remedy: add 1 to the count for every attribute value–class combination (Laplace estimator) • Result: probabilities are never zero (this also stabilizes the probability estimates)
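
A one-function sketch of the Laplace estimator described above (the helper name and signature are ours):

```python
# Laplace estimator: add 1 to the count and k to the denominator, where k is
# the number of values the attribute can take, so no estimate is exactly zero.
def laplace(count, n, k):
    return (count + 1) / (n + k)

# outlook=overcast given play=no: the raw estimate 0/5 becomes 1/8.
print(laplace(0, 5, 3))   # 0.125
```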

  12. Modified probability estimates • In some cases adding a constant different from 1 might be more appropriate • Example: attribute outlook for class yes (counts 2, 4 and 3 out of 9), with total weight μ: • Sunny: (2 + μp1) / (9 + μ) • Overcast: (4 + μp2) / (9 + μ) • Rainy: (3 + μp3) / (9 + μ) • The weights p1, p2, p3 don’t need to be equal, but they must sum to 1 (equal weights pi = 1/3 with μ = 3 give back the Laplace estimator)
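
The same idea as a small sketch of this weighted estimate, where mu is the total prior weight and p the per-value prior (our naming, not the slides’):

```python
# Weighted estimate: (count + mu * p) / (n + mu). With equal priors p = 1/k
# and mu = k this reduces to the Laplace estimator from the previous sketch.
def m_estimate(count, n, mu, p):
    return (count + mu * p) / (n + mu)

# outlook=sunny given play=yes, mu=3, equal priors: (2 + 1) / (9 + 3).
print(m_estimate(2, 9, 3.0, 1/3))   # 0.25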

  13. Missing values • Training: the instance is not included in the frequency count for that attribute value–class combination • Classification: the missing attribute is simply omitted from the calculation • Example: if outlook is missing for the new day (temp=cool, humidity=high, windy=true): Likelihood of “yes” = 3/9 × 3/9 × 3/9 × 9/14 ≈ 0.0238 • Likelihood of “no” = 1/5 × 4/5 × 3/5 × 5/14 ≈ 0.0343 • So Pr(yes) ≈ 41%, Pr(no) ≈ 59%
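
A two-line check of this example, dropping the outlook factor from the products used on the earlier slides:

```python
# Missing outlook: omit that attribute's factor from each class's product.
like_yes = (3/9) * (3/9) * (3/9) * (9/14)   # ~0.0238
like_no  = (1/5) * (4/5) * (3/5) * (5/14)   # ~0.0343
print(round(like_yes / (like_yes + like_no), 2))   # Pr(yes) ~0.41
```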

  14. Dealing with numeric attributes • Usual assumption: numeric attributes have a normal (Gaussian) probability distribution given the class • The probability density function of the normal distribution is defined by two parameters: • The sample mean μ: μ = (x1 + x2 + … + xn) / n • The standard deviation σ: σ = sqrt( Σi (xi − μ)² / (n − 1) ) • The density function: f(x) = 1 / (√(2π) σ) · e^( −(x − μ)² / (2σ²) )
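
A direct Python transcription of the three formulas above (the helper names are ours); the final line previews the example density value from the next slide:

```python
import math

def fit(values):
    # Sample mean and standard deviation (n - 1 in the variance denominator).
    mu = sum(values) / len(values)
    var = sum((v - mu) ** 2 for v in values) / (len(values) - 1)
    return mu, math.sqrt(var)

def density(x, mu, sigma):
    # Normal density f(x) for parameters mu and sigma.
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

print(round(density(66, 73, 6.2), 4))   # f(temperature=66 | yes) ~0.034
```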

  15. Statistics for the weather data • Per-class statistics for the numeric attributes: • Temperature: yes mean 73, std dev 6.2; no mean 74.6, std dev 7.9 • Humidity: yes mean 79.1, std dev 10.2; no mean 86.2, std dev 9.7 • Example density value: f(temperature=66 | yes) = 1/(√(2π)·6.2) · e^( −(66−73)² / (2·6.2²) ) ≈ 0.0340

  16. Classifying a new day • A new day: outlook=sunny, temperature=66, humidity=90, windy=true, play=? • Likelihood of “yes” = 2/9 × 0.0340 × 0.0221 × 3/9 × 9/14 ≈ 0.000036 • Likelihood of “no” = 3/5 × 0.0279 × 0.0381 × 3/5 × 5/14 ≈ 0.000137 • So Pr(yes) ≈ 21%, Pr(no) ≈ 79% • Missing values during training: not included in the calculation of the mean and standard deviation
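
Putting the pieces together for this day, as a sketch that assumes the per-class statistics listed on slide 15 (the density helper repeats the previous sketch):

```python
import math

def density(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

# Nominal factors use the slide-7 fractions; numeric factors use the slide-15
# per-class Gaussians: temperature yes (73, 6.2) / no (74.6, 7.9),
# humidity yes (79.1, 10.2) / no (86.2, 9.7).
like_yes = (2/9) * density(66, 73, 6.2) * density(90, 79.1, 10.2) * (3/9) * (9/14)
like_no  = (3/5) * density(66, 74.6, 7.9) * density(90, 86.2, 9.7) * (3/5) * (5/14)
print(round(like_yes / (like_yes + like_no), 2))   # Pr(yes) ~0.21
```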

  17. Probability densities • Relationship between probability and density: Pr[ c − ε/2 ≤ x ≤ c + ε/2 ] ≈ ε · f(c) • But this doesn’t change the calculation of the a posteriori probabilities, because the factor ε cancels out • Exact relationship: Pr[ a ≤ x ≤ b ] = ∫ f(t) dt, integrated from a to b

  18. Example of Naïve Bayes in Weka • Use Weka’s Naïve Bayes module (weka.classifiers.bayes.NaiveBayes) to classify weather.nominal.arff

  19. Discussion of Naïve Bayes • Naïve Bayes works surprisingly well (even when the independence assumption is clearly violated) • Why? Because classification doesn’t require accurate probability estimates, as long as the maximum probability is assigned to the correct class • However, adding too many redundant attributes will cause problems (e.g. identical attributes) • Note also: many numeric attributes are not normally distributed
