
Stats 241.3


Presentation Transcript

  1. Stats 241.3 Probability Theory Summary

  2. Probability

  3. Axioms of Probability A probability measure P is defined on S by defining, for each event E, a number P[E] with the following properties: • P[E] ≥ 0, for each E. • P[S] = 1. • P[A ∪ B] = P[A] + P[B] whenever A ∩ B = ∅.

  4. Finite uniform probability space Many examples fall into this category: • Finite number of outcomes • All outcomes are equally likely In this case P[E] = n(E)/n(S), so to handle problems we have to be able to count: count n(E) and n(S).
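A minimal sketch of the rule P[E] = n(E)/n(S), using two fair dice as a hypothetical example (the dice are not from the slides):

```python
# Finite uniform probability space: two fair dice, 36 equally likely outcomes.
S = [(i, j) for i in range(1, 7) for j in range(1, 7)]   # sample space, n(S) = 36
E = [(i, j) for (i, j) in S if i + j == 7]               # event: the sum is 7
prob = len(E) / len(S)                                   # P[E] = n(E) / n(S)
print(prob)  # 6/36, i.e. 1/6
```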

  5. Techniques for counting

  6. Basic Rule of counting Suppose we carry out k operations in sequence. Let n1 = the number of ways the first operation can be performed, and ni = the number of ways the ith operation can be performed once the first (i – 1) operations have been completed, i = 2, 3, … , k. Then N = n1n2 … nk = the number of ways the k operations can be performed in sequence.

  7. Basic Counting Formulae • Permutations: How many ways can you order n objects? n! • Permutations of size k (≤ n): How many ways can you choose k objects from n objects in a specific order? n!/(n – k)!

  8. Combinations of size k (≤ n): A combination of size k chosen from n objects is a subset of size k where the order of selection is irrelevant. How many ways can you choose a combination of size k from n objects (order of selection is irrelevant)? n!/(k!(n – k)!)

  9. Important Notes • In combinations ordering is irrelevant. Different orderings result in the same combination. • In permutations order is relevant. Different orderings result in different permutations.
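The counting formulae above can be checked with Python's standard-library `math.perm` and `math.comb` (the numbers 10 and 3 are just an illustration):

```python
import math

n, k = 10, 3
# Permutations of size k: n! / (n - k)!  (order matters)
print(math.perm(n, k))   # 720
# Combinations of size k: n! / (k! (n - k)!)  (order irrelevant)
print(math.comb(n, k))   # 120
# Each combination corresponds to k! different orderings:
assert math.perm(n, k) == math.comb(n, k) * math.factorial(k)
```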

  10. Rules of Probability

  11. The additive rule P[A ∪ B] = P[A] + P[B] – P[A ∩ B], and if A ∩ B = ∅ then P[A ∪ B] = P[A] + P[B]

  12. The additive rule for more than two events P[A1 ∪ A2 ∪ … ∪ Ak] = Σ P[Ai] – Σ P[Ai ∩ Aj] + Σ P[Ai ∩ Aj ∩ Al] – … , and if Ai ∩ Aj = ∅ for all i ≠ j, then P[A1 ∪ A2 ∪ … ∪ Ak] = P[A1] + P[A2] + … + P[Ak]

  13. The Rule for complements For any event E, P[Ē] = 1 – P[E]
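A quick sketch verifying the additive and complement rules by enumeration, using one fair die as a hypothetical example:

```python
# One fair die; events are subsets of the sample space S.
S = set(range(1, 7))
A = {2, 4, 6}          # "even"
B = {4, 5, 6}          # "at least 4"
P = lambda E: len(E) / len(S)   # uniform probability measure

# Additive rule: P[A ∪ B] = P[A] + P[B] - P[A ∩ B]
assert abs(P(A | B) - (P(A) + P(B) - P(A & B))) < 1e-12
# Rule for complements: P[not A] = 1 - P[A]
assert abs(P(S - A) - (1 - P(A))) < 1e-12
```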

  14. Conditional Probability, Independence and The Multiplicative Rule

  15. The conditional probability of A given B is defined to be: P[A|B] = P[A ∩ B]/P[B], provided P[B] ≠ 0

  16. The multiplicative rule of probability P[A ∩ B] = P[A] P[B|A] = P[B] P[A|B], and P[A ∩ B] = P[A] P[B] if A and B are independent. This is the definition of independence.

  17. The multiplicative rule for more than two events P[A1 ∩ A2 ∩ … ∩ Ak] = P[A1] P[A2|A1] P[A3|A1 ∩ A2] … P[Ak|A1 ∩ … ∩ Ak–1]

  18. Independence for more than 2 events

  19. Definition: The set of k events A1, A2, … , Ak are called mutually independent if: P[Ai1 ∩ Ai2 ∩ … ∩ Aim] = P[Ai1] P[Ai2] … P[Aim] for every subset {i1, i2, … , im} of {1, 2, …, k}. i.e. for k = 3, A1, A2, A3 are mutually independent if: P[A1 ∩ A2] = P[A1] P[A2], P[A1 ∩ A3] = P[A1] P[A3], P[A2 ∩ A3] = P[A2] P[A3], P[A1 ∩ A2 ∩ A3] = P[A1] P[A2] P[A3]

  20. Definition: The set of k events A1, A2, … , Ak are called pairwise independent if: P[Ai ∩ Aj] = P[Ai] P[Aj] for all i and j. i.e. for k = 3, A1, A2, A3 are pairwise independent if: P[A1 ∩ A2] = P[A1] P[A2], P[A1 ∩ A3] = P[A1] P[A3], P[A2 ∩ A3] = P[A2] P[A3]. It is not necessarily true that P[A1 ∩ A2 ∩ A3] = P[A1] P[A2] P[A3]
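The classic counterexample (two fair coins, not taken from the slides) can be checked by enumeration: the three events below are pairwise independent but not mutually independent.

```python
# Two fair coins: 4 equally likely outcomes.
S = [(a, b) for a in "HT" for b in "HT"]
P = lambda E: sum(1 for s in S if E(s)) / len(S)

A1 = lambda s: s[0] == "H"          # first coin is heads
A2 = lambda s: s[1] == "H"          # second coin is heads
A3 = lambda s: s[0] == s[1]         # the two coins agree
both = lambda E, F: (lambda s: E(s) and F(s))

# Pairwise independent: every pair multiplies.
assert P(both(A1, A2)) == P(A1) * P(A2)
assert P(both(A1, A3)) == P(A1) * P(A3)
assert P(both(A2, A3)) == P(A2) * P(A3)

# ...but not mutually independent: the triple intersection is {HH},
# so P = 1/4, while P[A1] P[A2] P[A3] = 1/8.
triple = lambda s: A1(s) and A2(s) and A3(s)
assert P(triple) != P(A1) * P(A2) * P(A3)
```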

  21. Bayes Rule for probability P[A|B] = P[B|A] P[A] / (P[B|A] P[A] + P[B|Ā] P[Ā])

  22. A generalization of Bayes Rule Let A1, A2, …, Ak denote a set of events such that Ai ∩ Aj = ∅ for all i ≠ j and A1 ∪ A2 ∪ … ∪ Ak = S. Then P[Ai|B] = P[B|Ai] P[Ai] / (P[B|A1] P[A1] + P[B|A2] P[A2] + … + P[B|Ak] P[Ak])
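A numeric sketch of the generalized Bayes Rule with made-up numbers (the three machines and their rates are hypothetical, not from the slides): three machines produce 50%, 30% and 20% of all items, with defect rates 1%, 2% and 3%; given a defective item B, which machine produced it?

```python
prior = [0.5, 0.3, 0.2]        # P[A_i]: share of production from each machine
defect = [0.01, 0.02, 0.03]    # P[B | A_i]: defect rate of each machine

# Denominator: P[B] by the law of total probability.
total = sum(p * d for p, d in zip(prior, defect))

# Bayes Rule: P[A_i | B] = P[B | A_i] P[A_i] / P[B]
posterior = [p * d / total for p, d in zip(prior, defect)]
print(posterior)   # proportional to [0.005, 0.006, 0.006]
```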

  23. Random Variables an important concept in probability

  24. A random variable, X, is a numerical quantity whose value is determined by a random experiment.

  25. Definition – The probability function, p(x), of a random variable, X. For any random variable, X, and any real number, x, we define p(x) = P[X = x], where {X = x} = the set of all outcomes (event) with X = x. For continuous random variables p(x) = 0 for all values of x.

  26. Definition – The cumulative distribution function, F(x), of a random variable, X. For any random variable, X, and any real number, x, we define F(x) = P[X ≤ x], where {X ≤ x} = the set of all outcomes (event) with X ≤ x.

  27. Discrete Random Variables For a discrete random variable X the probability distribution is described by the probability function p(x), which has the following properties: • 0 ≤ p(x) ≤ 1 • Σ p(x) = 1 (summed over all values x) • P[a ≤ X ≤ b] = Σ p(x) (summed over a ≤ x ≤ b)

  28. Graph: Discrete Random Variable [figure: plot of the probability function p(x), with P[a ≤ X ≤ b] shown as the bars between a and b]

  29. Continuous random variables For a continuous random variable X the probability distribution is described by the probability density function f(x), which has the following properties: • f(x) ≥ 0 • the total area under f(x) is 1 • P[a ≤ X ≤ b] = the area under f(x) between a and b

  30. Graph: Continuous Random Variable, probability density function f(x)

  31. The distribution function F(x) This is defined for any random variable, X. F(x) = P[X ≤ x] Properties • F(–∞) = 0 and F(∞) = 1. • F(x) is non-decreasing (i.e. if x1 < x2 then F(x1) ≤ F(x2)). • F(b) – F(a) = P[a < X ≤ b].

  32. p(x) = P[X = x] = F(x) – F(x–), where F(x–) is the limit of F from the left at x. • If p(x) = 0 for all x (i.e. X is continuous) then F(x) is continuous.

  33. For Discrete Random Variables F(x) is a non-decreasing step function with F(x) = Σ p(u) (summed over u ≤ x); the jump at each value x has height p(x).

  34. For Continuous Random Variables F(x) is a non-decreasing continuous function with F(x) = the area under f up to x. To find the probability density function, f(x), one first finds F(x); then f(x) = F′(x), the slope of F(x) at x.
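The "f(x) = slope of F(x)" relationship can be checked numerically. As a stand-in example (not from the slides), take the exponential distribution, whose CDF F(x) = 1 – e^(–x) and density f(x) = e^(–x) are known in closed form:

```python
import math

F = lambda x: 1 - math.exp(-x)   # distribution function
f = lambda x: math.exp(-x)       # true density: f(x) = F'(x)

# Central-difference estimate of the slope of F at x = 1.
h = 1e-6
numeric_slope = (F(1 + h) - F(1 - h)) / (2 * h)
assert abs(numeric_slope - f(1)) < 1e-6   # slope of F matches the density
```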

  35. Some Important Discrete distributions

  36. The Bernoulli distribution

  37. Suppose that we have an experiment that has two outcomes: • Success (S) • Failure (F) These terms are used in reliability testing. Suppose that p is the probability of success (S) and q = 1 – p is the probability of failure (F). This experiment is sometimes called a Bernoulli Trial. Let X = 1 if the outcome is S and X = 0 if the outcome is F. Then P[X = 1] = p and P[X = 0] = q.

  38. The probability distribution with probability function p(x) = p^x q^(1 – x), x = 0, 1 is called the Bernoulli distribution.

  39. The Binomial distribution

  40. We observe a Bernoulli trial (S,F) n times. Let X denote the number of successes in the n trials. Then X has a binomial distribution, i.e. p(x) = P[X = x] = C(n, x) p^x q^(n – x), x = 0, 1, …, n, where • p = the probability of success (S), and • q = 1 – p = the probability of failure (F)
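A short sketch of the binomial probability function (the choices n = 10, p = 0.3 are illustrative only):

```python
import math

# Binomial pmf: p(x) = C(n, x) p^x q^(n - x)
def binom_pmf(x, n, p):
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

n, p = 10, 0.3
probs = [binom_pmf(x, n, p) for x in range(n + 1)]
assert abs(sum(probs) - 1) < 1e-12   # probabilities over x = 0..n sum to 1
print(binom_pmf(3, n, p))            # probability of exactly 3 successes
```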

  41. The Poisson distribution • Suppose events are occurring randomly and uniformly in time. • Let X be the number of events occurring in a fixed period of time. Then X will have a Poisson distribution with parameter λ: p(x) = λ^x e^(–λ) / x!, x = 0, 1, 2, …
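The Poisson probability function in code (λ = 2 is an arbitrary choice for illustration):

```python
import math

# Poisson pmf with parameter lam (λ): p(x) = λ^x e^(-λ) / x!
def poisson_pmf(x, lam):
    return lam**x * math.exp(-lam) / math.factorial(x)

lam = 2.0
# The probabilities sum to 1 over x = 0, 1, 2, ... (truncated here at 100):
assert abs(sum(poisson_pmf(x, lam) for x in range(100)) - 1) < 1e-12
print(poisson_pmf(0, lam))   # probability of no events: e^(-λ)
```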

  42. The Geometric distribution Suppose a Bernoulli trial (S,F) is repeated until a success occurs. Let X = the trial on which the first success (S) occurs. The probability function of X is: p(x) = P[X = x] = (1 – p)^(x – 1) p = p q^(x – 1), x = 1, 2, 3, …

  43. The Negative Binomial distribution Suppose a Bernoulli trial (S,F) is repeated until k successes occur. Let X = the trial on which the kth success (S) occurs. The probability function of X is: p(x) = C(x – 1, k – 1) p^k q^(x – k), x = k, k + 1, k + 2, …
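A sketch of the negative binomial probability function; setting k = 1 recovers the geometric distribution of the previous slide (the value p = 0.4 is illustrative):

```python
import math

# Negative binomial pmf: the k-th success occurs on trial x,
# p(x) = C(x-1, k-1) p^k q^(x-k), for x = k, k+1, ...
def neg_binom_pmf(x, k, p):
    return math.comb(x - 1, k - 1) * p**k * (1 - p)**(x - k)

p = 0.4
# k = 1 recovers the geometric distribution p(x) = p q^(x-1):
assert neg_binom_pmf(5, 1, p) == p * (1 - p)**4
# Probabilities sum to 1 (infinite sum truncated at x = 200):
assert abs(sum(neg_binom_pmf(x, 3, p) for x in range(3, 200)) - 1) < 1e-9
```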

  44. The Hypergeometric distribution Suppose we have a population containing N objects. Suppose the elements of the population are partitioned into two groups. Let a = the number of elements in group A and let b = the number of elements in the other group (group B). Note N = a + b. Now suppose that n elements are selected from the population at random. Let X denote the number of selected elements from group A. The probability distribution of X is p(x) = C(a, x) C(b, n – x) / C(N, n)
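The hypergeometric probability function in code; the population sizes (a = 5, b = 7, n = 4 draws) are made up for illustration:

```python
import math

# Hypergeometric pmf: N = a + b objects, n drawn at random without
# replacement, X = number drawn from group A:
# p(x) = C(a, x) C(b, n-x) / C(N, n)
def hypergeom_pmf(x, a, b, n):
    return math.comb(a, x) * math.comb(b, n - x) / math.comb(a + b, n)

a, b, n = 5, 7, 4   # e.g. 5 objects in group A, 7 in group B, draw 4
probs = [hypergeom_pmf(x, a, b, n) for x in range(0, min(a, n) + 1)]
assert abs(sum(probs) - 1) < 1e-12   # probabilities sum to 1
```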

  45. Continuous Distributions

  46. Continuous random variables For a continuous random variable X the probability distribution is described by the probability density function f(x), which has the following properties: • f(x) ≥ 0 • the total area under f(x) is 1 • P[a ≤ X ≤ b] = the area under f(x) between a and b

  47. Graph: Continuous Random Variable, probability density function f(x)

  48. The Uniform distribution from a to b f(x) = 1/(b – a) for a ≤ x ≤ b, and f(x) = 0 otherwise.

  49. The Normal distribution (mean μ, standard deviation σ) f(x) = (1/(σ√(2π))) e^(–(x – μ)²/(2σ²))
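A sketch of the normal density and distribution function, using the standard library's `math.erf` for the CDF (which has no closed form); the parameters μ = 10, σ = 2 are arbitrary:

```python
import math

# Normal density: f(x) = 1/(sigma*sqrt(2*pi)) * exp(-(x-mu)^2 / (2*sigma^2))
def normal_pdf(x, mu, sigma):
    z = (x - mu) / sigma
    return math.exp(-z * z / 2) / (sigma * math.sqrt(2 * math.pi))

# CDF via the error function: F(x) = (1 + erf((x-mu)/(sigma*sqrt(2)))) / 2
def normal_cdf(x, mu, sigma):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# About 68% of the probability lies within one sigma of the mean:
mu, sigma = 10.0, 2.0
p = normal_cdf(mu + sigma, mu, sigma) - normal_cdf(mu - sigma, mu, sigma)
assert abs(p - 0.6827) < 1e-3
```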
