
Problems With Decision Criteria

Presentation Transcript


  1. Problems With Decision Criteria • Transparencies for chapter 2

  2. The Payoff Matrix • The simplest structure for a decision model consists of a set of possible courses of action, a list of possible outcomes that could occur, and a straightforward evaluation of each decision-outcome pair. It is formulated as follows:

  3. Payoff Matrix • aj : course of action j • θi : outcome (state of nature) i • yij : the value to the decision maker if action aj is taken and outcome θi occurs • The matrix has one column per action a1, a2, …, aj, …, am and one row per outcome θ1, θ2, …, θi, …, θn, with entry yij in row θi and column aj.

  4. Preptown Book Store • The manager of the bookstore at Preptown College must decide how many copies of the textbook for the course Creative Thinking to order. The maximum enrollment is 70. So far 50 students are enrolled, and this number could go up or down. The store makes $15 on every book sold. The course will not be repeated, and any unsold book will be disposed of at a $5 loss. The manager must decide how many books to order.

  5. Payoff Matrix • Considering only orders in units of 10. Rows are demand (enrollment) outcomes θi, columns are order quantities aj:

         Order:     0    10    20    30    40    50    60    70
         θ=0:       0   -50  -100  -150  -200  -250  -300  -350
         θ=10:      0   150   100    50     0   -50  -100  -150
         θ=20:      0   150   300   250   200   150   100    50
         θ=30:      0   150   300   450   400   350   300   250
         θ=40:      0   150   300   450   600   550   500   450
         θ=50:      0   150   300   450   600   750   700   650
         θ=60:      0   150   300   450   600   750   900   850
         θ=70:      0   150   300   450   600   750   900  1050
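The matrix can be generated directly from the $15-profit / $5-loss rule. Below is a minimal Python sketch (not from the text; the function name payoff is only illustrative):

      # Build the bookstore payoff matrix from the $15-profit / $5-loss rule.
      # Rows = demand (enrollment), columns = order quantity, both in steps of 10.
      def payoff(demand, order, profit=15, loss=5):
          sold = min(demand, order)
          unsold = max(order - demand, 0)
          return profit * sold - loss * unsold

      levels = range(0, 80, 10)                      # 0, 10, ..., 70
      matrix = [[payoff(d, a) for a in levels] for d in levels]

      for d, row in zip(levels, matrix):
          print(f"demand {d:2d}:", " ".join(f"{v:5d}" for v in row))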

  6. Non-stochastic Criteria • Outcome dominance • Option aj dominates ak if and only if yij ≥ yik for all i, and yij > yik for at least one i. • This criterion is useful for eliminating inferior options. It reduces the number of options and the complexity of the problem.
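A small sketch of the dominance test (illustrative Python, not from the text; the helper name dominates is an assumption), using the payoff columns from the example on the next slide:

      # a dominates b if it is at least as good in every outcome
      # and strictly better in at least one.
      def dominates(a, b):
          return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

      # Payoff columns from slide 7: entries are outcomes theta_1..theta_3.
      a1, a2, a3 = [6, 5, 7], [3, 4, 6], [8, 2, 3]
      print(dominates(a1, a2))   # True  -> a2 can be eliminated
      print(dominates(a1, a3))   # False -> neither dominates the other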

  7. Example for Outcome Dominance • Consider the following payoff matrix (rows are outcomes θ1–θ3, columns are actions a1–a3):

              a1   a2   a3
         θ1    6    3    8
         θ2    5    4    2
         θ3    7    6    3

         a1 dominates a2.

  8. Maximin Criterion • For each action aj, let yj* be the minimum of yij over all outcomes i (the worst payoff for aj). Action aq is optimal under the maximin criterion if and only if yq* is the maximum of all the yj*.

  9. Example for Maximin Criterion • Examine the payoff matrix for the bookstore problem. • The minimums for courses of action a1 through a8 are: 0, -50, -100, -150, -200, -250, -300, -350. • The maximum of these minimums is 0, so the optimal course of action is a1 (order no books).
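A brief Python sketch of the maximin computation on the bookstore matrix (the payoff rule comes from slide 4; the code itself is only illustrative):

      # Maximin: for each action take the worst payoff over outcomes,
      # then choose the action whose worst case is largest.
      def payoff(demand, order, profit=15, loss=5):
          return profit * min(demand, order) - loss * max(order - demand, 0)

      levels = list(range(0, 80, 10))
      columns = {a: [payoff(d, a) for d in levels] for a in levels}

      worst = {a: min(col) for a, col in columns.items()}
      best_order = max(worst, key=worst.get)
      print(worst)        # {0: 0, 10: -50, ..., 70: -350}
      print(best_order)   # 0 -> order nothing (a1)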

  10. Problem with Maximin • Looking only at the worst scenarios can be misleading, as in the following matrix (it is a very conservative criterion):

              a1       a2
         θ1      31      32
         θ2  10,000      33

         The optimum under maximin is a2, which is misleading: choosing a2 gains only 1 in the worst case but gives up 9,967 if θ2 occurs.

  11. Maximax Criterion • Action aq is optimal under the maximax criterion if and only if there exists a p such that ypq ≥ yij for all i and j.

  12. Example for Maximax Criterion • Consider the Preptown bookstore. The maximax criterion selects a8 (order 70), the action containing the single largest payoff. • The maximax criterion is risky and can result in huge losses.

  13. Example for Maximax • This example demonstrates that maximax is risky:

              a1        a2
         θ1    9        10
         θ2    8   -50,000

         Maximax will select a2.
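A minimal sketch of the maximax rule applied to this counterexample (illustrative Python, not from the text):

      # Maximax: choose the action with the largest best-case payoff.
      def maximax(columns):
          return max(columns, key=lambda a: max(columns[a]))

      # a2's best case (10) edges out a1's (9),
      # even though a2 risks a loss of 50,000.
      columns = {"a1": [9, 8], "a2": [10, -50_000]}
      print(maximax(columns))   # a2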

  14. Minimax Regret Criterion • This criterion was advocated by Savage. • Regret: rij = maxj(yij) – yij, where the maximum is taken over the actions for the fixed outcome θi.

         Payoff matrix        Regret matrix
              a1   a2              a1   a2
         θ1    8    9         θ1    1    0
         θ2   12   10         θ2    0    2

  15. Minimax Regret • aq is optimal in the minimax regret sense iff rq* ≤ rj* for all j, where rj* = maxi rij.

  16. Minimax Regret • To apply minimax regret, perform the following steps (a sketch follows below): • Compute the regret matrix. • For each aj, compute the maximum regret. • Select the aj with the minimum maximum regret.
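A short Python sketch of these steps, applied to the Savage example from slide 14 (the function names are illustrative, not from the text):

      # Minimax regret: build the regret matrix row by row, then pick the action
      # whose worst (largest) regret is smallest.
      def regret_matrix(payoffs):           # payoffs[i][j] = y_ij
          return [[max(row) - y for y in row] for row in payoffs]

      def minimax_regret(payoffs):
          regrets = regret_matrix(payoffs)
          worst = [max(r[j] for r in regrets) for j in range(len(payoffs[0]))]
          return worst.index(min(worst)), regrets

      # Rows are outcomes theta_1, theta_2; columns are a1, a2.
      best, regrets = minimax_regret([[8, 9], [12, 10]])
      print(regrets)   # [[1, 0], [0, 2]]
      print(best)      # 0 -> a1 (maximum regret 1 versus 2 for a2)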

  17. Minimax Regret • Minimax regret violates the coherence principle, as the following example demonstrates:

         Payoff matrix        Regret matrix
              a1   a2              a1   a2
         θ1    8    2         θ1    0    6
         θ2    0    4         θ2    4    0

         a1 is optimal using minimax regret (maximum regret 4 versus 6).

  18. Minimax Regret and Coherence (cont.)

         Payoff matrix             Regret matrix
              a1   a2   a3              a1   a2   a3
         θ1    8    2    1         θ1    0    6    7
         θ2    0    4    7         θ2    7    3    0

         a2 is optimal when a3 is added (maximum regrets 7, 6, 7). This is rank reversal: any criterion that reverses the ranking of existing alternatives when a new alternative is added is incoherent.
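The same kind of sketch reproduces the rank reversal: running minimax regret with and without a3 changes the winner (illustrative Python, not from the text):

      # Rank reversal: adding a third option changes the minimax-regret choice
      # between a1 and a2, which is the incoherence shown on this slide.
      def regret_matrix(payoffs):
          return [[max(row) - y for y in row] for row in payoffs]

      def minimax_regret(payoffs):
          regrets = regret_matrix(payoffs)
          worst = [max(r[j] for r in regrets) for j in range(len(payoffs[0]))]
          return worst.index(min(worst))

      print(minimax_regret([[8, 2], [0, 4]]))        # 0 -> a1 wins with two options
      print(minimax_regret([[8, 2, 1], [0, 4, 7]]))  # 1 -> a2 wins once a3 is added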

  19. Stochastic Criteria • The previous criteria do not take into account the relative chances of the outcomes occurring. • Use the concept of probability. How do we get the probabilities?

  20. Probability Distributions • Review Probability • Expected Value • Mode • Variance • Possible application

  21. Modal Outcome • aq is optimal in the modal outcome sense iff there exists a p such that P(θp) ≥ P(θi) for all i and ypq ≥ ypj for all j.

  22. Modal Outcome • This criterion looks at the most likely outcome and then selects the course of action that has the maximum payoff with respect to that outcome.

              p(θ)   a1   a2   a3
         θ1    0.2   18   15   19
         θ2    0.7   20   22   19
         θ3    0.1   40   30   20

         The modal outcome is θ2 (p = 0.7), and a2 has the largest payoff (22) for that outcome, so a2 is selected.
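A minimal sketch of the modal outcome rule on this table (illustrative Python; the probabilities 0.2, 0.7, 0.1 are taken to match the expected value example on slide 27):

      # Modal outcome: find the most probable outcome, then pick the action
      # with the largest payoff in that single row.
      def modal_outcome(probs, payoffs):          # payoffs[i][j] = y_ij
          p = probs.index(max(probs))             # most likely outcome
          row = payoffs[p]
          return row.index(max(row))

      probs = [0.2, 0.7, 0.1]
      payoffs = [[18, 15, 19], [20, 22, 19], [40, 30, 20]]
      print(modal_outcome(probs, payoffs))   # 1 -> a2 (payoff 22 under the modal outcome)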

  23. Modal Outcome • The following counterexample shows that the modal criterion could be misleading:

              p(θ)   a1   a2   a3
         θ1   0.24    0   99   98
         θ2   0.25    0   99   99
         θ3   0.51   21   20   20

         The modal outcome is θ3, so a1 (payoff 21) is selected, even though a2 yields 99 with probability 0.49 and gives up only 1 under θ3.

  24. Modal Outcome • Lindley presented an example to show that the modal outcome criterion is incoherent:

              p(θ)   a1   a2
         θ1    2/9    5    3
         θ2    3/9    5    3
         θ3    4/9    8    9

         The modal outcome is θ3, so a2 is selected.

  25. Modal Outcome • If outcomes θ1 and θ2 are combined into a single outcome, the same example now selects a1. Justify?

              p(θ)    a1   a2
         θ1&2  5/9     5    3
         θ3    4/9     8    9

         The modal outcome is now θ1&2 (p = 5/9), and a1 has the larger payoff (5) for it, so a1 is selected: merely regrouping the outcomes reverses the choice, which is incoherent.

  26. Expected Value • EV(aj) = Σi yij P(θi) • Action aq is optimal in the expected value sense iff EV(aq) ≥ EV(aj) for all j.

  27. Expected Value Example •

              p(θ)   a1     a2     a3
         θ1    0.2   18     15     19
         θ2    0.7   20     22     19
         θ3    0.1   40     30     20

         EV(aj)      21.6   21.4   19.1

         a1 is optimal in the expected value sense.
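A short sketch of the expected value computation for this table (illustrative Python, not from the text):

      # Expected value of each action: EV(a_j) = sum_i y_ij * P(theta_i).
      probs = [0.2, 0.7, 0.1]
      payoffs = [[18, 15, 19], [20, 22, 19], [40, 30, 20]]   # rows = outcomes

      n_actions = len(payoffs[0])
      ev = [sum(p * row[j] for p, row in zip(probs, payoffs)) for j in range(n_actions)]
      print([round(v, 1) for v in ev])   # [21.6, 21.4, 19.1]
      print(ev.index(max(ev)))           # 0 -> a1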

  28. Expected Regret • Compute the regret matrix. • Compute the expected regret of each option. • Select the option that minimizes expected regret. • The option that maximizes expected value is the same as the one that minimizes expected regret (proof on page 29 of the text).

  29. Expected Regret Example •

              p(θ)   a1    a2    a3
         θ1    0.2    1     4     0
         θ2    0.7    2     0     3
         θ3    0.1    0    10    20

         ER(aj)      1.6   1.8   4.1

         a1 is optimal in the sense of minimizing expected regret.
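A sketch of the expected regret computation, together with a check that EV(aj) + ER(aj) is the same constant for every action, which is why the two criteria agree (illustrative Python, not from the text):

      # Expected regret, plus a check that EV + ER is constant across actions.
      probs = [0.2, 0.7, 0.1]
      payoffs = [[18, 15, 19], [20, 22, 19], [40, 30, 20]]
      regrets = [[max(row) - y for y in row] for row in payoffs]

      m = len(payoffs[0])
      ev = [sum(p * row[j] for p, row in zip(probs, payoffs)) for j in range(m)]
      er = [sum(p * row[j] for p, row in zip(probs, regrets)) for j in range(m)]
      print([round(v, 1) for v in er])                    # [1.6, 1.8, 4.1]
      print([round(a + b, 6) for a, b in zip(ev, er)])    # [23.2, 23.2, 23.2]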

  30. Modal Outcome • This criterion looks at the most likely outcome and then selects the course of action that has the maximum payoff with respect to that outcome.

              p(θ)   a1   a2   a3
         θ1    0.2   18   15   19
         θ2    0.7   20   22   19
         θ3    0.1   40   30   20

  31. Payoff Distribution Analysis • Payoff matrix versus payoff distribution. • For each action there is a payoff for each state of nature (outcome). Denote the payoff vector for aj by Yj and its probability distribution by pj(·). The payoff vector together with its probability distribution is called the payoff distribution.

  32. Payoff Formulation of the Decision Problem • The decision problem can be formulated as follows: given a set of payoff distributions Yj (the options), select the best payoff distribution among them. Best in what sense? • In the expected value sense, the median payoff sense, the modal payoff sense, etc.

  33. Modal Payoff Versus Modal Outcome •

              p(θ)   a1   a2
         θ1    0.3   10   20
         θ2    0.3   10   20
         θ3    0.4   15   10

         This example shows that the modal payoff is not the same as the modal outcome: the modal outcome is θ3, under which a1 (15) is best, while the modal payoff of a1 is 10 and that of a2 is 20 (each occurring with probability 0.6), so the modal payoff criterion prefers a2. See page 31 of Bunn's book, Applied Decision Analysis.
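A small sketch contrasting the two criteria on this table (illustrative Python; the helper modal_payoff is not from the text):

      # Modal payoff vs modal outcome on the table above.
      from collections import defaultdict

      probs = [0.3, 0.3, 0.4]
      payoffs = [[10, 20], [10, 20], [15, 10]]     # rows = outcomes, columns = a1, a2

      # Modal outcome: most likely state, then the best payoff in that row.
      p = probs.index(max(probs))
      modal_outcome_choice = payoffs[p].index(max(payoffs[p]))

      # Modal payoff: for each action, the payoff value with the largest probability.
      def modal_payoff(column):
          dist = defaultdict(float)
          for prob, y in column:
              dist[y] += prob
          return max(dist, key=dist.get)

      modal_payoffs = [modal_payoff([(pr, row[j]) for pr, row in zip(probs, payoffs)])
                       for j in range(2)]
      print(modal_outcome_choice)   # 0 -> modal outcome picks a1 (15 under theta_3)
      print(modal_payoffs)          # [10, 20] -> modal payoff prefers a2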

  34. Risk Analysis • Risk is the likelihood of large losses; it is the probability of an undesirable event occurring. • In financial analysis, risk is taken to be the dispersion (variance) of the payoff distribution. • In the insurance industry, risk refers to the maximum amount of money that can be lost under a particular policy, or to the expected value of a detrimental proposal.

  35. Measures of Risk • Variance: focuses on the overall dispersion of the payoff distribution. • Semi-variance: focuses only on payoffs below a certain target value c: SV(c) = Σ (c – y)² p(y), summed over all y ≤ c.

  36. Measures of Risk • Critical probability: same idea as the semi-variance, but risk is measured purely in terms of probability: P(Y ≤ c) = F(c). • Why do we need all these measures?

  37. Generalization of Risk Measures • Fishburn generalizes the last two measures of risk by making the power α a parameter: Rα(c) = Σ (c – y)^α p(y), summed over all y ≤ c. If α = 2 it becomes the semi-variance; if α = 0 it becomes the critical probability; and so on. The critical probability is the more applicable measure in practice.
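A minimal sketch of these risk measures, assuming the below-target form of the Fishburn family given above; the payoff distribution used here is made up purely for illustration:

      # Risk measures for one payoff distribution (values y with probabilities p):
      # variance, below-target semi-variance, critical probability, and the
      # Fishburn family that contains the last two as alpha = 2 and alpha = 0.
      def variance(values, probs):
          mean = sum(p * y for p, y in zip(probs, values))
          return sum(p * (y - mean) ** 2 for p, y in zip(probs, values))

      def fishburn(values, probs, c, alpha):
          return sum(p * (c - y) ** alpha for p, y in zip(probs, values) if y <= c)

      values, probs, c = [-50, 0, 100, 150], [0.1, 0.2, 0.4, 0.3], 0.0
      print(variance(values, probs))
      print(fishburn(values, probs, c, alpha=2))   # semi-variance below c
      print(fishburn(values, probs, c, alpha=0))   # critical probability P(Y <= c)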

  38. Mean-Variance Dominance • Option aj dominates option ai iff E(aj) ≥ E(ai) and V(aj) ≤ V(ai), with at least one of the two holding strictly.

         Option j   E(aj)   V(aj)
            1         7      1
            2         8      2
            3         9      2
            4         7      1.5
            5        10      3

         Option 3 dominates option 2, and option 1 dominates option 4. The efficient set is ES = {options 1, 3, and 5}.
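A short sketch that recovers the efficient set from the table above (illustrative Python; the helper mv_dominates is not from the text):

      # Mean-variance dominance: keep only the options that no other option
      # dominates (higher-or-equal mean, lower-or-equal variance, one strict).
      def mv_dominates(a, b):                      # a, b are (mean, variance) pairs
          return a[0] >= b[0] and a[1] <= b[1] and a != b

      options = {1: (7, 1), 2: (8, 2), 3: (9, 2), 4: (7, 1.5), 5: (10, 3)}
      efficient = [j for j, mv in options.items()
                   if not any(mv_dominates(other, mv) for other in options.values())]
      print(efficient)   # [1, 3, 5]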
