Different Decision-Making Rules (Can Change the Outcome) Frank Tsui, 2012
Picking the "Best" or the "Worst"
• Given a list of projects, employees, political candidates, students, professors, etc., we are often asked to pick the "best" or the "worst," e.g.:
• For "promotion-layoff"
• For "award-punishment"
• How might one go about doing this in some systematic way?
Possible Process
• Consider the set of attributes one thinks is important for evaluation.
• Evaluate each alternative (e.g., each employee) on the list against the chosen attributes.
• Compare the evaluation results.
• Make a decision based on the comparison.
Yes, every step is hard and may present problems. These are not just small problems: a wrong decision may cost you your job or your bank account (lawsuit)!
Consider an Example
• You have a small "high tech" software company.
• You have 3 employees (or partners): A, B, C.
• After 2 years, you are trying to decide which one to keep and which one to let go to make the company more competitive.
• So you decide to look at 4 characteristics that you feel are important.
Note that evaluation attributes may be decision-maker dependent.
Picked 4 Attributes
• Cooperativeness (looking for a good team player)
• Quality of work (cannot afford fix costs and a bad reputation)
• Productivity (need to compete against off-shore development)
• Innovation (necessary for "world class" success)
Seems reasonable; now how would we measure these?
The "Metrics"
• Luckily, you took a course in software engineering at SPSU, knew about the importance of measurement, and kept some data on your projects and people. So the following metrics are what you decided on:
• Cooperativeness: number of team design meetings attended
• Quality: number of customer-found defects per kloc shipped
• Productivity: number of kloc shipped per $-month
• Innovation: number of patents per year
Not perfect, but usable for evaluating the attributes.
Different Attribute Metrics
• Each attribute has its own metric, which makes evaluation and comparison difficult.
• Plus, are all the attributes of equal importance?
• For simplicity, for now, assume all attributes are of equal weight.
• Convert all attributes to a single measurement scale (use 1-10 for "worst to best" with an even increment of 1), as sketched in the code after the figure below.
Single & Uniform Scale (1 -10) 10 10 * * * 8 8 * * 6 6 * * * 4 4 * 2 2 1 2 3 4 20 40 60 80 100 Quality (defects/kloc) Cooperativeness (% mtg) 10 10 * * * * * 8 8 * 6 * 6 * * 4 4 2 2 1 2 3 4 1 2 3 4 5 Productivity (kloc/$-mo) Innovation (patens/yr)
Decision Rule 1: Ranking with the 10-Point Scale
Rule 1: Rank by total score across all attributes. With this evaluation method we get the ranking A > C > B, so A gets rewarded and possibly B punished!
Note: If we put different weights on the attributes, we can also alter the decision. A sketch of this rule follows.
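A minimal sketch of Rule 1. The exact scores from the charts are not recoverable, so the 1-10 values below are hypothetical, chosen only to be consistent with the per-attribute rankings on the following slides and with the A > C > B total-score outcome.

```python
# Rule 1 sketch: rank by total score across attributes.
# Scores are hypothetical, consistent with the per-attribute rankings
# (Coop: C>B>A, Quality: B>C>A, Productivity: A>B>C, Innovation: A>C>B).
scores = {
    "A": {"coop": 4, "quality": 4, "productivity": 10, "innovation": 10},
    "B": {"coop": 8, "quality": 9, "productivity": 6,  "innovation": 3},
    "C": {"coop": 9, "quality": 7, "productivity": 5,  "innovation": 6},
}

totals = {who: sum(attrs.values()) for who, attrs in scores.items()}
ranking = sorted(totals, key=totals.get, reverse=True)
print(totals)   # {'A': 28, 'B': 26, 'C': 27}
print(ranking)  # ['A', 'C', 'B']  -> A > C > B
```

Replacing `sum(attrs.values())` with a weighted sum is the one-line change the note above alludes to: different weights, different ranking.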
Decision Rule 2: Rankings with the 10-Point Scale
Per-attribute rankings (Cooperativeness, Quality, Productivity, Innovation): C > B > A, B > C > A, A > B > C, A > C > B.
Rule 2: Drop the one with the most lowest-place rankings (punish the "most" negativity): A gets punished, and B and C are tied!
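A sketch of Rule 2 using the per-attribute rankings above (best first). Only the rankings matter for this rule, not the raw scores.

```python
# Rule 2 sketch: count last-place finishes per attribute, then drop the
# person with the most. Rankings (best first) are from the slides.
rankings = {
    "coop":         ["C", "B", "A"],
    "quality":      ["B", "C", "A"],
    "productivity": ["A", "B", "C"],
    "innovation":   ["A", "C", "B"],
}

last_places = {"A": 0, "B": 0, "C": 0}
for order in rankings.values():
    last_places[order[-1]] += 1

print(last_places)                            # {'A': 2, 'B': 1, 'C': 1}
print(max(last_places, key=last_places.get))  # 'A' -> A gets punished
```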
Decision Rule 3: Rankings with the 10-Point Scale
Per-attribute rankings (Cooperativeness, Quality, Productivity, Innovation): C > B > A, B > C > A, A > B > C, A > C > B.
Rule 3: Reward the one with the most highest-place rankings (reward the "most" positivity): A gets rewarded, and B and C are tied.
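Rule 3 is the mirror image of Rule 2: count first-place finishes instead of last-place ones. A sketch:

```python
# Rule 3 sketch: count first-place finishes per attribute and reward
# the person with the most. Rankings (best first) are from the slides.
rankings = {
    "coop":         ["C", "B", "A"],
    "quality":      ["B", "C", "A"],
    "productivity": ["A", "B", "C"],
    "innovation":   ["A", "C", "B"],
}

first_places = {"A": 0, "B": 0, "C": 0}
for order in rankings.values():
    first_places[order[0]] += 1

print(first_places)  # {'A': 2, 'B': 1, 'C': 1} -> A rewarded, B and C tied
```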
Decision Rule 4: Rankings with the 10-Point Scale
Per-attribute rankings (Cooperativeness, Quality, Productivity, Innovation): C > B > A, B > C > A, A > B > C, A > C > B.
Rule 4: Reward the one with the most pairwise-comparison wins (head-to-head wins):
A vs. B: A > B twice and B > A twice
A vs. C: A > C twice and C > A twice
B vs. C: B > C twice and C > B twice
Everyone is the same and tied!
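A sketch of Rule 4, which is a Condorcet-style pairwise vote across attributes:

```python
from itertools import combinations

# Rule 4 sketch: head-to-head (pairwise) comparison. For each pair,
# count how many attributes rank one person above the other.
# Rankings (best first) are taken from the slides.
rankings = {
    "coop":         ["C", "B", "A"],
    "quality":      ["B", "C", "A"],
    "productivity": ["A", "B", "C"],
    "innovation":   ["A", "C", "B"],
}

for x, y in combinations("ABC", 2):
    x_wins = sum(1 for order in rankings.values()
                 if order.index(x) < order.index(y))
    y_wins = len(rankings) - x_wins
    print(f"{x} vs {y}: {x_wins}-{y_wins}")
# A vs B: 2-2, A vs C: 2-2, B vs C: 2-2 -> everyone is tied
```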
One Obvious Observation!
• The data and the per-attribute rankings never changed, but by varying the decision rule one can arrive at different conclusions!
• YOU, as the technical manager, should understand this!