6F5Z1003 Research Design and Analysis (Ed)
Lecture 04: Sensitivity, Predictive Value and Repeatability
Today
Sensitivity and Specificity
Predictive values
Measuring observer reliability
What are they? Why do we need them? How do we do them?
Sensitivity and Specificity
Sensitivity and specificity are used primarily in medical diagnosis (though there are other applications)
A typical use is to compare how well diagnostic techniques detect a disease, e.g., comparing a new diagnostic tool against the established Gold Standard technique
Sensitivity and Specificity
Data summary:

                          Condition present   Condition absent
Test result positive              a                   b
Test result negative              c                   d
Sensitivity and Specificity
From the previous data one can calculate:
Sensitivity = a / (a + c), the proportion of positive diagnoses when the condition is present
Specificity = d / (b + d), the proportion of negative diagnoses when the condition is absent
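As a minimal sketch, both measures can be computed directly from the four cell counts; the counts below are hypothetical, not taken from the lecture:

```python
def sensitivity(a, c):
    """Proportion of positive diagnoses when the condition is present."""
    return a / (a + c)

def specificity(b, d):
    """Proportion of negative diagnoses when the condition is absent."""
    return d / (b + d)

# Hypothetical counts: a = true positives, b = false positives,
# c = false negatives, d = true negatives
a, b, c, d = 90, 10, 5, 95
print(sensitivity(a, c))   # 90 / 95  = 0.947...
print(specificity(b, d))   # 95 / 105 = 0.904...
```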
Predictive Values
Predictive values are used as a measure of how accurate diagnostic tests are
Positive predictive value (PPV): the proportion of those with a POSITIVE test result who DO have the condition
Negative predictive value (NPV): the proportion of those with a negative test result who DO NOT have the condition
Predictive Values
Data summary (the same 2x2 layout as before):

                          Condition present   Condition absent
Test result positive              a                   b
Test result negative              c                   d
Predictive Values
From the previous data one can calculate:
Positive predictive value (PPV) = a / (a + b), the proportion of positive diagnoses who actually have the condition
Negative predictive value (NPV) = d / (c + d), the proportion of negative diagnoses who actually do not have the condition
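The predictive values come from the same table, read row-wise rather than column-wise; a sketch reusing the hypothetical counts from the previous example:

```python
def ppv(a, b):
    """Proportion of positive diagnoses who actually have the condition."""
    return a / (a + b)

def npv(c, d):
    """Proportion of negative diagnoses who actually do not have the condition."""
    return d / (c + d)

a, b, c, d = 90, 10, 5, 95   # same hypothetical 2x2 counts as before
print(ppv(a, b))             # 90 / 100 = 0.90
print(npv(c, d))             # 95 / 100 = 0.95
```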
Measuring observer reliability
Observer reliability is generally used to control error, e.g., when more than one person is collecting data and it is important that they measure the same thing!
There are a few ways to do this; one is called Cohen's Kappa
It is sometimes called "repeatability", but there are several different ways to calculate it
Measuring observer reliability
Data summary (NB the raw data are categorical, i.e., count data):

                    Reader B: Yes   Reader B: No   Total
Reader A: Yes            20              5           25
Reader A: No             10             15           25
Total                    30             20           50
Measuring observer reliability
Note that there were 20 proposals that were granted by both reader A and reader B, and 15 proposals that were rejected by both readers. Thus the observed proportion of agreement is Pr(a) = (20 + 15) / 50 = 0.70
Measuring observer reliability Reader A said "Yes" to 25 applicants and "No" to 25 applicants. Thus reader A said "Yes" 50% of the time. Reader B said "Yes" to 30 applicants and "No" to 20 applicants. Thus reader B said "Yes" 60% of the time. Therefore the probability that both of them would say "Yes" randomly is 0.50 · 0.60 = 0.30 and the probability that both of them would say "No" is 0.50 · 0.40 = 0.20. Thus the overall probability of random agreement is Pr(e) = 0.3 + 0.2 = 0.5.
Measuring observer reliability
Pr(a) = (20 + 15) / 50 = 0.70
Pr(e) = 0.3 + 0.2 = 0.5
Cohen's Kappa = (Pr(a) - Pr(e)) / (1 - Pr(e)) = (0.70 - 0.50) / (1 - 0.50) = 0.40
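A short Python sketch that reproduces this worked example; the two off-diagonal cell counts (5 and 10) are not quoted on the slides but follow arithmetically from the totals above:

```python
# Cohen's Kappa for the reader A / reader B example above.
yes_yes, yes_no = 20, 5      # reader A "Yes": agrees / disagrees with B
no_yes, no_no = 10, 15       # reader A "No":  disagrees / agrees with B
n = yes_yes + yes_no + no_yes + no_no            # 50 proposals in total

pr_a = (yes_yes + no_no) / n                     # observed agreement: 0.70
p_a_yes = (yes_yes + yes_no) / n                 # reader A says "Yes": 0.50
p_b_yes = (yes_yes + no_yes) / n                 # reader B says "Yes": 0.60
pr_e = p_a_yes * p_b_yes + (1 - p_a_yes) * (1 - p_b_yes)  # chance agreement: 0.50

kappa = (pr_a - pr_e) / (1 - pr_e)
print(kappa)                                     # 0.40
```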