
Michael A. Kohn, MD, MPP 10/29/2009




Presentation Transcript


  1. Chapter 7 – Prognostic Tests Chapter 8 – Combining Tests and Multivariable Decision Rules Michael A. Kohn, MD, MPP 10/29/2009

  2. Outline of Topics • Prognostic Tests • Differences from diagnostic tests • Quantifying prediction: calibration and discrimination • Value of prognostic information • Comparing predictions • Combining Tests/Diagnostic Models • Importance of test non-independence • Recursive Partitioning • Logistic Regression • Variable (Test) Selection • Importance of validation separate from derivation

  3. Prognostic Tests (Ch 7)* • Differences from diagnostic tests • Validation/quantifying accuracy (calibration and discrimination) • Assessing the value of prognostic information • Comparing predictions by different people or different models *Will not discuss time-to-event analysis or predicting continuous outcomes. (Covered in Chapter 7.)

  4. Chance determines whether you get the disease Spin the needle

  5. Diagnostic Test • Spin needle to see if you develop disease. • Perform test for disease. • Gold standard determines true disease state. (Can calculate sensitivity, specificity, LRs.)

  6. Prognostic Test • Perform test to predict the risk of disease. • Spin needle to see if you develop disease. • How do you assess the validity of the predictions?

  7. Example: Mastate Cancer • Once developed, always fatal. • Can be prevented by mastatectomy. • Two oncologists separately assign each of N individuals a risk of developing mastate cancer in the next 5 years.

  8. How do you assess the validity of the predictions?

  9. How many like this? Oncologist 1 assigns risk of 50% Spin the needles. How many get mastate cancer?

  10. How many like this? Oncologist 1 assigns risk of 35% Spin the needles. How many get mastate cancer?

  11. How many like this? Oncologist 1 assigns risk of 20% Spin the needles. How many get mastate cancer?

  12. Calibration:* How accurate are the predicted probabilities? • Break the population into groups. • Compare actual and predicted probabilities for each group. *Related to Goodness-of-Fit and diagnostic model validation, which will be discussed shortly.
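The grouping-and-comparison procedure on this slide can be sketched in a few lines of Python. The data are illustrative, not from the chapter: eight hypothetical subjects, each with an assigned risk and an observed outcome.

```python
# A minimal calibration check: bin subjects by predicted risk, then compare
# the predicted probability with the observed outcome frequency in each bin.
from collections import defaultdict

# (predicted_risk, outcome) pairs for eight hypothetical subjects
subjects = [(0.20, 0), (0.20, 1), (0.20, 0),
            (0.35, 1), (0.35, 0),
            (0.50, 1), (0.50, 1), (0.50, 0)]

groups = defaultdict(list)
for predicted, outcome in subjects:
    groups[predicted].append(outcome)

for predicted in sorted(groups):
    outcomes = groups[predicted]
    observed = sum(outcomes) / len(outcomes)
    print(f"predicted {predicted:.2f}  observed {observed:.2f}  n={len(outcomes)}")
```

A well-calibrated predictor shows observed frequencies close to the predicted probabilities in every bin; large-sample versions of this comparison underlie goodness-of-fit tests such as Hosmer-Lemeshow.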

  13. Calibration

  14. Calibration

  15. Discrimination: How well can the test separate subjects in the population from the mean probability to values closer to zero or 1? • May be more generalizable • Often measured with the C-statistic (AUROC)
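As a sketch of what the C-statistic measures: it is the probability that a randomly chosen subject who develops the outcome was assigned a higher predicted risk than a randomly chosen subject who does not, with ties counting one half. The predicted risks below are illustrative, not from the chapter.

```python
# C-statistic (AUROC) computed as a rank statistic over case-control pairs.
def c_statistic(case_preds, control_preds):
    """Fraction of case-control pairs where the case ranks higher (ties = 1/2)."""
    score = sum(1.0 if c > d else 0.5 if c == d else 0.0
                for c in case_preds for d in control_preds)
    return score / (len(case_preds) * len(control_preds))

cases = [0.50, 0.35, 0.50, 0.20]     # predicted risks among subjects with the outcome
controls = [0.20, 0.20, 0.35, 0.20]  # predicted risks among subjects without it
print(c_statistic(cases, controls))  # 0.8125
```

A perfect ranking gives 1.0 and a coin flip averages 0.5. Note that the statistic depends only on rankings, never on the absolute probabilities, which is why a later slide can say the ROC curve ignores calibration.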

  16. Discrimination

  17. Discrimination

  18. Discrimination AUROC = 0.63

  19. True Risk • Oncologist 1: 20%, Oncologist 2: 20% → True Risk: 11.1% • Oncologist 1: 35%, Oncologist 2: 20% → True Risk: 16.7% • Oncologist 1: 50%, Oncologist 2: 20% → True Risk: 33.3%

  20. True Risk -- Calibration

  21. True Risk -- Calibration

  22. True Risk -- Discrimination

  23. True Risk -- Discrimination

  24. True Risk -- Discrimination AUROC = 0.63

  25. ROC curve depends only on rankings, not calibration

  26. Random event occurs AFTER prognostic test. 1) Perform test to predict the risk of disease. 2) Spin needle to see if you develop disease. Only crystal ball allows perfect prediction.

  27. Maximum AUROC True Risk: 11.1% True Risk: 16.7% True Risk: 33.3% Maximum AUROC = 0.65
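The slide's figure can be reproduced directly. A sketch assuming three equal groups of 100 whose predicted risks equal the true risks of 1/9, 1/6, and 1/3 (the 11.1%, 16.7%, and 33.3% above): because the outcome is random within each group, even the true-risk "predictor" cannot exceed this AUROC.

```python
# Maximum achievable AUROC when the predictor IS the true risk.
# Three groups of 100 with true risks 1/9, 1/6, 1/3 (11.1%, 16.7%, 33.3%).
risks = [1/9, 1/6, 1/3]
n = 100
cases = [r * n for r in risks]           # expected cases per group
controls = [(1 - r) * n for r in risks]  # expected controls per group

concordant = 0.0  # case drawn from a higher-risk group than the control
ties = 0.0        # case and control drawn from the same risk group
for i, n_case in enumerate(cases):
    for j, n_ctrl in enumerate(controls):
        if risks[i] > risks[j]:
            concordant += n_case * n_ctrl
        elif risks[i] == risks[j]:
            ties += n_case * n_ctrl

auroc = (concordant + 0.5 * ties) / (sum(cases) * sum(controls))
print(round(auroc, 2))  # 0.65
```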

  28. Diagnostic versus Prognostic Tests
                   Diagnostic                  Prognostic
  Purpose          Identify prevalent disease  Predict incident disease/outcome
  Outcome occurs   Prior to test               After test
  Study design     Cross-sectional             Cohort
  Test result      +/-, ordinal, continuous    Risk (probability)
  Maximum AUROC    1                           <1 (not clairvoyant)

  29. Value of Prognostic Information Why do you want to know risk of mastate cancer? To decide whether to do a mastatectomy.

  30. Value of Prognostic Information • It is 4 times worse to die of mastate gland cancer than to have a mastatectomy. • B + C = 4C • Ptt = C/(B+C) = C/4C = 0.25 = 25%
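The slide's threshold arithmetic, worked as code. This sketch uses the decision-analytic notation assumed here: B is the net benefit of mastatectomy in a patient who would develop mastate cancer, C the net cost of an unnecessary mastatectomy; "4 times worse" means B + C = 4C, i.e. B = 3C.

```python
# Treatment threshold Ptt = C / (B + C).
# Dying of mastate cancer is 4x as bad as a mastatectomy, so B + C = 4C.
C = 1.0          # cost of treating a patient who would never get the disease
B = 3.0 * C      # benefit of treating a patient who would have died
p_tt = C / (B + C)
print(p_tt)  # 0.25
```

Above this 25% risk, mastatectomy minimizes expected harm; below it, watchful waiting does.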

  31. Value of Prognostic Information – 300 patients (100 per risk group) • Oncologist 1: 31% > 25% → Mastatectomy: 89 out of 100 unnecessary; no mastate cancer deaths • Oncologist 1: 37% > 25% → Mastatectomy: 83 out of 100 unnecessary; no mastate cancer deaths • Oncologist 1: 53% > 25% → Mastatectomy: 67 out of 100 unnecessary; no mastate cancer deaths

  32. Value of Prognostic Information – 300 patients (100 per risk group) • Oncologist 2: 20% < 25% → No Mastatectomy: 11 out of 100 die of mastate cancer; no mastatectomies • Oncologist 2: 20% < 25% → No Mastatectomy: 17 out of 100 die; no mastatectomies • Oncologist 2: 20% < 25% → No Mastatectomy: 33 out of 100 die; no mastatectomies

  33. Value of Prognostic Information – 300 patients (100 per risk group) • True Risk: 11% < 25% → No Mastatectomy: 11 out of 100 die of mastate cancer; no mastatectomies • True Risk: 17% < 25% → No Mastatectomy: 17 out of 100 die; no mastatectomies • True Risk: 33% > 25% → Mastatectomy: 67 out of 100 unnecessary; no mastate cancer deaths

  34. Value of Prognostic Information – 300 patients (100 per risk group)

  35. Value of Prognostic Information • Doctors and patients like prognostic information, but its value is hard to assess. • The most objective approach is decision-analytic. Consider: What decision is to be made? What are the costs of errors? What is the cost of the test?

  36. Comparing Predictions • Compare ROC Curves and AUROCs • Reclassification Tables*, Net Reclassification Improvement (NRI), Integrated Discrimination Improvement (IDI) • See Jan. 30, 2008 issue of Statistics in Medicine* (and EBD Edition 2?) *Pencina et al. Stat Med. 2008 Jan 30;27(2):157-72.
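A hedged sketch of one of these measures, the NRI of Pencina et al.: when a new model reassigns subjects among risk categories, upward moves are credited among subjects who had events and debited among subjects who did not (and vice versa for downward moves). The category moves below are invented for illustration.

```python
# Net Reclassification Improvement (NRI):
#   NRI = [P(up|event) - P(down|event)] + [P(down|nonevent) - P(up|nonevent)]
def nri(moves_events, moves_nonevents):
    """moves_*: +1 (risk category up under new model), -1 (down), 0 (unchanged)."""
    up_e = moves_events.count(1) / len(moves_events)
    down_e = moves_events.count(-1) / len(moves_events)
    up_n = moves_nonevents.count(1) / len(moves_nonevents)
    down_n = moves_nonevents.count(-1) / len(moves_nonevents)
    return (up_e - down_e) + (down_n - up_n)

events = [1, 1, 0, -1, 1]      # among events: 3 up, 1 down, 1 unchanged
nonevents = [-1, 0, 0, 1, -1]  # among non-events: 2 down, 1 up, 2 unchanged
print(round(nri(events, nonevents), 2))  # 0.6
```

Positive values favor the new model; unlike the AUROC comparison, NRI depends on the chosen risk categories.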

  37. Common Problems with Studies of Prognostic Tests See Chapter 7

  38. Combining Tests/Diagnostic Models • Importance of test non-independence • Recursive Partitioning • Logistic Regression • Variable (Test) Selection • Importance of validation separate from derivation (calibration and discrimination revisited)

  39. Combining Tests – Example • Prenatal sonographic Nuchal Translucency (NT) and Nasal Bone Exam as dichotomous tests for Trisomy 21* *Cicero, S., G. Rembouskos, et al. (2004). "Likelihood ratio for trisomy 21 in fetuses with absent nasal bone at the 11-14-week scan." Ultrasound Obstet Gynecol 23(3): 218-23.

  40. If NT ≥ 3.5 mm, Positive for Trisomy 21* *What’s wrong with this definition?

  41. In general, don’t make multi-level tests like NT into dichotomous tests by choosing a fixed cutoff • I did it here to make the discussion of multiple tests easier • I arbitrarily chose to call ≥ 3.5 mm positive

  42. One Dichotomous Test
  Nuchal Translucency   Trisomy 21 D+   D-     LR
  ≥ 3.5 mm              212             478    7.0
  < 3.5 mm              121             4745   0.4
  Total                 333             5223
  Do you see that the LR of 7.0 is (212/333)/(478/5223)? Review of Chapter 3: What are the sensitivity, specificity, PPV, and NPV of this test? (Be careful.)

  43. Nuchal Translucency • Sensitivity = 212/333 = 64% • Specificity = 4745/5223 = 91% • Prevalence = 333/(333+5223) = 6% (Study population: pregnant women about to undergo CVS, so high prevalence of Trisomy 21) PPV = 212/(212 + 478) = 31% NPV = 4745/(121 + 4745) = 97.5%* * Not that great; prior to test P(D-) = 94%
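The slide's figures can all be reproduced from the four counts in the 2x2 table:

```python
# Recomputing slide 43's numbers from the NT 2x2 counts.
a, b = 212, 478    # NT >= 3.5 mm: Trisomy 21 present, absent
c, d = 121, 4745   # NT <  3.5 mm: Trisomy 21 present, absent

sens = a / (a + c)           # 212/333
spec = d / (b + d)           # 4745/5223
lr_pos = sens / (1 - spec)   # = (212/333)/(478/5223)
lr_neg = (1 - sens) / spec   # = (121/333)/(4745/5223)
ppv = a / (a + b)            # valid here only because the whole cohort was sampled
npv = d / (c + d)
print(f"sens={sens:.0%} spec={spec:.0%} LR+={lr_pos:.1f} "
      f"LR-={lr_neg:.1f} PPV={ppv:.0%} NPV={npv:.1%}")
```

The "(Be careful.)" on the previous slide is the point of the PPV/NPV comment above: predictive values carry over only because the 6% prevalence comes from sampling the whole CVS cohort, not from sampling cases and controls separately.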

  44. Clinical Scenario – One Test • Pre-Test Probability of Down’s = 6% • NT Positive • Pre-test prob: 0.06 • Pre-test odds: 0.06/0.94 = 0.064 • LR(+) = 7.0 • Post-Test Odds = Pre-Test Odds x LR(+) = 0.064 x 7.0 = 0.44 • Post-Test prob = 0.44/(0.44 + 1) = 0.31

  45. NT Positive • Pre-test Prob = 0.06 • P(Result|Trisomy 21) = 0.64 • P(Result|No Trisomy 21) = 0.09 • Post-Test Prob = ? http://www.quesgen.com/Calculators/PostProdOfDisease/PostProdOfDisease.html Slide Rule

  46. Nasal Bone Seen (NBA = “No”) → Negative for Trisomy 21 • Nasal Bone Absent (NBA = “Yes”) → Positive for Trisomy 21

  47. Second Dichotomous Test
  Nasal Bone Absent   Tri21+   Tri21-   LR
  Yes                 229      129      27.8
  No                  104      5094     0.32
  Total               333      5223
  Do you see that the LR of 27.8 is (229/333)/(129/5223)?

  48. Clinical Scenario – Two Tests, Using Probabilities • Pre-Test Probability of Trisomy 21 = 6% • NT Positive for Trisomy 21 (≥ 3.5 mm) • Post-NT Probability of Trisomy 21 = 31% • NBA Positive (no bone seen) • Post-NBA Probability of Trisomy 21 = ?
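The arithmetic in this scenario can be sketched as repeated odds-LR updates. One caveat, stated up front: chaining the second LR this way assumes NT and the nasal bone exam are conditionally independent given Trisomy 21 status, which is exactly the non-independence issue Chapter 8 takes up.

```python
# Sequential Bayesian updating with likelihood ratios.
# WARNING: applying the second LR assumes the two tests are conditionally
# independent given disease status -- the assumption Chapter 8 questions.
def update(prob, lr):
    """Probability -> odds, multiply by LR, odds -> probability."""
    odds = prob / (1 - prob) * lr
    return odds / (1 + odds)

p = 0.06                               # pre-test probability of Trisomy 21
p_after_nt = update(p, 7.0)            # NT >= 3.5 mm, LR(+) = 7.0
p_after_nba = update(p_after_nt, 27.8) # nasal bone absent, LR(+) = 27.8
print(round(p_after_nt, 2), round(p_after_nba, 2))
```

The first update reproduces the 31% on the slide; because correlated tests share information, the naive second update should be read as an upper bound on the combined evidence.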
