Comparator Selection in Observational Comparative Effectiveness Research Prepared for: Agency for Healthcare Research and Quality (AHRQ) www.ahrq.gov
Outline of Material This presentation will: • Show how to choose concurrent, active comparators from the same source population (or justify the use of no-treatment comparisons, historical comparators, or different data sources) • Discuss potential bias (and methods to minimize it) associated with comparator choice • Define time 0 for all comparator groups in describing planned analyses
Introduction • In comparative effectiveness research, the choice of comparator directly affects the clinical implication, interpretation, and validity of study results. • Treatment decisions are based on factors associated with underlying disease and its severity, general health status or frailty, quality of life, and patient preferences. • There is potential for confounding by indication or severity and selection bias associated with different comparison groups. • Internal validity relies on defining appropriate dose, intensity of treatment, and exposure window for comparator groups.
Consequences of Comparator Choice (1 of 2) • Confounding arises when a risk factor for the study outcome of interest directly or indirectly affects exposure (e.g., treatment assignment). • The magnitude of potential confounding is generally expected to be smaller when the comparator: • Has the same indication • Has similar contraindications • Shares the same treatment modality (e.g., tablet or capsule) • Conduct sensitivity analyses to quantify effects of potential unmeasured confounding.
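One widely used way to make such a sensitivity analysis concrete is the E-value of VanderWeele and Ding, which expresses how strong an unmeasured confounder would have to be, on the risk-ratio scale, to fully explain an observed association. The module does not prescribe a specific method, so the Python sketch below is only illustrative; the risk ratio of 1.8 is a hypothetical value.

```python
import math

def e_value(rr: float) -> float:
    """E-value for a risk ratio: the minimum strength of association (on the
    risk-ratio scale) that an unmeasured confounder would need with both the
    treatment and the outcome to fully explain the observed association
    (VanderWeele & Ding, 2017)."""
    if rr < 1:                      # for protective effects, invert first
        rr = 1 / rr
    return rr + math.sqrt(rr * (rr - 1))

# Hypothetical example: an observed risk ratio of 1.8 for drug A vs. drug B
# would need an unmeasured confounder associated with both treatment and
# outcome by a risk ratio of about 3.0 to be fully explained away.
print(round(e_value(1.8), 2))
```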
Consequences of Comparator Choice (2 of 2) • Exposure misclassification: • Arises when exposure measurement differs between the exposure and comparator groups • Is often more complex in comparative effectiveness research, since each group represents an active treatment (nonuse of the exposure treatment does not imply use of the comparator treatment) • Can differ in each group, especially if different treatment modalities are used • Assess misclassification separately for the exposure and comparison groups
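When the degree of misclassification plausibly differs between groups (for example, pharmacy claims for the drug versus procedure codes for the comparator), a simple quantitative bias analysis can show how much an assumed sensitivity and specificity of exposure ascertainment would shift the estimate. The counts and accuracy values in this sketch are hypothetical, not figures from the module.

```python
def true_exposed(observed_exposed: float, total: float,
                 se: float, sp: float) -> float:
    """Back-calculate the true number of exposed subjects in one outcome
    stratum from the observed (misclassified) count, under assumed
    sensitivity (se) and specificity (sp) of exposure ascertainment."""
    return (observed_exposed - total * (1 - sp)) / (se + sp - 1)

# Hypothetical 2x2 table: 150 of 1,000 cases and 1,000 of 10,000 non-cases
# are classified as exposed; ascertainment is assumed to be worse among cases.
a_cases    = true_exposed(150,   1_000,  se=0.80, sp=0.97)
a_noncases = true_exposed(1_000, 10_000, se=0.98, sp=0.99)

or_observed  = (150 / (1_000 - 150)) / (1_000 / (10_000 - 1_000))
or_corrected = (a_cases / (1_000 - a_cases)) / (a_noncases / (10_000 - a_noncases))
print(f"observed OR {or_observed:.2f}, bias-corrected OR {or_corrected:.2f}")
```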
Spectrum of Possible Comparisons (1 of 3) • Alternative treatments • Most common scenario and typically least biased • More clinically meaningful and methodologically valid • Could still result in confounding by severity if not adequately controlled through design/analysis • No treatment/testing • Defined as absence of the exposure, or absence of the exposure combined with use of an unrelated treatment (an active comparator) • Choice of time 0 must be clinically appropriate in order to reduce bias
Spectrum of Possible Comparisons (2 of 3) • Usual or standard care • Develop a valid operational definition for care and for time at initiation (none, single, or a set of treatment/testing modalities) • Real-world use must be understood for proper definition • Can vary across geographic regions/treatment settings or change over time; avoid a “wastebasket definition” • Historical comparison • Useful when there has been a dramatic shift from one treatment to another • May be the only choice when selection into a new treatment is strong and uncontrollable and randomization is unethical or not feasible • Vulnerable to confounding by indication/severity when this information is unmeasured (may be mitigated by instrumental variable analysis using calendar time as the instrument)
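To illustrate the calendar-time instrument mentioned above, the simulated sketch below compares a naive regression with a simple two-stage least-squares estimate that uses the period before/after a practice change as the instrument. All variable names and effect sizes are invented for illustration; a real analysis would also need corrected standard errors and checks of the instrumental-variable assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Simulated scenario: a new treatment largely replaces the old one after a
# practice change; 'period' (0 = before, 1 = after) is the candidate instrument.
period   = rng.integers(0, 2, n)                        # calendar-time instrument
severity = rng.normal(size=n)                           # unmeasured confounder
treat    = (0.15 + 0.60 * period + 0.30 * severity
            > rng.uniform(size=n)).astype(float)        # treatment choice
outcome  = 1.0 * treat + 1.5 * severity + rng.normal(size=n)   # true effect = 1.0

# Naive regression of outcome on treatment is confounded by severity.
naive = np.linalg.lstsq(np.column_stack([np.ones(n), treat]), outcome, rcond=None)[0][1]

# Two-stage least squares: predict treatment from the instrument, then
# regress the outcome on the predicted treatment.
X1 = np.column_stack([np.ones(n), period])
treat_hat = X1 @ np.linalg.lstsq(X1, treat, rcond=None)[0]
iv = np.linalg.lstsq(np.column_stack([np.ones(n), treat_hat]), outcome, rcond=None)[0][1]

print(f"naive estimate {naive:.2f}, calendar-time IV estimate {iv:.2f}")
```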
Spectrum of Possible Comparisons (3 of 3) • Comparison groups from different data sources • Multiple data sources can be linked to enhance the validity of observational comparative effectiveness studies • Residual confounding might occur due to: • Incomparability of information in exposure and comparison groups • Differences in observed and unobserved characteristics when groups are sampled differently or drawn from different source populations • Issues with generalizability when exposure and comparison groups come from different databases
Operationalizing the Comparison Group in Comparative Effectiveness Research (1 of 2) • Indication • A treatment used for the same indication as the exposure treatment is typically chosen as the comparison group • For treatments approved for multiple indications, the appropriate indication must be ensured by explicitly defining the indication and restricting the study population • Initiation • New-user design prevents underascertainment of early events and avoids selection bias arising from prevalent users • Inclusion of prevalent users may be justified when outcomes are rare or occur only after long periods of use
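As a concrete illustration of the new-user (incident-user) design, the pandas sketch below keeps only patients whose first observed fill of the study drug is preceded by a full washout period of continuous enrollment. The table names, column names, and 180-day washout are assumptions for illustration only.

```python
import pandas as pd

# Hypothetical input tables (names and columns are assumptions):
# claims - one row per dispensing of the study drug
# enroll - one row per patient with the start of continuous coverage
claims = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "drug":       ["A", "A", "A"],
    "fill_date":  pd.to_datetime(["2019-06-01", "2019-02-01", "2019-05-10"]),
})
enroll = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "enroll_start": pd.to_datetime(["2018-01-01", "2019-01-15", "2018-11-01"]),
})
WASHOUT = pd.Timedelta(days=180)   # assumed washout defining "new use"

# Index date = first observed fill of the study drug for each patient.
index = (claims[claims["drug"] == "A"]
         .sort_values("fill_date")
         .groupby("patient_id", as_index=False)
         .first()
         .rename(columns={"fill_date": "index_date"}))

# Require enrollment for the full washout before the index date so that
# "first observed fill" plausibly means true new use (incident exposure).
cohort = index.merge(enroll, on="patient_id")
new_users = cohort[cohort["index_date"] - cohort["enroll_start"] >= WASHOUT]
print(new_users[["patient_id", "index_date"]])
```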
Operationalizing the Comparison Group in Comparative Effectiveness Research (2 of 2) • Exposure time window • Period during which therapeutic benefit and/or risk would plausibly occur • Conduct sensitivity analyses to assess whether results are sensitive to different specifications of the exposure window(s) • Nonadherence • May differ between the treatment and comparator groups • Treatment effects should be compared at the adherence levels observed in clinical practice rather than adjusted for differences in adherence • Dose/intensity of drug comparison • Assess and report dose in each group • Make comparisons at clinically equivalent dose levels
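One simple way to operationalize the suggested exposure-window sensitivity analysis is to re-run the exposure classification under several risk-window definitions, for example by varying the grace period added to the dispensed days' supply. The data frame and window lengths below are hypothetical.

```python
import pandas as pd

# Hypothetical analytic file: last fill date, dispensed days' supply, and
# outcome date (NaT if no event occurred during follow-up).
df = pd.DataFrame({
    "last_fill":    pd.to_datetime(["2019-01-01", "2019-02-01", "2019-03-01"]),
    "days_supply":  [30, 30, 90],
    "outcome_date": pd.to_datetime(["2019-02-10", pd.NaT, "2019-07-01"]),
})

# Vary the grace period appended to the days' supply and count how many events
# would be attributed to current exposure under each window definition.
for grace in (0, 15, 30, 60):
    end_of_exposure = df["last_fill"] + pd.to_timedelta(df["days_supply"] + grace, unit="D")
    exposed_events = (df["outcome_date"].notna()
                      & (df["outcome_date"] <= end_of_exposure)).sum()
    print(f"grace period {grace:>2} days: {exposed_events} event(s) during exposure")
```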
Considerations for Comparisons Across Different Treatment Modalities (1 of 3) • Confounding by indication or severity: • Medications may be used for patients with milder disease, while surgery might be reserved for those with more severe disease. • Selection of healthier patients to receive more invasive treatments: • Sicker patients are less likely to be considered for invasive procedures. • Selection becomes more problematic in comparisons across different treatment modalities.
Considerations for Comparisons Across Different Treatment Modalities (2 of 3) • Time from disease onset to treatment: • Pay careful attention to the time from initial diagnosis and to the typical sequence of treatment modalities in order to prevent immortal person-time bias (see the person-time sketch below). • Different magnitude of misclassification in drug versus procedure comparisons: • Misclassification of exposure might be greater with drugs than with devices/procedures. • Pharmacy records do not provide information on actual intake.
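To make the immortal-time point concrete, the sketch below splits each patient's follow-up at the treatment date so that waiting time between diagnosis and treatment is counted as unexposed person-time rather than being credited to the treated group. The dates and column names are hypothetical.

```python
import pandas as pd

# Hypothetical cohort: diagnosis date, treatment date (NaT if never treated),
# and end of follow-up (event or censoring).
df = pd.DataFrame({
    "patient_id": [1, 2],
    "dx_date":  pd.to_datetime(["2019-01-01", "2019-01-01"]),
    "tx_date":  pd.to_datetime(["2019-04-01", pd.NaT]),
    "end_date": pd.to_datetime(["2020-01-01", "2019-09-01"]),
})

rows = []
for r in df.itertuples(index=False):
    if pd.notna(r.tx_date):
        # Waiting time from diagnosis to treatment is unexposed person-time;
        # counting it as exposed would create immortal person-time bias.
        rows.append({"patient_id": r.patient_id, "exposed": 0,
                     "days": (r.tx_date - r.dx_date).days})
        rows.append({"patient_id": r.patient_id, "exposed": 1,
                     "days": (r.end_date - r.tx_date).days})
    else:
        rows.append({"patient_id": r.patient_id, "exposed": 0,
                     "days": (r.end_date - r.dx_date).days})

person_time = pd.DataFrame(rows)
print(person_time.groupby("exposed")["days"].sum())
```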
Considerations for Comparisons Across Different Treatment Modalities (3 of 3) • Provider effects in using devices or surgeries: • Consider the characteristics of the operating physician and the institution where the device implantation or surgery was carried out • Be aware of the documented relationship between greater physician experience and better patient outcomes for complex procedures • Adherence to drugs and device failure or removal: • Requires assumptions in most data sources • May be appropriate to compare without adjustment, as this reflects real-world use
Conclusions • Understanding the impact of comparator choice on study design is important. • Selection of the comparator group should be driven primarily by a comparative effectiveness question prioritized by the stakeholder community. • An overriding consideration is the generation of evidence that can directly inform decisions on treatments, testing, or health care–delivery systems. • Some study questions may not be answerable validly because of intractable bias in observational comparative effectiveness research.