
Correcting for Common Causes of Nonresponse and Measurement Error

This workshop discusses the common causes and correlates of nonresponse and measurement error in surveys, and explores efficient ways to address both errors. Results from a national face-to-face survey are presented.


Presentation Transcript


  1. Correcting for Common Causes of Nonresponse and Measurement Error Andy Peytchev International Total Survey Error Workshop Stowe, June 15, 2010

  2. Correcting for Common Causes of Nonresponse and Measurement Error Andy Peytchev International Total Survey Error Workshop Stowe, June 15, 2010

  3. Outline • Background • Three objectives • Common causes and correlates of nonresponse and measurement error • Magnitude of each source for a sensitive behavior • Addressing both errors efficiently • Results from a national face-to-face survey • Summary and discussion

  4. 1. Relationship between Survey Errors • Topic can affect unit nonresponse • Topic interest or involvement (e.g., charity donations) • Topic sensitivity (e.g., sexual behaviors) • Topic can also affect measurement error • Socially desirable behaviors (e.g., voting) • Sensitive behaviors (e.g., abortion experiences) • Possible common causes and correlates

  5. Model: [path diagram] topic threat influences both the response propensity (P) and the measurement error (ε) that separates the reported value Y from the true value Y* (Peytchev, Peytcheva, and Groves, 2008)

  6. Model: [path diagram repeated] topic threat as a common cause of response propensity (P) and measurement error (ε)
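A minimal formal sketch of this common-cause structure, using the labels in the diagram; the functional forms are an assumption for illustration, not taken from the slides:

```latex
\begin{align*}
Y &= Y^{*} + \varepsilon
  && \text{reported value = true value + measurement error} \\
P &= f(T,\,\dots), \quad \varepsilon = g(T,\,\dots)
  && \text{topic threat } T \text{ affects both error sources}
\end{align*}
```

Under this structure, the same topic threat that suppresses response propensity can also drive underreporting, which is what motivates addressing the two error sources jointly.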

  7. 2. Magnitude of Survey Errors • Relative magnitude of each source of survey error is important in reducing total survey error • Optimization of survey design to address the dominant source of error in key estimates • Disproportionate allocation of resources to address that source
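One hedged way to frame the comparison is the generic mean-squared-error decomposition, assuming the nonresponse and measurement biases are approximately additive (an illustration, not a formula from the slides):

```latex
\[
\operatorname{MSE}(\hat{\theta}) \;\approx\;
  \bigl(B_{\mathrm{NR}} + B_{\mathrm{ME}}\bigr)^{2} + \operatorname{Var}(\hat{\theta})
\]
```

Here $B_{\mathrm{NR}}$ is the bias from unit nonresponse and $B_{\mathrm{ME}}$ the bias from measurement error; the dominant source for a key estimate is the component with the larger absolute bias, and that is where disproportionate resources would go.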

  8. 3. Correcting for Nonresponse and Measurement Error • Corrections for measurement error typically not done • Unit nonresponse commonly addressed through weighting • Imputation offers advantages, including: • Use of incomplete covariate data (not evaluated here) • More efficient estimates that do not unduly increase variances, and can even decrease them

  9. Data • 1990 National Election Studies (not presented) • 1995 National Survey of Family Growth, Cycle 5 • Rich sampling frame to produce estimates of the likelihood of nonresponse and of nonresponse bias • Sample member demographic characteristics (NHIS) • Health-related substantive variables (NHIS) • Paradata (NHIS) • Interviewer characteristics (NSFG) • Paradata (NSFG) • Replicate measures with less measurement error from topic sensitivity to produce estimates of measurement error • Abortion reports in CAPI and ACASI • Self-reported comfort with providing truthful responses to the interviewer

  10. Approach – Common Correlates • Logistic regression models using frame information to predict: • Noninterview (unit nonresponse) • Lifetime abortion reports in CAPI (nonresponse bias) • Underreporting in CAPI identified using ACASI (measurement error) • Logistic regression of underreporting on frame variables and self-reported comfort with the interviewer
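A hedged sketch of what these three logistic regressions might look like; the file name, covariate list, and column names are hypothetical stand-ins for the frame variables and NSFG Cycle 5 outcomes described above, not the study's actual code:

```python
# Illustrative only: column names and the analysis file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nsfg_cycle5_analysis_file.csv")  # hypothetical file
frame_vars = "age + educ + C(race_eth) + C(phone_listed) + C(proxy_report)"

# (a) Unit nonresponse: noninterview modeled on the full eligible sample
m_nr = smf.logit(f"noninterview ~ {frame_vars}", data=df).fit()

# (b) Nonresponse-bias proxy: lifetime abortion reported in CAPI (respondents only)
resp = df[df["noninterview"] == 0].copy()
m_capi = smf.logit(f"abortion_capi ~ {frame_vars}", data=resp).fit()

# (c) Measurement error: underreporting in CAPI, flagged when the ACASI report
#     indicates an abortion that the CAPI interview did not capture
resp["underreport"] = ((resp["abortion_acasi"] == 1) &
                       (resp["abortion_capi"] == 0)).astype(int)
m_me = smf.logit(f"underreport ~ {frame_vars} + C(comfort_with_interviewer)",
                 data=resp).fit()

for m in (m_nr, m_capi, m_me):
    print(m.summary())
```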

  11. Approach – Relative Magnitude • Imputation for unit nonresponse and measurement error: multiply impute for the entire eligible sample: • Correlates (mostly item nonresponse) • Abortion reports in CAPI (unit nonresponse) • Abortion reports in ACASI (unit nonresponse and measurement error) • Repeat using imputation only for item nonresponse and weighting for both unit nonresponse and measurement error • Evaluate relative change in proportions • Evaluate relative change in variance estimates
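A minimal sketch of the multiple-imputation step and of combining the completed-data estimates with Rubin's rules. The imputation model (a default `IterativeImputer`), the file, and the column names are assumptions for illustration and are cruder than what the study would have used (no survey weights or design-based variances):

```python
# Illustrative only: column names, file name, and the imputation model are
# hypothetical; a real analysis would use design-appropriate variance estimates.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

df = pd.read_csv("nsfg_cycle5_analysis_file.csv")  # hypothetical file
cols = ["age", "educ", "abortion_capi", "abortion_acasi"]  # numeric-coded

m = 20                      # number of multiply-imputed data sets
props, within_vars = [], []
for seed in range(m):
    imputer = IterativeImputer(sample_posterior=True, random_state=seed)
    completed = pd.DataFrame(imputer.fit_transform(df[cols]), columns=cols)
    y = (completed["abortion_acasi"] > 0.5).astype(int)  # crude rounding to 0/1
    p = y.mean()
    props.append(p)
    within_vars.append(p * (1 - p) / len(y))              # SRS variance only

# Rubin's rules: total variance = within + (1 + 1/m) * between
p_bar = np.mean(props)
W = np.mean(within_vars)
B = np.var(props, ddof=1)
T = W + (1 + 1 / m) * B
print(f"MI estimate of the proportion: {p_bar:.4f} (SE = {np.sqrt(T):.4f})")
```

The combined estimate would then be set against the weighted, non-imputed estimate to gauge the relative change in proportions and in variance estimates described on the slide.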

  12. Results – Common Correlates • Paradata (phone number, proxy) • predictive only of unit NR • Demographic characteristics • Hispanics and Blacks less likely to be NRs (and different on survey reports) but more likely to provide ME • Those with higher education less likely to be NRs • Older sample members more likely to be NRs, to report an abortion, and to provide ME • Health variables from NHIS • Only one of five variables was significantly related to ME • Weight associated with NR and NR bias

  13. Results – Common Correlates cont. • Interviewer characteristics • Those with higher education more likely to complete an interview • Hispanic origin related to less measurement error • Prior interviewing experience not related to either error source • More religious interviewers more likely to complete an interview (but religiosity was not related to ME) • Strongest correlates of ME found only among respondents • Respondents who said they were more comfortable answering with headphones (compared with those for whom it did not matter) were more than 4 times as likely to provide ME in CAPI responses • Those who said they were very likely to give different answers to other survey questions were more than 6 times as likely to provide ME in CAPI abortion reports

  14. Relationship between NR and ME, and Diagnostics • Response propensity quintiles among respondents: quintile 1: 3.8%, quintile 2: 4.3%, quintile 3: 3.7%, quintile 4: 3.6%, quintile 5: 3.5% • Simulated nonresponse among respondents (refusal, more than 10 call attempts): nonrespondents 3.5%, respondents 3.8%
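A hedged sketch of the propensity-quintile diagnostic summarized above; the file, propensity-model covariates, and column names are hypothetical:

```python
# Illustrative only: the propensity model and column names are assumed.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nsfg_cycle5_analysis_file.csv")  # hypothetical file

# Response propensity estimated on the full eligible sample
prop_model = smf.logit(
    "responded ~ age + educ + C(race_eth) + C(phone_listed)", data=df
).fit()
df["p_hat"] = prop_model.predict(df)

# Quintiles of estimated propensity among respondents, then the key estimate
# (lifetime abortion reported in CAPI) within each quintile
resp = df[df["responded"] == 1].copy()
resp["quintile"] = pd.qcut(resp["p_hat"], 5, labels=[1, 2, 3, 4, 5])
print(resp.groupby("quintile", observed=True)["abortion_capi"].mean().round(3))
```

Flat estimates across quintiles (as in the 3.5% to 4.3% range shown above) would suggest little direct association between response propensity and the abortion estimate.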

  15. Results – Relative Bias

  16. Results – Standard Errors

  17. Results – Relative Bias with Ever Refused and 10+ Call Attempts

  18. Model: [path diagram revisited] topic threat as a common cause of response propensity (P) and measurement error (ε) linking the reported value Y to the true value Y*

  19. Summary • Lack of direct association between NR and ME • Correlates? (fewer process variables in PUF) • Survey protocol? • Conditional nonresponse? (NSFG cycle 5) • Some common correlates of NR and ME, consistent with prior findings on abortion experiences based on topic threat • Many non-common correlates • ME found to be the dominant source of error compared to unit NR

  20. Discussion Questions • What about surveys without rich sampling frames? • What kind of sensitivity analyses can be performed when comparing the magnitude of errors? Poor estimation of one error source may erroneously lead to the conclusion that another error source is dominant. • Could findings be combined from multiple studies in order to better identify the magnitudes of errors?
