TY - JOUR
T1 - Evaluating the measurement error of interviewer observed paradata
AU - Sinibaldi, Jennifer
AU - Durrant, Gabriele B.
AU - Kreuter, Frauke
N1 - Funding Information:
Jennifer Sinibaldi is a researcher and GradAB scholar at the Institute for Employment Research (IAB), Nürnberg, Germany. Gabriele B. Durrant is a senior lecturer at the Southampton Statistical Sciences Research Institute, University of Southampton, Southampton, England. Frauke Kreuter is an associate professor in the Joint Program in Survey Methodology at the University of Maryland, College Park, MD, USA; director of the Statistical Methods Research Department (KEM) at the Institute for Employment Research (IAB) in Nürnberg, Germany; and Professor of Statistics at Ludwig-Maximilians-Universität in Munich, Germany. Part of the research was funded by the UK Economic and Social Research Council (ESRC), research grant “The Use of Paradata in Cross-Sectional and Longitudinal Research” [RES-062-23-2997]. This work contains statistical data from the Office for National Statistics (ONS), which is Crown copyright and reproduced with the permission of the Controller of Her Majesty’s Stationery Office and Queen’s Printer for Scotland. The use of the ONS statistical data in this work does not imply the endorsement of the ONS in relation to the interpretation or analysis of the statistical data. This work uses research data sets that may not exactly reproduce National Statistics aggregates. The authors thank the Institute for Employment Research (IAB) for funding the research visit from Gabriele Durrant to facilitate joint work on this paper. *Address correspondence to Jennifer Sinibaldi, IAB, Regensburger Straße 104, 90478 Nürnberg, Germany; e-mail: [email protected].
PY - 2013
Y1 - 2013
N2 - As survey researchers have begun exploiting paradata, for example for the correction of nonresponse bias, the quality of these data has come into question. Inaccurate information is likely to affect the resulting statistics and conclusions drawn from such data. This paper focuses on one type of paradata, observations made by interviewers during the data-collection process, and assesses the quality of these observations by examining their measurement error properties. The analysis uses the UK Census Nonresponse Link Study, which links interviewers' observations collected on six major UK surveys with Census data. Comparing five interviewer observations with self-reports from the Census, the accuracy of the observations for both respondents and nonrespondents to the surveys is evaluated. A multilevel modeling approach is used to explore under which conditions the interviewers' observations match the reports on the Census forms, accounting for the clustering of sample members within interviewers and areas. The analysis finds that the overall percent agreement between the observations and the Census is generally high, ranging from 87 to 98 percent. The type of housing structure and the final result code are significantly associated with measurement error. For four of the five observations, there is evidence that the interviewer significantly influences the level of measurement error, even after controlling for household, interviewer, and area characteristics. The results presented here will inform future analyses assessing the quality of interviewers' observations.
UR - http://www.scopus.com/inward/record.url?scp=84875635220&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84875635220&partnerID=8YFLogxK
U2 - 10.1093/poq/nfs062
DO - 10.1093/poq/nfs062
M3 - Review article
AN - SCOPUS:84875635220
SN - 0033-362X
VL - 77
SP - 173
EP - 193
JO - Public Opinion Quarterly
JF - Public Opinion Quarterly
IS - S1
ER -