TY - JOUR
T1 - Comparison of appropriateness ratings for cataract surgery between convened and mail-only multidisciplinary panels
AU - Tobacman, Joanne K.
AU - Scott, Ingrid U.
AU - Cyphert, Stacey T.
AU - Zimmerman, M. Bridget
PY - 2001
Y1 - 2001
N2 - Background. In this article, the authors determine the reproducibility of appropriateness ratings for cataract surgery between a multidisciplinary physician panel that convened and a multidisciplinary physician panel that completed ratings by mail. Methods. Eighteen panelists, who constituted 2 distinct multidisciplinary panels, rated 2894 clinical scenarios as an appropriate, inappropriate, or uncertain indication to perform cataract surgery. Each panel's summary score for each scenario was calculated. Weighted kappa values were determined to assess the level of agreement between the ratings of the 2 panels. Results. The panels had a substantial level of agreement overall, with a weighted kappa statistic of 0.64. There was agreement on about 68% of the scenarios, and serious disagreement, in which one panel rated an indication appropriate and the other rated it inappropriate, occurred in only 1% of the ratings. Conclusion. There was substantial agreement about the ratings by the 2 panels. The panel that convened rated fewer scenarios uncertain and more appropriate, suggesting the impact of group dynamics and face-to-face discussion on resolution of uncertainty.
UR - http://www.scopus.com/inward/record.url?scp=85047695645&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85047695645&partnerID=8YFLogxK
U2 - 10.1177/0272989X0102100607
DO - 10.1177/0272989X0102100607
M3 - Article
C2 - 11760106
AN - SCOPUS:85047695645
SN - 0272-989X
VL - 21
SP - 490
EP - 497
JO - Medical Decision Making
JF - Medical Decision Making
IS - 6
ER -