Disagreement in interpretation: a method for the development of benchmarks for quality assurance in imaging

J Am Coll Radiol. 2004 Mar;1(3):212-7. doi: 10.1016/j.jacr.2003.12.017.

Abstract

Purpose: To calculate disagreement rates by radiologist and modality to develop a benchmark for use in the quality assessment of imaging interpretation.

Methods: Data were obtained from double readings of 2% of daily cases, performed for quality assurance (QA) between 1997 and 2001 by radiologists at a group practice in Dallas, Texas. Differences in disagreement rates across radiologists, adjusted for case mix, were examined for statistical significance using simple comparisons of means and multivariate logistic regression.
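The article itself publishes no code; the following is a minimal sketch of the kind of case-mix-adjusted multivariate logistic regression described above, fit with statsmodels on simulated double-read data. All column names, category labels, and the simulated disagreement probabilities are hypothetical, not taken from the study.

```python
# Minimal sketch (not the authors' code): a multivariate logistic
# regression of per-case disagreement on radiologist and modality,
# with a case-mix adjustment, as described in the Methods.
# All data below are simulated; names and rates are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000  # simulated double-read cases

cases = pd.DataFrame({
    "radiologist": rng.choice(list("ABCDEFGHIJ"), size=n),
    "modality": rng.choice(
        ["general", "dx_mammo", "screen_mammo", "ultrasound"], size=n),
    "case_mix": rng.choice(["routine", "complex"], size=n),
})

# Simulated per-modality disagreement probabilities, loosely in the
# 3-6% range the abstract reports.
base = {"general": 0.030, "dx_mammo": 0.036,
        "screen_mammo": 0.058, "ultrasound": 0.041}
p = cases["modality"].map(base).to_numpy()
cases["disagree"] = rng.binomial(1, p)  # 1 = second reader disagreed

# Categorical effects for radiologist and modality, plus a case-mix
# adjustment term, on the binary disagreement outcome.
model = smf.logit(
    "disagree ~ C(radiologist) + C(modality) + C(case_mix)",
    data=cases,
).fit(disp=False)
print(model.summary())
```

The fitted coefficients on the radiologist and modality dummies correspond to the reader- and modality-level differences the study tests for significance.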

Results: In 6703 cases read by 26 radiologists, the authors found an overall disagreement rate of 3.48%: 3.03% for general radiology, 3.61% for diagnostic mammography, 5.79% for screening mammography, and 4.07% for ultrasound. Among the 10 radiologists with at least 20 cases, disagreement rates ranged from 2.04% to 6.90%. Multivariate analysis found that, controlling for other factors, both the individual radiologist and the imaging modality contributed statistically significantly to differences in disagreement rates.
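Because the abstract reports rates but not counts, a quick consistency check and interval estimate can be sketched from the overall figures alone. In the sketch below, the disagreement count (about 233 of 6703) is inferred from the reported 3.48% rate, not stated in the source.

```python
# Sketch: recover the approximate overall disagreement count implied
# by the reported 3.48% rate in 6703 cases, and attach a 95% Wilson
# confidence interval. The count is inferred, not reported.
from statsmodels.stats.proportion import proportion_confint

n_cases = 6703
n_disagree = round(0.0348 * n_cases)  # ~233 inferred disagreements

rate = n_disagree / n_cases
lo, hi = proportion_confint(n_disagree, n_cases, alpha=0.05,
                            method="wilson")
print(f"overall disagreement rate = {rate:.2%} "
      f"(95% CI {lo:.2%} to {hi:.2%})")
```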

Conclusion: Disagreement rates varied by modality and by radiologist. Double-reading studies such as this are a useful tool for rating the quality of imaging interpretation and for establishing benchmarks for QA.

MeSH terms

  • Benchmarking*
  • Diagnostic Errors / statistics & numerical data*
  • Diagnostic Imaging / standards*
  • Group Practice / standards*
  • Health Care Surveys
  • Humans
  • Image Interpretation, Computer-Assisted / standards*
  • Quality Assurance, Health Care
  • Quality Indicators, Health Care*
  • Texas