Computer programs for the concordance correlation coefficient

https://doi.org/10.1016/j.cmpb.2007.07.003

Abstract

The CCC macro is presented for computation of the concordance correlation coefficient (CCC), a common measure of reproducibility. The macro has been produced in both SAS and R, and a detailed presentation of the macro input and output for the SAS program is included. The macro provides estimation of three versions of the CCC, as presented by Lin [L.I.-K. Lin, A concordance correlation coefficient to evaluate reproducibility, Biometrics 45 (1989) 255–268], Barnhart et al. [H.X. Barnhart, J.L. Haber, J.L. Song, Overall concordance correlation coefficient for evaluating agreement among multiple observers, Biometrics 58 (2002) 1020–1027], and Williamson et al. [J.M. Williamson, S.B. Crawford, H.M. Lin, Resampling dependent concordance correlation coefficients, J. Biopharm. Stat. 17 (2007) 685–696]. It also provides bootstrap confidence intervals for the CCC, as well as for the difference in CCCs for both independent and dependent samples. The macro is designed for balanced data only. Detailed explanations of the computations involved and of the macro variable definitions are provided in the text. Two biomedical examples are included to illustrate how easily the macro can be applied.

Introduction

In the health sciences, it is often necessary to study the reproducibility of continuous measurements made using a certain diagnostic tool or method. As technology brings forth new tools and methods, we are interested in evaluating the consistency of evaluations made using the new method as well as comparing this measure to the current gold standard if one exists. The concordance correlation coefficient (CCC) provides a means for examining the reproducibility of continuous measurements made by multiple raters using a single method or by two or more raters using two methods. Several other reproducibility measures are available, such as the Pearson correlation coefficient, the intraclass correlation coefficient [4], [5], and the within-subject coefficient of variation [6]. In general, these measures do not address both precision and accuracy as the concordance correlation coefficient does; however, the equivalency and similarities of the intraclass correlation coefficient to the concordance correlation coefficient under certain scenarios have been discussed by Nickerson [7], Carrasco and Jover [8], and Barnhart et al. [9]. The CCC measures how far the fitted linear relationship of two variables X and Y deviates from the concordance line (accuracy) and how far each observation deviates from the fitted line (precision).

There are several forms of the concordance correlation coefficient (CCC). The CCC for two raters evaluating a single method was presented by Lin [1], [10]. Barnhart et al. [2] considered an overall CCC for multiple raters evaluating a single method, which is equivalent to the functions presented by Lin [1], [11] and King and Chinchilli [12]. Williamson et al. [3] examined the agreement between two methods for multiple raters. These forms of the CCC can be estimated from the means, variances, and covariances of the ratings. As an alternative to estimation by the method of moments, Carrasco and Jover [8] proposed estimating the CCC using variance components from a mixed model. Several methods have been proposed for inference regarding the CCC. For two raters evaluating a single method, Lin [1] proposed an asymptotic approach for computing variance estimates. For the overall CCC for multiple raters evaluating a single method, King and Chinchilli [12] conducted inference using a U-statistics approach, while Barnhart et al. [2] explored both a GEE and a bootstrap approach. Williamson et al. [3] explored permutation testing and the bootstrap for agreement between two methods for multiple raters.
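To make the moment-based estimation concrete, the overall CCC for J raters of a single method is often written in a pairwise form that reduces to Lin's two-rater coefficient when J = 2. The sketch below is an illustrative Python version under that assumption; the published macro itself is written in SAS and R, and the exact estimator of Barnhart et al. [2] may differ in details such as small-sample scaling.

```python
import numpy as np
from itertools import combinations

def overall_ccc(ratings):
    """Pairwise moment-based overall CCC.

    ratings: (n_subjects, J) array, one column per rater.
    Uses 1/n moment estimators of the means, variances, and covariances.
    """
    r = np.asarray(ratings, dtype=float)
    n, J = r.shape
    mu = r.mean(axis=0)
    S = np.cov(r, rowvar=False, bias=True)   # 1/n covariance matrix
    pairs = list(combinations(range(J), 2))
    num = 2 * sum(S[j, k] for j, k in pairs)
    den = (J - 1) * np.trace(S) + sum((mu[j] - mu[k]) ** 2 for j, k in pairs)
    return num / den
```

For J = 2 the numerator is 2σ₁₂ and the denominator is σ₁² + σ₂² + (μ₁ − μ₂)², i.e. Lin's coefficient; identical columns give an overall CCC of exactly 1.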

Here we describe the CCC macro, written in SAS [13], which is designed to estimate all three forms of the CCC. The macro also provides confidence intervals for these estimates as well as for the difference in two CCCs. Where applicable, an asymptotic confidence interval is computed both for the estimate of the CCC and for the estimate of the difference in CCCs [1]. Otherwise, bootstrap confidence intervals are computed with the CCC macro [2], [3], [14]. The CCC macro was also written in R with the same input and output [15]. General examples of the macro call in R are included in Section 3.3, detailing the required parameters and output for each analysis, but all of the practical examples are presented in SAS.

Estimation of concordance correlation coefficients

A major component of the CCC macro is to provide an estimate of the concordance correlation coefficient (CCC). The formula used to compute the CCC depends on the number of raters and the number of methods specified by the user. When a single method is evaluated by two raters, the CCC proposed by Lin [1] is used:

$$\rho_c = \frac{2\sigma_{12}}{\sigma_1^2 + \sigma_2^2 + (\mu_1 - \mu_2)^2},$$

where $\mu_1$ and $\sigma_1^2$ denote the mean and variance for the first rater, $\mu_2$ and $\sigma_2^2$ denote the mean and variance for the second rater, and $\sigma_{12}$ is the covariance between the two raters' measurements.
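Lin's two-rater estimator amounts to a few lines of arithmetic on the sample moments. The following is an illustrative Python sketch (the published macro itself is written in SAS and R), using 1/n moment estimators of the means, variances, and covariance as in Lin [1]:

```python
import numpy as np

def lin_ccc(x, y):
    """Moment-based estimate of Lin's concordance correlation coefficient.

    x, y: ratings of the same subjects by two raters (equal length).
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mu1, mu2 = x.mean(), y.mean()
    s1, s2 = x.var(), y.var()            # 1/n (biased) variances
    s12 = ((x - mu1) * (y - mu2)).mean() # 1/n covariance
    return 2 * s12 / (s1 + s2 + (mu1 - mu2) ** 2)
```

For perfectly concordant ratings (x equal to y) the estimate is exactly 1, while a constant shift between raters lowers it through the (μ₁ − μ₂)² penalty in the denominator, capturing the accuracy component that the Pearson correlation ignores.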

Macro overview

The CCC macro is designed to perform a variety of analyses pertaining to the concordance correlation coefficient. The macro can provide estimates of the overall CCC along with confidence intervals, either asymptotic or bootstrap, for one method with multiple raters, or for two methods with multiple raters (i.e. an experimental method and a gold standard). It will also perform estimation of the difference in two CCCs under the assumptions of both independence and dependence, and calculate
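The bootstrap intervals mentioned above can be illustrated with a simple percentile bootstrap that resamples subjects with replacement, keeping each subject's ratings together so the dependence between raters is preserved. This is an illustrative Python sketch, not the SAS/R macro; the `ccc` helper here is a hypothetical two-rater estimator included only to make the example self-contained.

```python
import numpy as np

def ccc(x, y):
    """Lin's moment-based two-rater CCC (1/n moments)."""
    m1, m2 = x.mean(), y.mean()
    s12 = ((x - m1) * (y - m2)).mean()
    return 2 * s12 / (x.var() + y.var() + (m1 - m2) ** 2)

def bootstrap_ci(x, y, stat=ccc, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for an agreement statistic.

    Subjects are resampled with replacement; each subject's two
    ratings stay paired in every bootstrap replicate.
    """
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)   # resample subject indices
        reps[b] = stat(x[idx], y[idx])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])
```

A difference in two dependent CCCs can be handled the same way: resample subjects once per replicate and compute both coefficients from the same resampled subjects before taking their difference.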

Biochemical in vitro assays

In a study of biochemical in vitro assays, researchers were interested in the reproducibility of toxicity measurements made by two different assays: cellular adenosine triphosphate activity using cell line 76 (ATP-76) and cellular adhesion using cell line 74 (CLA-74) [1]. The percent cell function measured by each assay at two independent trials conducted 1 week apart was recorded for 10 materials of varying toxicity. We are interested in assessing the agreement of the measurements produced

Macro availability and run time

The CCC macro written in SAS [13], R [15], or S-PLUS [18] can be obtained by directly contacting the authors or by accessing the following websites: http://www.statisticaldisplays.org or http://www.personal.psu.edu/hxl28/research/CCCprogram. The SAS macro was written in v9, but because some of the IML functions available in SAS v9 are not available in SAS v8, a v8 macro also was created. This macro performs the same functions, but requires more computing time, and can be obtained following the

Acknowledgements

Sara Crawford's research is supported by an appointment to the Research Participation Program at the Centers for Disease Control and Prevention, National Center for Infectious Diseases, Division of Parasitic Diseases administered by the Oak Ridge Institute for Science and Education through an interagency agreement between the U.S. Department of Energy and CDC. The research of Andrzej Kosinski and Huiman Barnhart is supported by the National Institutes of Health Grant R01 MH70028. We thank two

References (18)

  • J. Lee et al., Statistical evaluation of agreement between two methods for measuring a quantitative variable, Comput. Biol. Med. (1989)
  • L.I.-K. Lin, A concordance correlation coefficient to evaluate reproducibility, Biometrics (1989)
  • H.X. Barnhart et al., Overall concordance correlation coefficient for evaluating agreement among multiple observers, Biometrics (2002)
  • J.M. Williamson et al., Resampling dependent concordance correlation coefficients, J. Biopharm. Stat. (2007)
  • J.L. Fleiss, The Design and Analysis of Clinical Experiments (1986)
  • H. Quan et al., Assessing reproducibility by the within-subject coefficient of variation with random effects models, Biometrics (1996)
  • C.A.E. Nickerson, A note on “a concordance correlation coefficient to evaluate reproducibility”, Biometrics (1997)
  • J.L. Carrasco et al., Estimating the generalized concordance correlation coefficient through variance components, Biometrics (2003)
  • H.X. Barnhart et al., An overview on assessing agreement with continuous measurement, J. Biopharm. Stat. (2007)
There are more references available in the full text version of this article.
