The Concordance Correlation Coefficient (CCC) was introduced by Lin (1989) to measure agreement between two datasets. It is useful for assessing reproducibility in data collection, particularly where there is a degree of subjectivity involved, as it allows you to quantitatively compare the results for two sets of observations of the same thing (e.g., two different people counting microplastics using a hot needle test).
CCC is different to a simple correlation analysis (e.g., Pearson’s, Spearman’s Rank) because it accounts for both precision (i.e., how well the variables follow a linear trend) and accuracy (i.e., whether values in one dataset are systematically higher or lower than the other). CCC can therefore pick up issues such as a bias in the magnitude of one of the datasets, which would be missed in a traditional correlation analysis. In this sense, CCC provides a more rounded measure of agreement between the datasets, whereas correlation only measures the strength of a linear relationship.
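To illustrate the difference, here is a minimal Python sketch (not the tool's own code, which is written in JavaScript) comparing Pearson's correlation with Lin's CCC, ρc = 2·s_xy / (s_x² + s_y² + (x̄ − ȳ)²), on two sets of counts where one observer always reads 5 units higher. The function names `ccc` and `pearson` are illustrative, not part of any library:

```python
from statistics import mean
from math import sqrt

def ccc(x, y):
    """Lin's concordance correlation coefficient for two paired lists."""
    mx, my = mean(x), mean(y)
    n = len(x)
    sx2 = sum((xi - mx) ** 2 for xi in x) / n  # population variance of x
    sy2 = sum((yi - my) ** 2 for yi in y) / n  # population variance of y
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n  # covariance
    # the (mx - my)**2 term in the denominator penalises systematic bias
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

def pearson(x, y):
    """Pearson's correlation coefficient for two paired lists."""
    mx, my = mean(x), mean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sx = sqrt(sum((xi - mx) ** 2 for xi in x))
    sy = sqrt(sum((yi - my) ** 2 for yi in y))
    return sxy / (sx * sy)

x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [v + 5 for v in x]  # second observer systematically counts 5 higher

print(round(pearson(x, y), 3))  # → 1.0   (correlation is blind to the offset)
print(round(ccc(x, y), 3))      # → 0.398 (CCC is penalised by the bias)
```

Both datasets follow a perfect linear trend, so Pearson's r is 1; the CCC, by contrast, is pulled well below 1 by the constant offset between them.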
This tool is intended to provide a simple interface that allows students to calculate the Concordance Correlation Coefficient (CCC) with a 95% confidence interval (CI) for two sets of paired data (dataset X and dataset Y).
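For readers who want to see how such a confidence interval can be obtained, below is a hedged Python sketch using the large-sample approach from Lin (1989): the CCC is transformed with Fisher's z-transform, an interval is built on the z scale using Lin's asymptotic variance, and the bounds are transformed back. This is an illustration of one common method, not necessarily the exact calculation the tool performs; results should be checked against an established implementation (e.g., `epi.ccc` in R's epiR package):

```python
from math import atanh, tanh, sqrt
from statistics import mean

def ccc_with_ci(x, y, z_crit=1.96):
    """Lin's CCC with an approximate 95% CI (Fisher z-transform).

    Uses the asymptotic variance from Lin (1989); a sketch, not a
    validated implementation.
    """
    n = len(x)
    mx, my = mean(x), mean(y)
    sx2 = sum((xi - mx) ** 2 for xi in x) / n
    sy2 = sum((yi - my) ** 2 for yi in y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    r = sxy / sqrt(sx2 * sy2)                    # Pearson's r
    pc = 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)  # Lin's CCC
    u = (mx - my) / (sx2 * sy2) ** 0.25          # location-shift term
    # Lin (1989) asymptotic variance of the z-transformed CCC
    var_z = (1 / (n - 2)) * (
        (1 - r ** 2) * pc ** 2 / ((1 - pc ** 2) * r ** 2)
        + 2 * pc ** 3 * (1 - pc) * u ** 2 / (r * (1 - pc ** 2) ** 2)
        - pc ** 4 * u ** 4 / (2 * r ** 2 * (1 - pc ** 2) ** 2)
    )
    half = z_crit * sqrt(var_z)
    z = atanh(pc)  # Fisher z-transform of the CCC
    return pc, tanh(z - half), tanh(z + half)

# hypothetical paired counts from two observers
x = [12, 15, 9, 20, 11, 17, 14, 8, 19, 13]
y = [11, 16, 10, 18, 12, 18, 13, 9, 21, 12]
pc, lo, hi = ccc_with_ci(x, y)
print(f"CCC = {pc:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

Transforming back through `tanh` keeps the interval bounds inside the valid range of −1 to 1, which a symmetric interval computed directly on the CCC would not guarantee.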
Data should be paired (i.e., each value in the first dataset should correspond to the value at the same position in the second). The easiest thing to do is organise your data in Excel, then paste the columns into the boxes below (one column per box).
The CCC ranges from -1 to 1, where 1 indicates perfect agreement between the datasets, 0 indicates no agreement, and -1 indicates perfect negative agreement.
The closer the CCC is to 1, the stronger the agreement between datasets. If you are using this to test a method, then you should consider:
Note that the results text (above) suggests common wording to interpret the strength of the relationship, but the distinctions between these terms (weak, strong, very strong, etc.) are rather arbitrary, and the context of the results should be considered.
Powered by Simple Statistics
© 2025 Prof. Jonny Huck