Sorry - kids wanted dinner
I'll start from the beginning and repeat it - that makes more sense ...
Intraclass Correlation Coefficients (ICCs) are hard to understand for us plain commoners, especially when the focus is not primarily on "classical" reliability,
\[ \text{Formula 1 - ICC(3,1) by Shrout and Fleiss or ICC(C,1) by McGraw and Wong} = {\sigma^2_\text{Subject} \over \sigma^2_\text{Subject} + \sigma^2_\text{Error}} \]
but rather on the "expected trial-to-trial noise in the data", as in a test-retest setting, i.e. seeing how closely the scores agree with one another when trials are repeated.
In that case it is suggested (see this excellent paper) that one also include the systematic error due to trials in the denominator, so that
\[ \text{Formula 2 - ICC(2,1) by Shrout and Fleiss or ICC(A,1) by McGraw and Wong} = {\sigma^2_\text{Subject} \over \sigma^2_\text{Subject} + \sigma^2_\text{Trials} + \sigma^2_\text{Error}} \]
Moreover, one quickly gets pointed towards a concept called "agreement" or "absolute reliability" (as opposed to "relative reliability"); more often than not, this is reported as the "Standard Error of Measurement" (SEM; not to be confused with the Standard Error of the Mean).
We can calculate the SEM by different methods, but most involve the ICC or components thereof - such as variance components - for example (a small Stata sketch follows the list):
- \[ \text{SEM} = \text{Pooled standard deviation} \times \sqrt{1 - \text{ICC}} \]
- \[ \text{SEM} = \sqrt{\text{MS}_\text{Error}} = \sqrt{\sigma^2_\text{Error}} \]
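A minimal sketch of the second form, assuming the dataex data listed at the end of this post is in memory (variable names outcome, person_row, trial_column as there):
Code:
* two-way ANOVA with subjects and trials; after -anova-, e(rmse) = sqrt(MS_Error)
quietly anova outcome person_row trial_column
display "SEM = " e(rmse)
This should equal the square root of var(Residual) in the mixed output further down.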
We can use the inbuilt Stata commands such as icc to get all the Intraclass Correlation Coefficients we want, with confidence intervals and all.
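For example (a sketch using the example data at the end of the post; I am going from memory on the option names mixed, absolute and consistency, so check [R] icc):
Code:
* two-way model, persons as targets and trials in the "rater" slot
* default = random trials, absolute agreement, i.e. ICC(A,1) / ICC(2,1)
icc outcome person_row trial_column
* consistency version with trials treated as fixed, i.e. ICC(C,1) / ICC(3,1)
icc outcome person_row trial_column, mixed consistency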
We can even use anova, or better yet wsanova (net install sg103.pkg), to churn out mean squares and plug those into formulas like the one below by hand - just to see how things "really" work; it's nice for didactics as well.
\[ {\sigma^2_\text{Subject} \over \sigma^2_\text{Subject} + \sigma^2_\text{Error}} = {\text{MS}_\text{Subject} - \text{MS}_\text{Error} \over \text{MS}_\text{Subject} + (\text{Number of trials} - 1) \times \text{MS}_\text{Error}} \]
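A hand-rolled sketch of that formula with plain anova (assumes the dataex data from the end of the post is in memory; person_row is entered first, so its mean square sits in e(ss_1)/e(df_1)):
Code:
quietly anova outcome person_row trial_column
scalar ms_subject = e(ss_1)/e(df_1)   // mean square for person_row
scalar ms_error   = e(rss)/e(df_r)    // residual (error) mean square
scalar k = 2                          // number of trials in the example data
display "ICC(3,1) = " (ms_subject - ms_error)/(ms_subject + (k - 1)*ms_error)
With balanced data such as this, the result should match the variance-component route below (0.5377...).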
But at times one might want to use mixed to do the same job. Why? Because it is more flexible and can handle missing values.
A lot of text for the following short query. Using the data below (listed at the very end), I run:
Code:
* trial_column in the fixed part, random intercept for person_row; REML; variances reported
mixed outcome trial_column || person_row : , reml var
and get

Code:
[...]
------------------------------------------------------------------------------
  Random-effects Parameters  |   Estimate   Std. Err.     [95% Conf. Interval]
-----------------------------+------------------------------------------------
person_row: Identity         |
                  var(_cons) |   66.42859    53.0146      13.90072    317.448
-----------------------------+------------------------------------------------
               var(Residual) |   57.10714   30.52504      20.03107   162.8084
------------------------------------------------------------------------------

var(_cons) is exactly σ²_Subject and var(Residual) is exactly σ²_Error; plug those into Formula 1 above and you get an ICC(3,1) aka ICC(C,1) of 0.53772775.
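If you would rather not retype the estimates, the variance components can be pulled from e(b) directly after mixed - a sketch that assumes the log standard deviations are stored under the equation names lns1_1_1 and lnsig_e (worth verifying with matrix list e(b)):
Code:
* exp(2 * log SD) = variance
scalar var_subject = exp(2*_b[lns1_1_1:_cons])   // var(_cons) for person_row
scalar var_error   = exp(2*_b[lnsig_e:_cons])    // var(Residual)
display "ICC(3,1) = " var_subject/(var_subject + var_error)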
Of course, you could have just used the postestimation command estat icc.
But what if you want to calculate the ICC(2,1) aka ICC(A,1) - i.e. if you are more interested in test-retest "reliability", or in the SEM for agreement - which requires information on σ²_Trial (see here)? There you would need
Code:
* Example generated by -dataex-. To install: ssc install dataex
* Data from Table 1, Example data set Trial B - from excellent paper
clear
input int outcome byte(person_row _person_id trial_column)
166 1 1 1
168 2 2 1
160 3 3 1
150 4 4 1
147 5 5 1
146 6 6 1
156 7 7 1
155 8 8 1
160 1 1 2
172 2 2 2
142 3 3 2
159 4 4 2
135 5 5 2
143 6 6 2
147 7 7 2
168 8 8 2
end
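For what it is worth, one way to get an estimate of σ²_Trial out of mixed would be to treat trials as a crossed random effect - just a sketch of that idea, not necessarily the answer to the question above:
Code:
* crossed random effects: a random effect per trial via _all: R.trial_column,
* plus the usual random intercept for persons
mixed outcome || _all: R.trial_column || person_row: , reml var
* the three variance components would then go into Formula 2:
* ICC(A,1) = var_Subject / (var_Subject + var_Trial + var_Error)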