I made a comment on this issue yesterday in another thread about -xtmixed-. Since you are keeping this thread active, I thought I'd adapt some of that content here.
Basically, I speculated on why adjusted R-squared is not used for random-effects models and suggested using information criteria such as AIC or BIC instead.
My reasoning is this:
Adjusted R-squared in an OLS context basically corrects for the number of predictors. Below is the formula from the Wikipedia page:

adjusted R2 = 1 - (1 - R2) * (n - 1) / (n - p - 1)

where R2 is the unadjusted R-squared, n is the number of observations, and p is the number of predictors (excluding the constant).
For a random-effects model, the parameters are separated into a "fixed" and a "random" component. The former comes from the ordinary regressors in the model, while the latter comes from the variances (and covariances, if any) of the random-effect terms. Collapsing these two sources into a single count p is debatable, and no one seems to have done it for R-squared (in fact, people do not even agree on how to calculate R-squared for random-effects models).
The AIC and BIC suffer from the same issue, but at least there is a generally agreed-upon convention for calculating them (namely, add the parameter counts from the two components together).
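For instance, here is a minimal sketch using the nlswork example dataset that ships with Stata (the outcome and regressors are just illustrative choices of mine). -estat ic- works here because -xtmixed- fits the model by maximum likelihood, and its parameter count covers both the fixed coefficients and the random-effect variance components:

webuse nlswork, clear
xtmixed ln_wage grade age ttl_exp || idcode:
estat ic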
Now, if we insisted on using the same logic to calculate an adjusted R-squared for -xtreg, re-, the p in the formula above would simply be the number of ordinary regressors plus one (the one coming from sigma_u in the xtreg output). -ereturn list- gives the rest of the information needed: the number of ordinary regressors in e(rank), n in e(N), and the overall R-squared in e(r2_o).
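As a rough sketch of that calculation (again on the nlswork example data with illustrative regressors; treating e(r2_o) as the R-squared to adjust and taking p = e(rank) + 1 follows the reasoning above, not any official definition):

webuse nlswork, clear
xtset idcode year
xtreg ln_wage grade age ttl_exp tenure, re

scalar n  = e(N)                // number of observations
scalar p  = e(rank) + 1         // ordinary regressors plus one for sigma_u (per the logic above)
scalar r2 = e(r2_o)             // overall R-squared
scalar r2_adj = 1 - (1 - r2)*(n - 1)/(n - p - 1)
display "adjusted (overall) R-squared = " r2_adj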
*These are just developing thoughts. Feel free to correct me if something does not make sense.