Hi everyone,
I am trying to plot cut point coefficients from a graded response model along with their standard errors. I understand that Stata fits the model via gsem, and I have no trouble converting the coefficients from the slope-intercept parameterization (used by gsem) to the IRT parameterization, but I cannot figure out how to reproduce the IRT-parameterized standard errors that Stata reports. I could not find the formula in the gsem or grm documentation. Can anyone point me in the right direction?
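To be explicit about the coefficient transformation I am using (standard GRM notation: θ is the latent trait, a the discrimination, κ_k the gsem cutpoints, b_k the grm cut point coefficients):

$$\Pr(y \ge k \mid \theta) \;=\; \operatorname{invlogit}(a\theta - \kappa_k) \;=\; \operatorname{invlogit}\{a(\theta - b_k)\}, \qquad b_k = \kappa_k / a.$$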
To give a concrete example, the discrimination, first two cut point coefficients, and standard errors for the first item are below:
| parameter | gsem coef. | gsem SE | grm coef. | grm SE |
| --- | --- | --- | --- | --- |
| discrimination | 2.431 | 0.071 | 2.431 | 0.071 |
| cut point 1 | -7.555 | 0.201 | -3.108 | 0.080 |
| cut point 2 | -6.937 | 0.172 | -2.853 | 0.067 |

Dividing the first cut point gsem coefficient by the discrimination parameter (-7.555/2.431) reproduces the grm coefficient, but 0.201/2.431 = 0.083, not 0.080. For the other cut points, dividing the gsem standard error by the discrimination parameter lands even further from the reported grm standard error, so I don't believe this is simply a matter of rounding. How does Stata re-parameterize the gsem standard errors for the IRT parameterization?
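For what it's worth, my working hypothesis is that the grm standard error is a delta-method standard error for b_k = κ_k/a, computed from the full covariance matrix of (κ_k, a) rather than from the cut point's standard error alone. Plain division reproduces only the first term below, which would explain why 0.201/2.431 = 0.083 overshoots the reported 0.080:

$$\widehat{\operatorname{Var}}(\hat b_k) \;\approx\; \frac{\widehat{\operatorname{Var}}(\hat\kappa_k)}{\hat a^{2}} \;+\; \frac{\hat\kappa_k^{2}\,\widehat{\operatorname{Var}}(\hat a)}{\hat a^{4}} \;-\; \frac{2\,\hat\kappa_k\,\widehat{\operatorname{Cov}}(\hat\kappa_k,\hat a)}{\hat a^{3}}.$$

If that is the mechanism, nlcom after the gsem fit should reproduce the grm standard errors, since nlcom applies exactly this delta-method calculation using e(V). A minimal sketch; the _b[] names are illustrative placeholders, and the actual labels should be read off the gsem output:

```stata
* After the gsem fit of the graded response model:
* delta-method SE for b1 = kappa1 / a, using the full e(V).
* NOTE: the coefficient names below are placeholders -- substitute
* the labels shown in your own gsem output.
nlcom (b1: _b[item1:cut1] / _b[item1:Theta])
```

Comparing the nlcom output against the irt grm table would confirm (or rule out) this explanation.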