Hi. I appreciate Stata's Item Response Theory (IRT) commands; the documentation is quite good on a topic I find difficult. I would be glad for any advice. In part I have a specific question, and in part I just want to make sure I understand the normalization in irt 1pl correctly, because I find it a little easier to use than gsem in cases where both are correct.
The Stata FAQ on the Rasch model explains how to estimate it with either gsem or irt 1pl: https://www.stata.com/support/faqs/s...s/rasch-model/ .
In my example code for this question, I use the De Boeck and Wilson (2004) data from this Stata help page: https://www.stata.com/manuals/irtirt...#irtirt,group() .
Question 1:
In my example, the gsem fit in Model 1 and the irt 1pl fit in Model 2 should be the same model under slightly different normalizations: gsem constrains the discrimination (the loading) to 1, as in the Rasch model, while irt 1pl fixes the latent variance at 1 and estimates a common discrimination. The two fits do give the same log likelihood, and the point estimates essentially agree: multiplying Discrim by Diff for an item in Model 2 recovers (up to sign) the corresponding coefficient from Model 1. My question: why do the standard errors and z statistics disagree between Model 1 and Model 2? (I pasted a sketch of the check I ran after the code below.)
Question 2:
In my example, Model 3 was my attempt to give irt 1pl the same normalization as Model 1 (discrimination constrained to 1), so that no multiplication would be needed to see that the two agree. I am surprised that Model 3 gives different results; I expected it to match Model 1. (A side-by-side comparison of the three fits follows the code below.)
Many thanks!
Parke
Code:
use https://www.stata-press.com/data/r18/masc2, clear
** Model 1: gsem Rasch model **
gsem (Latvar -> (q1-q5)@1), logit nocapslatent latent(Latvar)
** Model 2: IRT 1pl model **
irt 1pl q1-q5
** Model 3: irt 1pl with the discrimination constrained to 1, as in the Rasch model **
irt 1pl q1-q5, cns(a@1)
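To make Question 1 concrete, here is a sketch of the check I ran after fitting Models 1 and 2. The stored parameter names (_b[q1:_cons] from gsem, and _b[q1:Discrim] / _b[q1:Diff] from irt 1pl) are my guesses from the output layout; if they are wrong, refitting with the coeflegend option will show the right ones.
Code:
* Question 1 check (sketch): does -(Discrim * Diff) from Model 2 reproduce
* the q1 intercept from Model 1?  Parameter names below are assumed --
* confirm them with the coeflegend option before relying on this.
use https://www.stata-press.com/data/r18/masc2, clear
* Model 1: gsem Rasch normalization (loadings fixed at 1, latent variance free)
gsem (Latvar -> (q1-q5)@1), logit nocapslatent latent(Latvar)
estimates store m1_gsem
scalar m1_q1cons = _b[q1:_cons]        // q1 intercept under the gsem normalization
* Model 2: irt 1pl normalization (latent variance fixed at 1, common discrimination estimated)
irt 1pl q1-q5
estimates store m2_1pl
* my understanding of the reparameterization: intercept = -(discrimination * difficulty)
display "gsem q1 intercept        = " m1_q1cons
display "-(Discrim * Diff) for q1 = " -(_b[q1:Discrim]*_b[q1:Diff])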
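And for Question 2, here is how I lined the three fits up side by side. This continues from the snippet above (it uses the m1_gsem and m2_1pl results stored there); estimates table and estimates stats are standard postestimation commands, so nothing here is specific to irt.
Code:
* Question 2: fit Model 3 and compare the three sets of results
irt 1pl q1-q5, cns(a@1)
estimates store m3_cns
* coefficients, standard errors, and log likelihoods in one table
estimates table m1_gsem m2_1pl m3_cns, b(%9.4f) se(%9.4f) stats(ll)
* information criteria for the three fits
estimates stats m1_gsem m2_1pl m3_cns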