Greetings,

In some programs (e.g., Mplus), the common factor model can be extended to categorical, binary, and even count indicators of an underlying latent variable. However, a generalized linear modeling approach can also accommodate such a mixture of items, estimated by maximum likelihood, without presuming an underlying latent variable. The latter can be implemented in Stata's gsem, for example, and in some cases not invoking an underlying latent variable makes sense.

The generalized linear modeling approach directly fits a probability model to the observed/measured variables using a data likelihood function, as opposed to the multi-step approach in the common factor model (e.g., computing a correlation matrix S* and then fitting the model to S* using a summary-statistic fit function; cf. Curran-Hancock).

My question is: how does the generalized linear modeling approach estimate parameters for a latent variable (e.g., its mean and variance) when, again, it does not presume an underlying latent variable or construct? How is it producing these?

Thanks,
Saul
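To make the question concrete, here is a minimal sketch (my own illustration, not gsem's actual implementation) of what a full-information data likelihood can look like when a latent normal factor is assumed and integrated out numerically. The model below is a hypothetical 2PL-style binary-item model with one factor eta ~ N(0, 1); the latent variable never appears in the data, only inside the integral, which is approximated by Gauss-Hermite quadrature:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss  # probabilists' Hermite rule

def marginal_loglik(params, Y, n_quad=21):
    """Marginal log-likelihood of binary responses Y (n persons x J items).

    Hypothetical 2PL-style model: P(y_ij = 1 | eta) = logit^{-1}(a_j * eta + b_j),
    with the latent factor eta ~ N(0, 1) integrated out:
        P(y_i) = integral over eta of [ prod_j P(y_ij | eta) ] * phi(eta) d eta.
    params stacks the J loadings a_j followed by the J intercepts b_j.
    """
    J = Y.shape[1]
    a, b = params[:J], params[J:]

    # Quadrature rule for integrals against exp(-x^2/2); rescale the weights
    # so they approximate an expectation under the standard normal density.
    nodes, weights = hermegauss(n_quad)
    weights = weights / np.sqrt(2.0 * np.pi)

    # Item response probabilities at each quadrature node: shape (Q, J).
    lin = nodes[:, None] * a[None, :] + b[None, :]
    p = 1.0 / (1.0 + np.exp(-lin))

    # Conditional log-likelihood of each person's pattern at each node: (n, Q).
    logp_y = (Y[:, None, :] * np.log(p)[None]
              + (1.0 - Y[:, None, :]) * np.log(1.0 - p)[None]).sum(axis=2)

    # Average over nodes (the numerical integral), then sum the person logs.
    return np.log(np.exp(logp_y) @ weights).sum()
```

Maximizing this function over (a, b) (e.g., with a generic optimizer) is a direct data-likelihood fit in the sense above: no correlation matrix S* is ever computed. The latent variable's mean and variance do not appear as free parameters here because they are fixed at 0 and 1 for identification; in a model where they are freed (with a loading fixed instead), they would simply enter the quadrature step, which is how such parameters can be estimated even though eta itself is never observed.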