  • IRT Graded Response Model (testing out reducing number of response options)

    Hello! I am a longtime Stata user, but new to IRT analysis. I have a question about the IRT graded response model (GRM). I have heard from colleagues that IRT can be used to test whether or not it would make sense statistically to reduce the number of response options in a questionnaire. (This could be useful to ease respondent burden.)

    For example, one might be interested in reducing a 5-category Likert scale to a 3-category Likert scale. However, in reading the Stata help and in searching online, I have been unable to find any clear references that describe how to do this. Below please find a hypothetical setup:

    Code:
    use https://www.stata-press.com/data/r18/charity // demonstration data from Stata documentation
    
    tabulate ta1 // tabulate first item
    
    irt grm ta1-ta5 // IRT graded response model
    
    irtgraph icc ta1 // category characteristic curve for first item

    Is there a way within IRT to test whether an item such as ta1, which has 4 response options, could have fewer response options, e.g. 3 or 2? I imagine this would involve testing whether there are statistically significant differences between adjacent difficulty parameters in the GRM, but I can't quite figure out whether this is the right direction and, if so, how specifically to accomplish it.

    Thank you for any thoughts, suggestions, or references to look into.

    Andy

  • #2
    I tend to work more in the CTT framework (item factor analysis) than IRT, but even so, I am not aware of any systematic way of testing what you propose with already existing data. You could run simulations, of course, to look at the issue.

    In the real world, people often make these decisions about collapsing down from 5 to 4 or 3 categories based on data sparsity. For example, if only 0.5% of your sample chose response option 1 on a 5-point scale, then you might consider recoding those responses to option 2. Models for categorical outcomes, whether IRT or regression-based, typically do not behave well when you have very sparse data in a given category.
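
    The collapsing approach above might look like the following sketch, assuming the items are coded 0-3 (check with tabulate first); the variable name ta1_c is my own. Note that because the recoded outcome is different data, the likelihoods of the two models are not on the same scale, so information criteria from estimates stats are at best descriptive here rather than a formal test.

    Code:
    use https://www.stata-press.com/data/r18/charity

    tabulate ta1  // inspect category frequencies before collapsing

    * collapse the (hypothetically sparse) bottom category into the next one
    recode ta1 (0 1 = 1) (2 = 2) (3 = 3), generate(ta1_c)

    irt grm ta1 ta2 ta3 ta4 ta5
    estimates store full

    irt grm ta1_c ta2 ta3 ta4 ta5
    estimates store collapsed

    * side-by-side AIC/BIC; descriptive only, since the outcome changed
    estimates stats full collapsed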



    • #3
      Thank you! I appreciate your thoughtful response, and will think more on this.

