
  • Forced choice conjoint experiment

    Hi,

    I'm analyzing data from a binary discrete choice experiment in which respondents must select one of two alternatives (forced choice) and then rate their satisfaction with each alternative on a 1-5 scale. This exercise is repeated three times, so I have six observations per respondent.

    Following the three-step process for multilevel logistic modeling (Sommet and Morselli, 2017), I first run an intercept-only model to assess the dependence among responses from the same respondent (clustering). However, as you can see below, the intercept is essentially zero, and so is the random-intercept variance.

    Code:
    melogit selected || participant_id:
    
    Mixed-effects logistic regression               Number of obs     =     13,710
    Group variable: participant_id                  Number of groups  =      2,285
    
                                                    Obs per group:
                                                                  min =          6
                                                                  avg =        6.0
                                                                  max =          6
    
    Integration method: mvaghermite                 Integration pts.  =          7
    
                                                    Wald chi2(0)      =          .
    Log likelihood = -9503.0478                     Prob > chi2       =          .
    --------------------------------------------------------------------------------
          selected | Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
    ---------------+----------------------------------------------------------------
             _cons |  -2.89e-17   .0170809    -0.00   1.000     -.033478     .033478
    ---------------+----------------------------------------------------------------
    participant_id |
         var(_cons)|   2.84e-19   4.00e-08                             .           .
    --------------------------------------------------------------------------------
    LR test vs. logistic model: chibar2(01) = 0.00        Prob >= chibar2 = 1.0000
    When I add the attributes (five categorical variables) describing the alternatives to the model, I get coefficients comparable to those from the conjoint command (OLS model), but again I cannot carry out a likelihood ratio test. Unsurprisingly, the intraclass correlation is close to zero in both models. However, when I use the dichotomized rating variable (rating2 = 0 if rating is 1-3, 1 otherwise), I am able to run both models, perform the LR test (see below), and I find a significant intraclass correlation (0.20). The mean of the dichotomized rating is 0.63.

    Does this mean that multilevel logistic regression cannot be used in a forced-choice context, where the outcome is essentially a coin toss (a 50% chance of either alternative being selected)? If so, what would be a more appropriate model? For both the forced-choice and rating-based models, should I also include a task variable indicating which alternatives were compared when the respondent chose the most preferred option (and hence the potential framing when rating each alternative)?
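
    For concreteness, here is roughly what I have in mind for the rating-based model with a task indicator; rating and task stand in for my actual variable names, so this is only a sketch:

    Code:
    * Dichotomize the 1-5 rating (0 if rating is 1-3, 1 if 4-5)
    generate byte rating2 = (rating >= 4) if !missing(rating)

    * Rating-based model with the alternative attributes and a task indicator
    melogit rating2 i.attribute1 i.attribute2 i.attribute3 i.attribute4 ///
        i.attribute5 i.task || participant_id:

    * Intraclass correlation implied by the random-intercept variance
    estat icc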

    I also note that the three-step process recommends demeaning (centering) both level-1 and level-2 predictors, although I haven't seen this done elsewhere. Any thoughts on this approach?
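
    For what it's worth, my reading of the centering step is along these lines (x1 and z1 are placeholder names for a level-1 and a level-2 predictor, respectively):

    Code:
    * Cluster-mean (within-respondent) centering of a level-1 predictor
    bysort participant_id: egen double x1_mean = mean(x1)
    generate double x1_cwc = x1 - x1_mean

    * Grand-mean centering of a level-2 (respondent-level) predictor
    summarize z1, meanonly
    generate double z1_gmc = z1 - r(mean)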

    Finally, the three-step process also flags that when interactions are introduced in the model, the coefficient of the product term differs from the interaction effect (Kolasinski & Siegel, 2010), and that the marginal effect also differs from the interaction effect (Ai & Norton, 2003; Karaca-Mandic, Norton & Dowd, 2012). Any suggestions on how to perform heterogeneity analysis with interactions in a way that remains interpretable (or is margins the way to go)?
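
    Concretely, I was considering something along these lines, where groupvar is a placeholder for the respondent-level moderator of interest:

    Code:
    * Interaction between an attribute and a respondent-level moderator,
    * interpreted on the probability scale rather than via the product term
    melogit selected i.attribute1##i.groupvar i.attribute2 i.attribute3 ///
        i.attribute4 i.attribute5 || participant_id:

    * Average marginal effect of attribute1 at each level of the moderator,
    * using the fixed portion of the model only
    margins groupvar, dydx(attribute1) predict(mu fixedonly)
    marginsplot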

    Thank you very much for your help, and apologies for the long list of questions.

    Code:
    melogit rating2 i.attribute1 i.attribute2 i.attribute3 i.attribute4 i.attribute5 || participant_id:
    
    Mixed-effects logistic regression               Number of obs     =     13,698
    Group variable: participant_id                  Number of groups  =      2,285
    
                                                    Obs per group:
                                                                  min =          4
                                                                  avg =        6.0
                                                                  max =          6
    
    Integration method: mvaghermite                 Integration pts.  =          7
    
                                                    Wald chi2(10)     =     444.91
    Log likelihood = -8803.6294                     Prob > chi2       =     0.0000
     ( 1)  [rating2]_cons = 0
    --------------------------------------------------------------------------------
           rating2 | Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
    ---------------+----------------------------------------------------------------
        attribute1 |
                2  |    -.08611    .046013    -1.87   0.061    -.1762938    .0040738
                3  |  -.2835425   .0460664    -6.16   0.000    -.3738309   -.1932541
                   |
        attribute2 |
                2  |   .3098807   .0461955     6.71   0.000     .2193392    .4004221
                3  |    .138299   .0454766     3.04   0.002     .0491665    .2274314
                   |
      2.attribute3 |   .1452584   .0382233     3.80   0.000     .0703421    .2201747
                   |
        attribute4 |
                2  |   .3871734   .0511703     7.57   0.000     .2868815    .4874653
                3  |   .3357501   .0511001     6.57   0.000     .2355958    .4359044
                4  |   .3689993   .0518696     7.11   0.000     .2673367     .470662
                   |
        attribute5 |
                2  |    .145906   .0453345     3.22   0.001      .057052    .2347601
                3  |   .1547728   .0457817     3.38   0.001     .0650424    .2445033
                   |
             _cons |          0  (omitted)
    ---------------+----------------------------------------------------------------
    participant_id |
         var(_cons)|   .8654685   .0653078                      .7464832    1.003419
    --------------------------------------------------------------------------------
    LR test vs. logistic model: chibar2(01) = 514.59      Prob >= chibar2 = 0.0000



  • #2
    Hi,

    I'd also like to include respondent-specific control variables, such as gender, age, and marital status, as well as geographical information (town and ward; I have not included these as levels 3 and 4 because the data were collected in four wards in each of the three towns). My understanding is that such variables should be treated as level-2 predictors, since they vary between respondents, and should therefore be included after the colon:

    Code:
    melogit selected i.attribute1 i.attribute2 i.attribute3 i.attribute4 i.attribute5 || participant_id: i.town##i.ward gender log_age i.marital_status
    However, this doesn't always seem to be the case. For example, here the doctor's experience is included as a level-1 predictor even though doctor ID is the highest (second) level, while here the variable black is also added as a level-1 predictor even though it only varies at level 2.
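
    To make sure I understand the alternative, if I've read those examples correctly the respondent-level covariates would instead enter the fixed part of the model, keeping only the random intercept after the colon:

    Code:
    melogit selected i.attribute1 i.attribute2 i.attribute3 i.attribute4 i.attribute5 ///
        i.town##i.ward gender log_age i.marital_status || participant_id: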

    The reason I'm asking is that a simpler version of my model has been running for more than four hours and is still stuck at "Refining starting values"; with additional level-2 predictors it may take weeks to run.
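
    In case it matters, I have been wondering whether a quicker approximate fit would at least let me check the specification before committing to the full adaptive-quadrature run, for example:

    Code:
    * Laplace approximation as a faster first pass (a rough check only; the
    * final model would still use the default adaptive quadrature)
    melogit selected i.attribute1 i.attribute2 i.attribute3 i.attribute4 i.attribute5 ///
        || participant_id: i.ward, intmethod(laplace)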

    Code:
    melogit selected i.attribute1 i.attribute2 i.attribute3 i.attribute4 i.attribute5 || participant_id: i.ward
    Given the importance of these level-2 predictors (and even more so for the heterogeneity analysis), I would be very grateful if someone could help me implement the correct approach.


    • #3
      Cross-posted on Stack Overflow.
