Convergence issues using mixlogitwtp

    Dear Statalisters,

    I am conducting a choice experiment with five attributes: two attributes have four continuous levels each, two attributes have two levels and enter as dummies, and one attribute has three levels, which I recoded into three dummies, one of which I include in my model. I also have a price attribute with four levels. I run the mixlogitwtp command by A. R. Hole to estimate my model in WTP space, and I receive the error message: Warning: Convergence not achieved.
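
    For reference, the recoding of the three-level attribute into dummies looked roughly like this (a sketch only; attr3 is a placeholder name for that attribute, not the actual variable in my data):

    CODE:

    * create one indicator variable per level of the three-level attribute
    * (attr3_1 attr3_2 attr3_3), of which one is included in the model
    tabulate attr3, generate(attr3_)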

    I started by declaring my data to be panel data, since each individual (id) is given seven choice questions with two alternatives:

    CODE:

    cmset id question alternatives

    OUTPUT:

    Panel data: Panels id and time question
    Case ID variable: _caseid
    Alternatives variable: alternatives
    Panel by alternatives variable: _panelaltid (strongly balanced)
    Time variable: question, 1 to 7
    Delta: 1 unit

    Note: Data have been xtset.


    Then I generated negprice = (-1)*price, so that the price coefficient can be given a lognormal distribution: the lognormal constrains the coefficient to be positive, so price has to enter with its sign reversed.
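
    CODE:

    * reverse the sign of price: the coefficient on negprice is forced to be
    * positive by the lognormal, which implies the expected negative price effect
    generate negprice = (-1)*price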

    CODE:

    list id _caseid choice x1 x2 x3 x4 x5 negprice if id==9

    OUTPUT:

        id   _caseid   choice    x1   x2   x3   x4   x5   negprice
         9        57        0   260    0    1    0   12        -90
         9        57        1   330    1    0    0   28        -25
         9        58        0   130    0    0    0   12       -130
         9        58        1   130    1    1    0    7        -25
         9        59        0   260    1    1    1   18        -65
         9        59        1   330    0    0    0   12        -65
         9        60        0   260    1    0    1    7        -25
         9        60        1   260    0    0    0   28        -90
         9        61        0   560    0    1    0   28        -25
         9        61        1   560    0    0    1   28        -90
         9        62        0   130    1    0    0    7        -90
         9        62        1   130    0    1    1   12        -65
         9        63        0   330    0    0    0   28        -65
         9        63        1   330    1    1    0   18       -130

    CODE:
    CODE:

    mixlogitwtp choice, group(_caseid) price(negprice) id(id) rand(x1 x2 x3 x4 x5) nrep(1000) burn(100) technique(bhhh) iterate(1000)

    OUTPUT:

    Iteration 0: log likelihood = -669.98139
    Iteration 1: log likelihood = -666.86419 (backed up)
    Iteration 2: log likelihood = -662.24822
    Iteration 3: log likelihood = -657.09689
    Iteration 4: log likelihood = -656.78398
    Iteration 5: log likelihood = -655.64698
    Iteration 6: log likelihood = -654.85472
    Iteration 7: log likelihood = -653.50478
    Iteration 8: log likelihood = -653.3682
    Iteration 9: log likelihood = -653.30287
    Iteration 10: log likelihood = -653.28295
    Iteration 11: log likelihood = -653.24267
    Iteration 12: log likelihood = -653.22792
    Iteration 13: log likelihood = -653.21128
    Iteration 14: log likelihood = -653.19447
    Iteration 15: log likelihood = -653.18469
    Iteration 16: log likelihood = -653.17412
    Iteration 17: log likelihood = -653.17223
    Iteration 18: log likelihood = -653.15394
    Iteration 19: log likelihood = -653.15157
    Iteration 20: log likelihood = -653.11819
    Iteration 21: log likelihood = -653.11664
    Iteration 22: log likelihood = -653.09851
    Iteration 23: log likelihood = -653.0953
    Iteration 24: log likelihood = -653.05923
    Iteration 25: log likelihood = -653.04855
    Iteration 26: log likelihood = -653.04038
    Iteration 27: log likelihood = -653.03079
    Iteration 28: log likelihood = -653.02477
    Iteration 29: log likelihood = -653.01427
    Iteration 30: log likelihood = -653.00891
    Iteration 31: log likelihood = -652.99678
    Iteration 32: log likelihood = -652.99036
    Iteration 33: log likelihood = -652.97525
    Iteration 34: log likelihood = -652.96686
    Iteration 35: log likelihood = -652.96144
    Iteration 36: log likelihood = -652.96081
    Iteration 37: log likelihood = -652.64558
    Iteration 38: log likelihood = -652.59462
    Iteration 39: log likelihood = -652.55809
    Iteration 40: log likelihood = -652.54935
    Iteration 41: log likelihood = -652.53877
    Iteration 42: log likelihood = -652.53268
    Iteration 43: log likelihood = -652.52968
    Iteration 44: log likelihood = -652.52941
    Iteration 45: log likelihood = -652.52768 (backed up)
    Iteration 46: log likelihood = -652.48455
    Iteration 47: log likelihood = -652.47941
    Iteration 48: log likelihood = -652.47597
    Iteration 49: log likelihood = -652.4758 (backed up)
    Iteration 50: log likelihood = -652.47578 (backed up)
    Iteration 51: log likelihood = -652.47576 (backed up)
    Iteration 52: log likelihood = -652.47576 (backed up)
    Iteration 53: log likelihood = -652.47532
    Iteration 54: log likelihood = -652.47507 (backed up)
    Iteration 55: log likelihood = -652.47437 (backed up)
    Iteration 56: log likelihood = -652.47418 (backed up)
    Iteration 57: log likelihood = -652.47411 (backed up)
    Iteration 58: log likelihood = -652.47409 (backed up)
    Iteration 59: log likelihood = -652.47408 (backed up)
    Iteration 60: log likelihood = -652.47408 (backed up)
    Iteration 61: log likelihood = -652.47408 (backed up)
    (iterations 62 to 999 omitted)
    Iteration 1000: log likelihood = -652.47408 (backed up)
    convergence not achieved
    Mixed logit model in WTP space                    Number of obs =      2,122
                                                      Wald chi2(6)  =    2140.99
    Log likelihood = -652.47408                       Prob > chi2   =     0.0000

    ------------------------------------------------------------------------------
                 |                 OPG
          choice | Coefficient  std. err.      z    P>|z|     [95% conf. interval]
    -------------+----------------------------------------------------------------
    Mean         |
              x1 |   2.020137   .5630591     3.59   0.000     .9165613    3.123712
              x2 |  -112.2809     26.868    -4.18   0.000    -164.9413   -59.62062
              x3 |   193.0917   45.25788     4.27   0.000     104.3879    281.7955
              x4 |   229.0887   42.99712     5.33   0.000     144.8159    313.3615
              x5 |   9.790245   3.151583     3.11   0.002     3.613255    15.96723
        negprice |   -5.43188   .3157098   -17.21   0.000     -6.05066     -4.8131
    -------------+----------------------------------------------------------------
    SD           |
              x1 |   .2036296   4.910859     0.04   0.967    -9.421478    9.828737
              x2 |  -68.31679   40.64345    -1.68   0.093    -147.9765    11.34291
              x3 |   55.66972   48.15007     1.16   0.248    -38.70268    150.0421
              x4 |   175.8745    56.2576     3.13   0.002     65.61161    286.1374
              x5 |    15.9116   4.383527     3.63   0.000     7.320043    24.50315
        negprice |   1.322363   .4069266     3.25   0.001     .5248016    2.119925
    ------------------------------------------------------------------------------
    Warning: Convergence not achieved.
    The sign of the estimated standard deviations is irrelevant: interpret them as being positive.
    I was wondering whether my results can still be used even though the model did not converge, but I also wanted to ask whether anybody has a solution to this problem.
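
    A couple of variations I could try (a sketch only; this assumes mixlogitwtp passes the standard ml maximize options through, as the technique() and iterate() options above suggest, and that variables listed before the comma get fixed coefficients, as in mixlogit):

    CODE:

    * switch the maximization technique from bhhh to bfgs
    mixlogitwtp choice, group(_caseid) price(negprice) id(id) ///
        rand(x1 x2 x3 x4 x5) nrep(1000) burn(100) technique(bfgs) iterate(1000)

    * keep the coefficients whose SDs are insignificant (x1, x2, x3) fixed
    * and let only x4, x5 and negprice remain random
    mixlogitwtp choice x1 x2 x3, group(_caseid) price(negprice) id(id) ///
        rand(x4 x5) nrep(1000) burn(100)
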
    Thank you,
    Amanda