  • Not concave issue with multinomial logit regression

    Dear Statalist members,

    I hope you are well. I would like to ask about the problem of 'not concave' iterations. I have a dataset of 300 firms, and one of the firms appears to be an outlier. When I excluded the outlier from the multinomial logit regression (mlogit), leaving 299 firms, the iterations took a long time and several of them were flagged as not concave. How can I solve this problem? I should mention that when the outlier is not excluded, the regression runs fine, but I get a very small value for the marginal effects (for instance, 3.94E) for only one of the outcome categories.



    mlogit App_status i.I_sec i.AF_LEG i.AF_AGE i.AF_SIZE i.I_loct2 i.I_expt2 i.AF_GRWT i.BO_GEN i.BO_CIT i.BO_AGE i.ow_Exper2 i.BO_FINT i.BO_EDU i.CR_LEN i.CRBS1 i.CR_BS2 i.CR_BS3 i.CR_BS4 i.CR_BS5 i.CR_BS6 i.CR_BS7 i.CR_BS8 i.CR_SAT i.DE_ADS1 i.DE_ADS2 i.DE_ADS3 i.DE_ADS4 i.DE_ADS5 i.DE_ADS6 i.DE_ADS72 i.EI_BP i.EI_AUDFR

    Iteration 0: log likelihood = -376.13767
    Iteration 1: log likelihood = -240.52371
    Iteration 2: log likelihood = -204.10375
    Iteration 3: log likelihood = -190.49561
    Iteration 4: log likelihood = -182.16412
    Iteration 5: log likelihood = -174.29973
    Iteration 6: log likelihood = -169.14731
    Iteration 7: log likelihood = -167.60249
    Iteration 8: log likelihood = -167.36518
    Iteration 9: log likelihood = -167.30934
    Iteration 10: log likelihood = -167.29729
    Iteration 11: log likelihood = -167.29478
    Iteration 12: log likelihood = -167.29422
    Iteration 13: log likelihood = -167.29408
    Iteration 14: log likelihood = -167.29405
    Iteration 15: log likelihood = -167.29405 (not concave)
    Iteration 16: log likelihood = -167.29405 (not concave)
    Iteration 17: log likelihood = -167.29405 (not concave)
    Iteration 18: log likelihood = -167.29405 (not concave)
    Iteration 19: log likelihood = -167.29405 (not concave)
    Iteration 20: log likelihood = -167.29405 (not concave)
    Iteration 21: log likelihood = -167.29405 (not concave)
    Iteration 22: log likelihood = -167.29405 (not concave)
    Iteration 23: log likelihood = -167.29405 (not concave)
    Iteration 24: log likelihood = -167.29405 (not concave)
    Iteration 25: log likelihood = -167.29405 (not concave)
    Iteration 26: log likelihood = -167.29405 (not concave)
    Iteration 27: log likelihood = -167.29405 (not concave)
    Iteration 28: log likelihood = -167.29405 (not concave)
    Iteration 29: log likelihood = -167.29405 (not concave)
    Iteration 30: log likelihood = -167.29405 (not concave)
    Iteration 31: log likelihood = -167.29405 (not concave)
    Iteration 32: log likelihood = -167.29405 (not concave)
    Iteration 33: log likelihood = -167.29405 (not concave)
    Iteration 34: log likelihood = -167.29405 (not concave)
    Iteration 35: log likelihood = -167.29405 (not concave)
    Iteration 36: log likelihood = -167.29405 (not concave)
    Iteration 37: log likelihood = -167.29405 (not concave)
    Iteration 38: log likelihood = -167.29405 (not concave)
    Iteration 39: log likelihood = -167.29405 (not concave)
    Iteration 40: log likelihood = -167.29405 (not concave)
    Iteration 41: log likelihood = -167.29405 (not concave)
    Iteration 42: log likelihood = -167.29405 (not concave)
    Iteration 43: log likelihood = -167.29405 (not concave)
    Iteration 44: log likelihood = -167.29405 (not concave)
    Iteration 45: log likelihood = -167.29405 (not concave)
    Iteration 46: log likelihood = -167.29405 (not concave)
    Iteration 47: log likelihood = -167.29405 (not concave)
    Iteration 48: log likelihood = -167.29405 (not concave)
    Iteration 49: log likelihood = -167.29405 (not concave)
    Iteration 50: log likelihood = -167.29405 (not concave)
    Iteration 51: log likelihood = -167.29405 (not concave)
    Iteration 52: log likelihood = -167.29405 (not concave)
    Iteration 53: log likelihood = -167.29405 (not concave)
    Iteration 54: log likelihood = -167.29405 (not concave)
    Iteration 55: log likelihood = -167.29405 (not concave)
    Iteration 56: log likelihood = -167.29405 (not concave)
    Iteration 57: log likelihood = -167.29405 (not concave)
    Iteration 58: log likelihood = -167.29405 (not concave)

    --Break--
    r(1);




    Could you please advise on how to solve the non-concavity problem with the mlogit analysis?
    Also, when I get a very small value such as 3.94E for a marginal effect, can I interpret these results to explain the relationship between the dependent and independent variables, or should I exclude them from the discussion because they do not make sense? In other words, how should I deal with marginal effects that have very small values?


    I appreciate your kind help and cooperation.

    Best regards,
    Rabab

  • #2
    You may try adding the option -difficult- for that matter.
    Best regards,

    Marcos



    • #3
      Originally posted by Marcos Almeida View Post
      You may try adding the option -difficult- for that matter.
      Dear Marcos,

      Many thanks for your reply. I added the -difficult- option, but I still have the problem, as shown below:

      mlogit App_status i.I_secr i.AF_LEG i.AF_AGE i.AF_SIZE i.I_loct2 i.I_expt2 i.AF_GRWT i.BO_GEN i.BO_CIT i.BO_AGE i.ow_Exper2 i.BO_FINT i.BO_EDU i.CR_LEN i.CR_BS1 i.CR_BS2 i.CR_BS3 i.CR_BS4 i.CR_BS5 i.CR_BS6 i.CR_BS7 i.CR_BS8 i.CR_SAT i.DE_ADS1 i.DE_ADS2 i.DE_ADS3 i.DE_ADS4 i.DE_ADS5 i.DE_ADS6 i.DE_ADS72 i.EI_BP i.EI_AUDFR, difficult

      Iteration 0: log likelihood = -240.93338
      Iteration 1: log likelihood = -151.62102
      Iteration 2: log likelihood = -120.08411
      Iteration 3: log likelihood = -109.30193
      Iteration 4: log likelihood = -100.87127
      Iteration 5: log likelihood = -93.535301
      Iteration 6: log likelihood = -88.861266
      Iteration 7: log likelihood = -87.941263
      Iteration 8: log likelihood = -87.795143
      Iteration 9: log likelihood = -87.760216
      Iteration 10: log likelihood = -87.751991
      Iteration 11: log likelihood = -87.75031
      Iteration 12: log likelihood = -87.750035
      Iteration 13: log likelihood = -87.749969
      Iteration 14: log likelihood = -87.749955 (not concave)
      Iteration 15: log likelihood = -87.749954 (not concave)
      Iteration 16: log likelihood = -87.749954 (not concave)
      Iteration 17: log likelihood = -87.749953 (not concave)
      Iteration 18: log likelihood = -87.749953 (not concave)
      Iteration 19: log likelihood = -87.749953 (not concave)
      Iteration 20: log likelihood = -87.749953 (not concave)
      Iteration 21: log likelihood = -87.749953 (not concave)
      Iteration 22: log likelihood = -87.749953 (not concave)
      Iteration 23: log likelihood = -87.749953 (not concave)
      Iteration 24: log likelihood = -87.749953 (not concave)
      Iteration 25: log likelihood = -87.749953 (not concave)
      Iteration 26: log likelihood = -87.749953 (not concave)
      Iteration 27: log likelihood = -87.749953 (not concave)
      --Break--
      r(1);

      I am wondering whether the sample size is causing this problem: the unordered dependent variable consists of three categories, with 197 firms in A, 83 in B, and 19 in C.
      Any suggestions on the non-concavity issue would be much appreciated.
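
      For reference, a minimal sketch of how the category sizes and sparse cells could be checked (the variable names are taken from the command above; nothing else about the data is assumed):

      * Sketch only: tabulate the outcome and a few covariates to spot empty or
      * near-empty cells, a common cause of "not concave" iterations.
      tabulate App_status
      foreach v of varlist I_secr AF_LEG AF_AGE AF_SIZE {
          tabulate App_status `v'
      }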

      Thank you very much

      Best regards,
      Rabab



      • #4
        Hi Rabab
        The problem is that your model has too many explanatory variables, most of them dummies from the looks of it, and a small sample. That is why the estimator is having difficulty finding a solution to the maximization problem.
        Start by estimating simpler models, and build up from there.
        Fernando
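
        For what it is worth, a minimal sketch of that build-up strategy is below (covariate names are taken from the posted command; which block of variables to start with is purely illustrative):

        * Sketch only: fit a small core model, store it, then add one block at a time
        * and test whether the block improves fit before adding the next.
        mlogit App_status i.AF_SIZE i.AF_AGE i.AF_LEG
        estimates store m1
        mlogit App_status i.AF_SIZE i.AF_AGE i.AF_LEG i.BO_GEN i.BO_AGE i.BO_EDU
        estimates store m2
        lrtest m1 m2    // likelihood-ratio test of the added block (same estimation sample assumed)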



        • #5
          Hi

          I have tried other options, namely 'nonrtolerance' and tech(dfp nr). The regression runs fine, but when I compute the marginal effects for category C, which has only 19 firms, the results appear as follows:

          Code1

          mlogit M_App_status i.I_sec i.AF_LEG i.AF_AGE i.AF_SIZE i.I_loct2 i.I_export2 i.AF_GRWT i.BO_GEN i.BO_CIT i.BO_AGE i.ow_Exp2 i.BO_FINT i.BO_EDU i.CR_LEN i.CR_BS1 i.CR_BS2 i.CR_BS3 i.CR_BS4 i.CR_BS5 i.CR_BS6 i.CR_BS7 i.CR_BS8 i.CR_SAT i.DE_ADS1 i.DE_ADS2 i.DE_ADS3 i.DE_ADS4 i.DE_ADS5 i.DE_ADS6 i.DE_ADS72 i.EI_BP i.EI_AUDFR, nonrtolerance
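
          For completeness, a minimal sketch of the tech(dfp nr) variant mentioned above (covariate list shortened here; technique(dfp nr) alternates DFP and Newton-Raphson steps, and nonrtolerance drops the scaled-gradient convergence check, so the resulting estimates deserve extra caution):

          * Sketch only: same model as Code1, with the remaining covariates omitted for brevity.
          mlogit M_App_status i.I_sec i.AF_LEG i.AF_AGE i.AF_SIZE /* ... remaining covariates ... */, ///
              technique(dfp nr) nonrtolerance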



          Code2

          . margins, dydx(*) atmeans predict(pr outcome(1))
          Warning: variance matrix is nonsymmetric or highly singular

          Conditional marginal effects Number of obs = 299
          Model VCE : OIM


                                       Delta-method
                           dy/dx       Std. Err.       z     P>|z|     [95% Conf. Interval]
          I_sec
               X1      -4.34E-94               .       .         .             .
               X3      -4.34E-94               .       .         .             .
          AF_LEG
               X        2.46E-71               .       .         .             .
               X2       1.13E-72               .       .         .             .
          AF_AGE
               X1       1.91E-95               .       .         .             .
               X2      -4.00E-112              .       .         .             .
               X3       9.12E-97               .       .         .             .
          AF_SIZE
               X1      -2.59E-50               .       .         .             .
               X2      -2.59E-50               .       .         .             .
               X3      -2.59E-50               .       .         .             .



          What do the dots in the above table mean? Could you please help?

          Best regards,
          Rabab



          • #6
            Originally posted by FernandoRios View Post
            Hi Rabab
            The problem is that your model has too many explanatory variables, most of them dummies from the looks of it, and a small sample. That is why the estimator is having difficulty finding a solution to the maximization problem.
            Start by estimating simpler models, and build up from there.
            Fernando

            Dear FernandoRios,

            Thank you for your advice. I will check my model and try to reduce unnecessary explanatory variables.

            Best regards,
            Rabab



            • #7
              All the tips and tricks won't be enough to tackle the issue when there is a core problem in the model, such as an excessive number of predictors, a small sample size, etc.
              Best regards,

              Marcos



              • #8
                Originally posted by Marcos Almeida View Post
                All the tips and tricks won't be enough to tackle the issue when there is a core problem in the model, such as an excessive number of predictors, a small sample size, etc.
                Many thanks, Marcos, for your advice.

                Kind regards
                Rabab

