
  • Maximum likelihood estimators - “backed up” message

    Hi there,

    I have been trying to run the following probit model:


    // TRANSITION GRADE 1 TO 2
    probit yvar xvars if serie ==1

    but I keep getting the “backed up” message throughout the iterations. From what I read in the Stata manual, to solve this I need to use the gradient option and tighten the convergence criterion.

    I need your help to set the gradient options.

    I tried many ways to set the gradient option, but I couldn't get it to work.

    Example:
    probit yvar xvars gradient(ltol(0) tol(1e-7)) if serie ==1

    If anyone can help me, please!

    Max


  • #2
    Max Resende The procedure suggested in the manual applies to the case where only the last iteration shows a “backed up” message. That is, the model has converged, but you need to verify that the gradient is close to zero. If it is not, then you should use a more conservative convergence criterion to see whether you can get out of that region.

    That being said, if you keep getting the “backed up” message without converging, then it is more likely a sign of a problem with your model. It could be, for example, that you have too few positive outcomes, some outliers or missing values, or badly scaled data.

    To show the gradient and tighten the convergence criteria, the syntax should be
    Code:
    probit yvar xvars if serie == 1, gradient ltol(0) tol(1e-7)
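
    Before resorting to the optimizer options, it can also be worth running a few quick data checks along those lines (a minimal sketch, using the placeholder names yvar/xvars from your post):
    Code:
    tab yvar if serie == 1                     // enough 0s and 1s in the outcome?
    misstable summarize xvars if serie == 1    // missing values among the predictors
    summarize xvars if serie == 1              // covariates on wildly different scales?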



    • #3
      Christophe,

      I did as you said, but I keep getting the “backed up” message.

      My baseline model did just fine, which is:
      Code:
      probit transitioning i.genero ib1.raca i.zonares bolsa lnpib_pc  if serie ==1
      The problem (“backed up” message) emerges when I add more variables to the model:

      Code:
      probit transitioning i.genero ib1.raca i.zonares bolsa lnpib_pc i.turno transfer distorcao faltas alunos_turma  if serie ==1
      My dataset has over 1,400,000 observations for more than 400,000 students.

      Do you have any other suggestions?

      Max



      • #4
        Make note of how many iterations have run when you hit the problem. Then re-run the same model, adding the -iterate()- option, specifying a number that will get you well into the range where you are backing up repeatedly. The model will then stop after that number of iterations and show you what the results look like so far. These are not valid results, but by looking at them you may be able to identify a specific variable or subset of variables that are causing the difficulty. You will recognize those because they will have outlandishly large standard errors, or coefficients that are outlandishly large (positive or negative). Those variables are the ones that are making the model difficult to estimate, and your best solution is to eliminate them from the model.

        Before you do that, however, you need to follow up on Christophe's excellent suggestion that you may just have too few responses in one or the other category. So, run -probit- and then run -tab transitioning if e(sample)-. Also run -summ- on each of your model variables. If there are some variables whose scales are markedly different from the rest, re-scale those so they are similar to the others, and then try again.
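
        A minimal sketch of that workflow, using the variable names from your model in #3 (the iteration count and the re-scaling line are purely illustrative):
        Code:
        * stop early and inspect the interim (not valid) estimates
        probit transitioning i.genero ib1.raca i.zonares bolsa lnpib_pc i.turno transfer distorcao faltas alunos_turma if serie == 1, iterate(20)
        * check the split of outcomes in the estimation sample
        tab transitioning if e(sample)
        * look for covariates on markedly different scales
        summarize lnpib_pc transfer distorcao faltas alunos_turma if e(sample)
        * e.g., if alunos_turma were on a much larger scale, one could re-scale it
        generate alunos_turma_10 = alunos_turma/10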



        • #5
          Clyde and Chris,

          I did what you both suggested: I re-ran the model adding -iterate()- in order to find out which variables were causing the difficulty, and I checked the scales of the variables.

          Code:
          probit transitioning i.genero ib1.raca i.zonares bolsa lnpib_pc i.turno transfer faltas alunos_turma biblio func_UE prof_idade distorcao if serie ==1, iterate(6)
          and I noticed that 2 variables (distorcao and absences (faltas)) were causing the difficulties. So I chose to drop distorcao from the model, and the iterations then went all the way.

          But I have a few questions for you both:

          1) distorcao is a binary variable; how could this affect the probit? I have other binary variables in the model that didn't cause this issue.

          2) Instead of tightening the convergence criteria (as Chris and the manual advise), wouldn't it work if I changed the default starting point of Newton-Raphson?

          Code:
          probit transitioning i.genero ib1.raca i.zonares bolsa lnpib_pc i.turno transfer faltas alunos_turma biblio func_UE prof_idade distorcao if serie ==1, solvenl_init_conv_nearzero(S, ztol).
          Stata reports: option solvenl_init_conv_nearzero() not allowed.


          Any suggestions?

          Max
          Last edited by Max Resende; 14 Aug 2017, 17:50.



          • #6
            The fact that faltas is a dichotomous variable makes it neither more nor less likely to cause convergence problems. There are a few different ways faltas can cause this. One possibility is that faltas is almost always zero or almost always 1. Try -tab faltas if e(sample)- to see.

            Another possibility is that faltas nearly perfectly predicts the outcome (in one direction or the other). Try -tab faltas transitioning if e(sample)- to see.
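
            A minimal sketch of those two checks, run right after the -probit- call so that e(sample) is defined:
            Code:
            tab faltas if e(sample)                  // almost all 0s or almost all 1s?
            tab faltas transitioning if e(sample)    // does faltas (nearly) perfectly predict the outcome?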

            Sometimes these simple -tab- commands don't reveal the problem, because it only arises in certain combinations with the other variables. It is basically impossible to try out all the possible combinations when you have so many variables. So, even if you haven't fully understood why, you'll just have to accept that the model cannot be estimated if it includes faltas and distorcao.

            As for your second question, as Christophe pointed out, fiddling around with the estimation parameters is only helpful when the estimation actually converges but leaves you with "backed up" in the final iteration. Since your estimation did not converge, these approaches will not help you; don't waste any more time on them. That said, I am not familiar with any -solvenl_init_conv_nearzero(S, ztol)- option, and I don't know where you found it or how it would work even if it were appropriate to try.
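
            For what it's worth, the documented way to supply Newton-Raphson starting values to -probit- is the -from()- option described in -help maximize-, not any solvenl option. A minimal sketch, re-using your baseline estimates as starting values (though, per the above, this is unlikely to rescue a model that does not converge):
            Code:
            * fit the baseline model and save its coefficient vector
            probit transitioning i.genero ib1.raca i.zonares bolsa lnpib_pc if serie == 1
            matrix b0 = e(b)
            * use those estimates as starting values for the larger model
            probit transitioning i.genero ib1.raca i.zonares bolsa lnpib_pc i.turno transfer faltas alunos_turma if serie == 1, from(b0, skip)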

