
  • #16
    Exactly right!



    • #17
      Exactly right!



      • #18
        I remember I said "Thank you", but for some reason, that post disappeared.



        • #19
          Originally posted by Meng Yu:
          I have read Dr. Williams' article and I think I understand what marginal effects are. They are similar to regression coefficients if the dependent variable is continuous.
          Do you mind if I confirm with you my interpretation of marginal effects coefficients in an xtlogit model? Suppose the model is
          Code:
          xtlogit health frequency##sex
          and the marginal effects code is
          Code:
          margins, dydx(frequency) at(sex=(0 1))
          Suppose the results are:
          ------------------------------------------------------------------------------
                       |            Delta-method
                       |      dy/dx      P>|z|
          -------------+----------------------------------------------------------------
          1.freq       |
                   _at |
                  male |       0.02      0.000
                female |       0.03      0.000
          -------------+----------------------------------------------------------------
          2.freq       |
                   _at |
                  male |       0.05      0.234
                female |       0.10      0.000
          ------------------------------------------------------------------------------

          My interpretation is: compared to those who do not participate, the probability that male participants with frequency 1 suffer from a health problem is 2 percentage points higher, whereas for women at this frequency it is 3 percentage points higher. For female participants with frequency 2, the probability is 10 percentage points higher, while for male participants at this frequency the effect is not statistically significant.

          Thank you.
          Am I right that the reference group is males who do not participate? Is the reference group for an interaction term always the combination of the reference groups of its component variables?



          • #20
            Yes, the reference group for an interaction is the pairing of the reference groups of the interaction's component variables.
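            For concreteness, a minimal sketch of how that plays out in the model from #19; the -re- option and the assumption that 0 is the base level of both variables are mine, not taken from the original post:
            Code:
            * With ## factor-variable notation, Stata omits the base level of each
            * variable and of their interaction (0.frequency, 0.sex, 0.frequency#0.sex),
            * so the implied reference group is the cell with both variables at their base level.
            xtlogit health i.frequency##i.sex, re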



            • #21
              Thank you. What about when I run a marginal effects command like this? Is there still a reference group?
              Code:
               
              Average marginal effects                      Number of obs   =   21,000
              Model VCE    : Bstrap *

              Expression   : Linear prediction, predict()
              dy/dx w.r.t. : 1.invest
              1._at        : sex = 0
              2._at        : sex = 1

              ------------------------------------------------------------------------------
                           |            Delta-method
                           |      dy/dx   Std. Err.      z    P>|z|     [95% Conf. Interval]
              -------------+----------------------------------------------------------------
              1.invest     |
                       _at |
                        1  |   0.127588   0.080161     1.59   0.111    -0.02952    0.284701
                        2  |   0.097702   0.074792     1.31   0.191    -0.04889    0.244291
              ------------------------------------------------------------------------------



              • #22
                I don't understand this question. You don't show the command--just some very incomplete output, so I can't tell what you're asking about. But, more profoundly, interaction terms do not have marginal effects. If you try to take the marginal effect of an interaction term, Stata will refuse and give you an error message. So I don't know what you are asking.

                Please show full code and output for whatever it is you want help interpreting.



                • #23
                  Code:
                  margins, dydx(invest) at(sex=(0 1))
                  was the command I ran after an xtlogit regression with well-being as the dependent variable. In the regression model, I interacted investing in foreign countries with gender to see whether gender moderates the relationship between investing and well-being. I used the margins command to make the results easier to interpret, and I thought that was what "taking the marginal effects of the interaction terms" meant.



                  • #24
                    So, what you are estimating here is the average marginal effect of invest, separately at sex = 0 and sex = 1. I will assume from your description in #23 that the variable invest is also dichotomous. Then the output for sex = 0 is the expected difference in the probability of your dependent variable, among those with sex = 0, between those who invest and those who do not. Similarly, the output for sex = 1 is the expected difference in the probability of your dependent variable, among those with sex = 1, between those who invest and those who do not.
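                    To make the whole sequence concrete, here is a minimal sketch of the setup being described; the variable names (wellbeing, invest), the panel identifiers, and the -re- option are assumptions pieced together from #23, not the poster's actual code:
                    Code:
                    * Assumed panel setup and model; all names are illustrative only.
                    xtset personid year
                    xtlogit wellbeing i.invest##i.sex, re

                    * Average marginal effect of invest, separately at sex = 0 and sex = 1:
                    margins, dydx(invest) at(sex=(0 1))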



                    • #25
                      So you can't compare males and females using this command, since they don't have the same reference group.



                      • #26
                        So you can't compare males and females using this command, since they don't have the same reference group.
                        I don't understand. The "reference group" for the marginal effect of invest is the invest = 0 group for both males and females. If you want to contrast the marginal effects of invest in males vs. females you can do:
                        Code:
                        margins sex, dydx(invest) pwcompare



                        • #27
                          Does this command achieve what this post on Aug 27 achieved? https://www.statalist.org/forums/for...interpretation

                          No diagnosis and no nature: reference group
                          Diagnosis and no nature: Odds ratio = 1.56 compared to reference
                          Nature and no diagnosis: Odds ratio = 0.63 compared to reference
                          Nature and diagnosis: Odds ratio = 1.56*0.63*1.30 (= 1.28)



                          • #28
                            No. The material you quote in #27 is about odds ratios, calculated directly from the logistic regression coefficients without use of -margins-, whereas the -margins- results are (differences in) probabilities. The command in #26 will give you a comparison of the marginal effect of invest on probability of outcome among males with the marginal effect of invest on probability of outcome among females. It will not give you odds ratios in any groups.
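                            If it helps to see the two metrics side by side, a minimal sketch using the same assumed variable names as above; the -or- and -predict(pu0)- options are my additions for illustration, not the poster's code:
                            Code:
                            * Coefficients reported as odds ratios, straight from the model:
                            xtlogit wellbeing i.invest##i.sex, re or

                            * Marginal effects of invest on the probability of the outcome
                            * (conditional on u = 0), contrasted between males and females:
                            margins sex, dydx(invest) predict(pu0) pwcompare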



                            • #29
                              But technically I can calculate the probabilities from those odds ratios, right? In Stata you always have to run the margins command right after the regression model. In my case, I conduct research using confidential data with bootstrap weights in a data centre, and it takes quite a few hours for each regression to finish running. If converting the odds ratios to probabilities is possible, I would like to give it a try.



                              • #30
                                Actually, it's not that simple. In a linear model, it would be. But because of the non-linearity of the logistic model, the probabilities corresponding to particular values of the coefficients (or odds ratios) depend on the detailed distribution of the predictor variables. So if it isn't feasible to rerun the original regression and you have only the output table to work with, then you are pretty much limited to calculating the odds ratios. The only way I can suggest for getting, by hand calculation, to the probability metric is to pick a specific "representative" value for the variable health and calculate probabilities conditional on that.

                                So, for example, if you pick some value of health, let's call it h0, you can calculate xb conditional on health = h0, sex = 0, frequency = 0 as the constant term plus the coefficient of health * h0. Then you can get the probability conditional on those same values as invlogit(xb). Similarly, say, for health = h0, sex = 1 and frequency = 1, xb is the constant term + the coefficient of health * h0 plus the coefficient of sex + the coefficient of 1.frequency + the coefficient of 1.sex#1.frequency, and again, the corresponding probability is the invlogit() of that.

                                This will give you those probabilities. I do not know how to calculate the standard errors for them, however, so I don't know how to get you the corresponding confidence intervals.

                                Added: Since this is panel data with -xtlogit-, the xb's and probabilities referred to above are also conditional on u = 0.
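                                To make the hand calculation above concrete, a minimal sketch in Stata; the coefficient values and the choice of h0 are made-up placeholders standing in for numbers you would read off the regression output table:
                                Code:
                                * Illustrative (made-up) coefficients from the output table.
                                local b_cons   = -1.20    // constant term
                                local b_health =  0.05    // coefficient of health
                                local b_sex    =  0.30    // coefficient of 1.sex
                                local b_freq1  =  0.45    // coefficient of 1.frequency
                                local b_inter  = -0.15    // coefficient of 1.sex#1.frequency
                                local h0       =  3       // chosen "representative" value of health

                                * Probability at health = h0, sex = 0, frequency = 0 (and u = 0):
                                display invlogit(`b_cons' + `b_health'*`h0')

                                * Probability at health = h0, sex = 1, frequency = 1 (and u = 0):
                                display invlogit(`b_cons' + `b_health'*`h0' + `b_sex' + `b_freq1' + `b_inter')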
