
  • #31
    Originally posted by Clyde Schechter:
    I don't understand this question. You don't show the command--just some very incomplete output, so I can't tell what you're asking about. But, more profoundly, interaction terms do not have marginal effects. If you try to take the marginal effect of an interaction term, Stata will refuse and give you an error message. So I don't know what you are asking.

    Please show full code and output for whatever it is you want help interpreting.
    I read Dr. Williams' slides again and gained a new understanding of what a marginal effect is. I used to think you had to include the interaction term in the regression in order to obtain the marginal effect afterwards. But now I think you do not need to do that.
    Code:
    margins, dydx(invest) at(sex=(0 1))
    will generate the same results. Am I right?



    • #32
      I have tested it myself, and you do have to run the regression first before you can use the -margins- command in Stata.

      I have been thinking about the difference between interacting all the independent variables with the wave variable (2 waves) in a logistic panel regression and interacting only one independent variable with the wave variable. My understanding is that the former is the same as running two separate regressions, one on each subgroup, and then conducting a t-test to compare the coefficients from the two subgroups. But if you are interested in comparing the coefficients (probabilities) of one variable estimated within each subgroup, is it necessary to conduct what I call the all-item-interacted regression? And is there a name for this kind of interaction?

      My understanding is that when you interact this one independent variable with the wave variable, all the other variables are "unaffected," but when you interact all the independent variables with the wave variable, they are all "affected." Technically, though, I don't know what the difference is. Can anyone explain it?

      Thank you.



      • #33
        When you interact the wave variable with all of the model variables, it is equivalent to running two separate regressions and then using something like -suest- (which doesn't support -xtlogit- but does support most regression models) to do your comparisons.

        When you interact it with only your one predictor of interest, it is like running two separate regressions with one very big difference: you are constraining the coefficients of all the other variables to be the same in both models. If it is scientifically appropriate to presume that the coefficients of all the other variables would be the same (or close enough for practical purposes) in both models, then using only this one interaction is simpler. But if you think that the other coefficients might differ appreciably between the models were they run separately, then you need to interact the wave variable with them as well.
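
        For concreteness, here is a minimal sketch with hypothetical variables (y, wave, x1-x3; -logit- stands in for whatever estimation command is actually in use):
        Code:
        * all predictors interacted with wave: every coefficient is free to differ by wave
        logit y i.wave##(c.x1 c.x2 c.x3)
        * only x1 interacted with wave: the other coefficients are constrained equal across waves
        logit y i.wave##c.x1 c.x2 c.x3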



        • #34
          you are constraining the coefficients of all the other variables to be the same in both models
          If we interact gender with the independent variable, then can we say we are creating the female and male counterparts?

          I wonder, if the trend disappears or reverses when I interact gender with the independent variable, whether this is a case of Simpson's paradox. If it is not, then how do I interpret it?
          Last edited by Meng Yu; 21 Aug 2020, 02:30.



          • #35
            I don't know what you mean by "creating the female and male counterparts." When you interact gender with the independent variable, you are using a model in which the marginal effect of that independent variable on the outcome can be different for males than it is for females.

            It is indeed possible that those separate marginal effects will differ in any imaginable way from the undifferentiated marginal effect estimated in a model without the interaction term. And yes, this would be an instance of Simpson's paradox (also sometimes called Lord's paradox when it arises in a regression analysis).
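
            As an illustration, here is a minimal sketch with hypothetical variables y, sex, and invest (-regress- is just a stand-in for whatever estimation command is in use):
            Code:
            * model in which the marginal effect of invest may differ by sex
            regress y i.sex##c.invest
            * marginal effect of invest estimated separately at each value of sex
            margins, dydx(invest) at(sex=(0 1))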



            • #36
              Thank you.
              I guess I am trying to say that constraining all the other characteristics to be the same is like creating "ideal" comparison groups which do not exist in real data.



              • #37
                I guess I am trying to say that constraining all the other characteristics to be the same is like creating "ideal" comparison groups which do not exist in real data.
                That is, in a sense, true.

                On the other hand, when you add an interaction between the effect modifier and everything else, you double the number of predictors in the model--so unless you are starting with a very large data set, you can end up under-powered.

                And why stop with two-way interactions? Maybe the effect modification itself varies according to the value of a third variable. So then we need a three-way interaction. But then why assume that that same third variable doesn't also modify the effects of all the other variables too? If you carry this to its logical conclusion, an n-variable regression expanded to include all possible interactions becomes a 2^n-variable regression.

                The point is, for practical purposes, at some point you have to assume that some things don't modify the effects of certain other things. If not, your models blow up and it becomes impossible to get enough data to meaningfully estimate all the interactions. So you have to use your best understanding of the actual real-world dynamics of the situation you are modeling to decide that, at least for practical purposes, if not in ultimate reality, most interactions are negligibly small. Without that idealized assumption, you can never even really get started on anything but a toy problem.
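
                In Stata's factor-variable notation the growth is easy to see; a sketch with hypothetical variables y, M, W, X, U, and V:
                Code:
                * two-way: M modifies the effect of X only
                logit y i.M##c.X c.U c.V
                * three-way: W modifies the effect modification itself, and the number of terms multiplies
                logit y i.M##i.W##c.X c.U c.V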



                • #38
                  Thank you. If I interact the wave variable with all of the predictor variables to determine whether timing makes a difference to the relationship between the independent variable and the outcome variable, is that called a sensitivity analysis? I still don't quite get the definition of sensitivity analysis.



                  • #39
                    A sensitivity analysis is just any analysis that uses one or more different assumptions from the original analysis. So if you have a model like -regression_command Y i.M##X U V W- as your base case, and you then run -regression_command Y i.M##(X U V W)-, that is one particular sensitivity analysis, because it changes the first model's implicit assumption that the effects of U, V, and W on Y do not depend on the value of M.

                    Other common forms of sensitivity analysis include things like:

                    a) Replacing a variable X1 with variable X2 which is an alternative measure of the same construct.
                    b) Adding or removing a variable from the model.
                    c) Adding or removing a constraint on the value of one or more coefficients.
                    d) Changing the time at which an intervention takes effect.

                    These are just the most frequent ones, in my experience.
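
                    For instance, item (c) might look like the following sketch, with hypothetical variables y and x1-x3:
                    Code:
                    * sensitivity analysis: re-estimate with the coefficients of x1 and x2 constrained to be equal
                    constraint define 1 x1 = x2
                    cnsreg y x1 x2 x3, constraints(1)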



                    • #40
                      Thank you. If I add an interaction term to the regression which tests only main effects of the predictor variables, is that called a sensitivity analysis?



                      • #41
                        I don't understand what you mean by "an interaction term which tests only main effects of the predictor variables." But adding an interaction term that isn't in your base model is a sensitivity analysis, yes.



                        • #42
                          Originally posted by Clyde Schechter:
                          I don't understand. The "reference group" for the marginal effect of invest is the invest = 0 group for both males and females. If you want to contrast the marginal effects of invest in males vs. females you can do:
                          Code:
                          margins sex, dydx(invest) pwcompare
                          I understand that when using -margins, dydx(invest) at(sex=(0 1))-, I obtain the comparisons within each sex. But if the effect (coefficient) of invest for women is bigger than that of men, can I still make a statement like "the effect of invest is stronger for women than men"?



                          • #43
                            This thread has gotten very long, and it is hard to remember the underlying model, or to chase it back down through the thread. Assuming that your model is adequate, I suppose there is no harm in using "stronger" as a synonym for "bigger." Still, I think it is best to stick to mathematical terminology and just say "The marginal effect of invest on (whatever your dependent variable is) for women, adjusted for (whatever your model adjusts for), is greater than that for men."



                            • #44
                              Thank you for your reply. Is your interpretation for the following command?
                              Code:
                              margins, dydx(invest) at(sex=(0 1))
                              My question was whether it is OK to interpret it the way you did, because by using this command I am comparing within each sex category, rather than between men and women.



                              • #45
                                The output of -margins, dydx(invest) at(sex = (0 1))- will be a table with two rows. One gives the marginal effect of invest on the outcome, adjusted for the other model variables, for sex = 0, and the other for sex = 1. It is not a contrast of the marginal effects of invest for males vs. females. If you want that contrast, you can add the -pwcompare- option to the command and you will get it. (Or, if this was a linear model in the first place and you had a sex#invest interaction term, you can just look at the regression output itself, focusing on the row showing the sex#invest interaction coefficient.)
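
                                Putting this together, assuming the model includes an i.sex##c.invest term, the two commands look like this:
                                Code:
                                * two rows: the marginal effect of invest at each value of sex
                                margins, dydx(invest) at(sex=(0 1))
                                * the same, plus the pairwise contrast of the two marginal effects
                                margins, dydx(invest) at(sex=(0 1)) pwcompare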

