  • #16
    I have one more question regarding my regressions. Today I ran the following:

    . regress Dependent centered_ind2 c.centered_ind2##c.centered_ind2 c.centered_ind2##c.centered_ind2##c.centered_ind2 centered_mod c.centered_ind2##c.centered_mod (c.centered_ind2##c.centered_ind2)##c.centered_mod (c.centered_ind2##c.centered_ind2##c.centered_ind2)##c.centered_mod Control2 Control1 Dummy2 Dummy3 Dummy4 Dummy5 Dummy6 Dummy7 Dummy8 Dummy9 Dummy10 Dummy11

    followed by this margins command:

    . margins, at(centered_ind2 = (0(0.05)1) centered_mod = (0(0.2)1))

    and:

    . marginsplot, noci

    This gave me the following output:

    Still, the results were not significant.
    Although the centered moderating variable has a negative coefficient, the line goes up when it increases.
    How can I interpret this? I expected cultural diversity to have a negative moderating effect on the relationship between international diversification and firm performance, testing for an S-shaped relationship.
    Attached Files



    • #17
      In this thread you have been asked, several times, by me and others, to read and follow the advice in the FAQ, especially #12. Yet here we are, with an unreadable screenshot of your regression results.

      Since I can't read your regression output, I am responding here based on your description and the -marginsplot- graph. Consequently, this may be a waste of both your time and mine.

      Although the centered moderating variable has a negative coefficient, the line goes up when it increases.
      The coefficient of the centered moderating variable does not represent the impact of the moderating variable in the model. It only represents the impact of the moderating variable at the point where centered_ind2 = 0, and even at that, it is just the additive effect of centered_mod at that point--it has nothing to do with how centered_mod changes the effect of centered_ind2.
      In this model, which includes quadratic and cubic effects of centered_ind2, that moderation is distributed across several different terms, namely the coefficients of centered_ind2#centered_mod, centered_ind2#centered_ind2#centered_mod, and centered_ind2#centered_ind2#centered_ind2#centered_mod. Consequently, the moderation effect is complicated and simply cannot be read off as the coefficient of any one term. That is why these marginsplots are so important: you interpret all of this by looking at what is going on in the graphs. Clearly here, as centered_mod increases, the Dependent:centered_ind2 relationship curves get both higher and steeper.
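      To see where the moderation lives, it helps to write the model out. Here is a sketch using generic coefficients β1–β7 (placeholders, not the actual estimates), writing x for centered_ind2 and m for centered_mod:

      ```latex
      E[y] = \beta_0 + \beta_1 x + \beta_2 x^2 + \beta_3 x^3 + \beta_4 m
           + \beta_5 x m + \beta_6 x^2 m + \beta_7 x^3 m + \text{controls}

      \frac{\partial E[y]}{\partial x}
        = (\beta_1 + \beta_5 m) + 2(\beta_2 + \beta_6 m)\,x + 3(\beta_3 + \beta_7 m)\,x^2
      ```

      Every power of x picks up its own m term, which is why no single coefficient summarizes the moderation.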

      As for statistical significance, the significance of any of these coefficients by themselves is meaningless. Assuming you are going to concern yourself with statistical significance at all, you have to jointly test the significance of all of those interaction coefficients. Similarly, if you want a statistical significance assessment of the role of centered_ind2 in the model, you have to base it on a joint test of all of the terms that mention centered_ind2: the linear, the quadratic, the cubic, and all of the interaction terms of those with centered_mod.
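      In Stata, those joint tests can be sketched with -test- (this assumes the model was fit with the factor-variable notation shown earlier in the thread, so that these coefficient names exist in the estimation results):

      ```stata
      * Joint test of all the interaction (moderation) terms:
      test c.centered_ind2#c.centered_mod ///
           c.centered_ind2#c.centered_ind2#c.centered_mod ///
           c.centered_ind2#c.centered_ind2#c.centered_ind2#c.centered_mod

      * Joint test of every term involving centered_ind2
      * (linear, quadratic, cubic, and their interactions with centered_mod):
      test centered_ind2 ///
           c.centered_ind2#c.centered_ind2 ///
           c.centered_ind2#c.centered_ind2#c.centered_ind2 ///
           c.centered_ind2#c.centered_mod ///
           c.centered_ind2#c.centered_ind2#c.centered_mod ///
           c.centered_ind2#c.centered_ind2#c.centered_ind2#c.centered_mod
      ```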

      You are dealing with a very complicated model; interpreting it requires good graphical visualization and, if you want to do significance testing, joint significance tests.



      • #18
        In this thread you have been asked, several times, by me and others, to read and follow the advice in the FAQ, especially #12. Yet here we are, with an unreadable screenshot of your regression results.

        Since I can't read your regression output, I am responding here based on your description and the -marginsplot- graph. Consequently, this may be a waste of both your time and mine.
        I really appreciate your help, Clyde, thank you very much! I thought I had done it correctly this time by saving the graph as a PNG file rather than taking a screenshot.
        For checking the joint significance, I did some reading: I need to perform a Wald test, using the -test- command.

        If I read it correctly it will look like this:
        test centered_ind2 c.centered_ind2##c.centered_ind2 c.centered_ind2##c.centered_ind2##c.centered_ind2 c.centered_ind2##c.centered_mod ..... and so on?
        That is for the cubic model, and I'll have to do the same for the linear and quadratic models.
        If the p-value then turns out to be significant, can I conclude that the model is significant and, with the help of the marginsplots, say what the effect is?

        These are the results as I have them in my thesis right now: Hypothesis 1, linear; Hypothesis 2, inverted U-shaped; Hypothesis 3, S-shaped; Hypothesis 4, a negative moderating effect of cultural diversity.
        Table 6: Regression results, Model B

        Panel A: ROA on International diversification score
                                                       Hyp. 1 (B.1)       Hyp. 2 (B.2)       Hyp. 3 (B.3)
                                                       Coef.        t     Coef.        t     Coef.        t
        International diversification score            40.5164**    2.03  96.4971      1.60  152.527      1.15
        International div. score (squared)                                -955.558    -0.98  -3226.435   -0.66
        International div. score (cubic)                                                     22727.22     0.48
        Number of employees                            -0.3321     -0.97  -0.3555     -1.04  -0.3479     -1.05
        Number of countries                             0.0052      0.12   0.0003      0.01   0.00181     0.05
        Constant                                        5.5988**    2.07   5.5463**    2.05   5.2982*     1.92
        Observations                                    550                550                550
        R-squared                                       0.0222             0.0239             0.0243

        Panel B: ROA on International diversification score + Cultural diversification score
                                                       Hyp. 4 (B.1+M)     Hyp. 4 (B.2+M)     Hyp. 4 (B.3+M)
                                                       Coef.        t     Coef.        t     Coef.        t
        International diversification score            39.2576*     1.85  54.0716*     1.93  41.1330      1.24
        International div. score (squared)                                -567.8617   -0.51  -2071.867   -0.90
        International div. score (cubic)                                                     40788.96     0.73
        Cultural diversification score                 -1.2393     -0.35  -4.6743     -0.92  -5.1162     -0.88
        Number of employees                            -0.3365     -0.98  -0.3466     -1.01  -0.3326     -0.97
        Number of countries                             0.0068      0.20   0.0015      0.04   0.0037      0.11
        Constant                                        6.3980**    2.35   6.9096**    2.45   7.1303**    2.52
        Observations                                    550                550                550
        R-squared                                       0.0224             0.0256             0.0266

        * Significant at p < 0.1. ** Significant at p < 0.05. *** Significant at p < 0.01.
        Last edited by Christiaan Rijsen; 06 Jan 2018, 13:46.



        • #19
          I thought I had done it correctly this time by saving the graph as a PNG file rather than taking a screenshot.
          The graph came through just fine. It was the regression output that was unreadable. Regression output should not be posted as an image of any kind. It should be pasted as text into the Forum editor and wrapped between code delimiters so it will be displayed in a nicely aligned, fixed-width-font way.

          If I read it correctly it will look like this:
          test centered_ind2 c.centered_ind2##c.centered_ind2 c.centered_ind2##c.centered_ind2##c.centered_ind2 c.centered_ind2##c.centered_mod ..... and so on?
          Yes, precisely.

          If the p-value then turns out to be significant, can I conclude that the model is significant and, with the help of the marginsplots, say what the effect is?
          Yes, if that is the business you are in. As you may know, I am not enthusiastic about significance testing as a way of choosing among models, but if you are using that approach, that is how it is done. And, yes, examining the marginsplots makes it possible to accurately describe what variables are doing what to which other variables: piecing it together from the regression outputs directly is a painful exercise in tedious and error-prone algebra that is best avoided by resorting to -margins- and -marginsplot-.

          One comment. It is a bit unusual to use S-shaped and cubic as synonymous, as you do here. The term S-shaped is usually used to refer to graphs of logistic functions. For one thing, to the extent that cubic relationships are S-shaped, the S-curve is "on its side" (i.e. reflected around the principal diagonal). Moreover, there are plenty of cubic polynomials whose graphs are not S-shaped at all. And even those cubic functions that are, from a global perspective, S-shaped on their sides, may not look that way at all when restricted to the observed range of the data, as appears to be the case in your graphs. Finally, to make matters even more complicated, it is entirely possible when you have a cubic model with a moderating factor, that the shape of the curve itself will change depending on the value of the moderator! So for all of these reasons, I would just refer to this as a cubic relationship.
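          A quick way to see how much the apparent shape depends on the observed range is to plot a cubic over different windows (the function here is hypothetical, chosen only for illustration):

          ```stata
          * Globally, y = x^3 - x is an S-curve "on its side"...
          twoway function y = x^3 - x, range(-2 2)

          * ...but restricted to a narrow window it looks nearly monotone:
          twoway function y = x^3 - x, range(0 0.5)
          ```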



          • #20
            Yes, if that is the business you are in. As you may know, I am not enthusiastic about significance testing as a way of choosing among models, but if you are using that approach, that is how it is done. And, yes, examining the marginsplots makes it possible to accurately describe what variables are doing what to which other variables: piecing it together from the regression outputs directly is a painful exercise in tedious and error-prone algebra that is best avoided by resorting to -margins- and -marginsplot-.
            Thank you Clyde, this really helped me out once again. I just did the tests and found out that one of them (without moderator and interaction effect) is significant, at p < 0.10.
            . test Independent2 c.Independent2#c.Independent2

             ( 1)  Independent2 = 0
             ( 2)  c.Independent2#c.Independent2 = 0

                   F(  2,   535) =    2.54
                        Prob > F =    0.0797

            One comment. It is a bit unusual to use S-shaped and cubic as synonymous, as you do here. The term S-shaped is usually used to refer to graphs of logistic functions. For one thing, to the extent that cubic relationships are S-shaped, the S-curve is "on its side" (i.e. reflected around the principal diagonal). Moreover, there are plenty of cubic polynomials whose graphs are not S-shaped at all.
            That is true; I can see in the marginsplots that in one of my models the curve does not even look S-shaped or inverted U-shaped. But for my hypothesis I wrote it that way because I expect the effect of international diversification on firm performance to unfold in three phases: Phase 1, negative; Phase 2, positive; Phase 3, negative, forming an S.

            I hope that with all this helpful information I will now be able to finish my thesis.
            Thank you very much Clyde.



            • #21
              Just to be sure: while the individual coefficients in my regression were not significant, the Wald test showed joint significance. Can I say that my findings support this hypothesis? Or should I at least say the relationship is jointly significant, since the predicted relationship (an inverted U-shape) does not show in the margins plot?
              Last edited by Christiaan Rijsen; 07 Jan 2018, 10:48.



              • #22
                Just to be sure: while the individual coefficients in my regression were not significant, the Wald test showed joint significance. Can I say that my findings support this hypothesis?
                The samples that generate linear and quadratic components this large are uncommon (i.e., less than 10%, indeed less than 8%, of all possible samples) if the data for the entire population are correctly described by a model in which the linear and quadratic coefficients are both zero. That is the literal meaning of the Wald test results. Within the paradigm of null hypothesis significance testing, this is normally rephrased as saying that the data are consistent with a quadratic model. Within that paradigm you can never really say that the data support a particular non-null model: you can only reject the null model. And it may well be that there are other models you could apply to the data for which you would also reject a null hypothesis of zero parameters. Obtaining statistically significant findings for one model never excludes the possibility of other models, apart from the null model, so significance testing never really permits you to say that the data support your particular model. As far as testing for a quadratic model is concerned, this is as good as it gets in null hypothesis significance testing.

                If you are worried that the absence of separate statistical significance for the linear and quadratic coefficients is a problem and undercuts your conclusions, it is not, and it doesn't. The separate statistical significance of the separate linear and quadratic coefficients is not meaningful in a quadratic model.



                • #23
                  If you are worried that the absence of separate statistical significance for the linear and quadratic coefficients is a problem and undercuts your conclusions, it is not, and it doesn't. The separate statistical significance of the separate linear and quadratic coefficients is not meaningful in a quadratic model.
                  Once again, thank you very much Clyde! Your help is really appreciated.

