  • margins compare effects

    Hi all! I am interested in comparing whether two marginal effects are statistically different. For this I estimate the model below, interacting x1 (a dummy variable) with the quadratic effect of z2, and then ask for the margins at different levels of the variable (see also below), which I plot with -marginsplot-. However, I would like to know whether the dots in the graph are statistically different, but I do not know how to do it. For instance, are the blue and red points different when z2 = 2? And when z2 = 4? So it comes down to comparing the differences between points 1 and 10, 2 and 11, 3 and 12, and so on, from the -margins- output.
    Any hint on how to compare just these two points at each value on the horizontal axis?

    Code:
    . xi: reghdfe y L.c.x0 x1##(L.c.z1 L.c.z2##L.c.z2 L.c.z3##L.c.z3 L.c.z4##L.c.z4 L.c.z5 L.c.z6) L.z7 
    > L.z8, absorb(year sic , resid) cluster(id) 
    (MWFE estimator converged in 4 iterations)
    
    HDFE Linear regression                            Number of obs   =     37,658
    Absorbing 2 HDFE groups                           F(  22,   3919) =       2.72
    Statistics robust to heteroskedasticity           Prob > F        =     0.0000
                                                      R-squared       =     0.0494
                                                      Adj R-squared   =     0.0475
                                                      Within R-sq.    =     0.0028
    Number of clusters (id)      =      3,920         Root MSE        =    23.1123
    
                                       (Std. Err. adjusted for 3,920 clusters in id)
    --------------------------------------------------------------------------------
                   |               Robust
                 y |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
    ---------------+----------------------------------------------------------------
                x0 |
               L1. |   2.49e-06   2.37e-06     1.05   0.294    -2.16e-06    7.13e-06
                   |
              1.x1 |   1.883213   1.235881     1.52   0.128    -.5398174    4.306244
                   |
                z1 |
               L1. |  -.2964713   12.13272    -0.02   0.981    -24.08351    23.49057
                   |
                z2 |
               L1. |  -4.026786   1.893019    -2.13   0.033    -7.738181   -.3153913
                   |
       cL.z2#cL.z2 |   .4009453     .19074     2.10   0.036     .0269863    .7749043
                   |
                z3 |
               L1. |   2.747229   1.249137     2.20   0.028     .2982093     5.19625
                   |
       cL.z3#cL.z3 |  -.1124632   .0625297    -1.80   0.072    -.2350571    .0101307
                   |
                z4 |
               L1. |   .0726518   .0541094     1.34   0.179    -.0334334    .1787369
                   |
       cL.z4#cL.z4 |  -.0003602   .0003384    -1.06   0.287    -.0010237    .0003034
                   |
                z5 |
               L1. |  -3.043936   1.390834    -2.19   0.029    -5.770762    -.317109
                   |
                z6 |
               L1. |  -.1185258   1.488848    -0.08   0.937    -3.037516    2.800464
                   |
          x1#cL.z1 |
                1  |   11.87853   31.12594     0.38   0.703    -49.14603    72.90309
                   |
          x1#cL.z2 |
                1  |   .0286455   2.927208     0.01   0.992    -5.710349     5.76764
                   |
    x1#cL.z2#cL.z2 |
                1  |   .3472693   .3311889     1.05   0.294    -.3020496    .9965882
                   |
          x1#cL.z3 |
                1  |   .1076445   1.333236     0.08   0.936    -2.506257    2.721546
                   |
    x1#cL.z3#cL.z3 |
                1  |  -.0340614   .0781688    -0.44   0.663    -.1873169     .119194
                   |
          x1#cL.z4 |
                1  |   .0377201   .0763688     0.49   0.621    -.1120064    .1874465
                   |
    x1#cL.z4#cL.z4 |
                1  |  -.0002813   .0006353    -0.44   0.658    -.0015268    .0009643
                   |
          x1#cL.z5 |
                1  |   10.37873   5.157279     2.01   0.044     .2675252    20.48993
                   |
          x1#cL.z6 |
                1  |   -8.44048   5.524799    -1.53   0.127    -19.27223    2.391272
                   |
                z7 |
               L1. |   5.277954   6.287417     0.84   0.401    -7.048964    17.60487
                   |
                z8 |
               L1. |   -7.52538    5.73605    -1.31   0.190     -18.7713    3.720544
                   |
             _cons |   9.368137   1.864782     5.02   0.000     5.712101    13.02417
    --------------------------------------------------------------------------------
    
    Absorbed degrees of freedom:
    -----------------------------------------------------+
     Absorbed FE | Categories  - Redundant  = Num. Coefs |
    -------------+---------------------------------------|
            year |        11           0          11     |
             sic |        43           1          42     |
    -----------------------------------------------------+
    
    . 
    . margins,  dydx(L.z2) at(L.z2=(0(2)16) x1=(0 1)) noestimcheck post vsquish 
    
    Average marginal effects                        Number of obs     =     37,658
    Model VCE    : Robust
    
    Expression   : Linear prediction, predict()
    dy/dx w.r.t. : L.z2
    1._at        : x1              =           0
                   L.z2            =           0
    2._at        : x1              =           0
                   L.z2            =           2
    3._at        : x1              =           0
                   L.z2            =           4
    4._at        : x1              =           0
                   L.z2            =           6
    5._at        : x1              =           0
                   L.z2            =           8
    6._at        : x1              =           0
                   L.z2            =          10
    7._at        : x1              =           0
                   L.z2            =          12
    8._at        : x1              =           0
                   L.z2            =          14
    9._at        : x1              =           0
                   L.z2            =          16
    10._at       : x1              =           1
                   L.z2            =           0
    11._at       : x1              =           1
                   L.z2            =           2
    12._at       : x1              =           1
                   L.z2            =           4
    13._at       : x1              =           1
                   L.z2            =           6
    14._at       : x1              =           1
                   L.z2            =           8
    15._at       : x1              =           1
                   L.z2            =          10
    16._at       : x1              =           1
                   L.z2            =          12
    17._at       : x1              =           1
                   L.z2            =          14
    18._at       : x1              =           1
                   L.z2            =          16
    
    ------------------------------------------------------------------------------
                 |            Delta-method
                 |      dy/dx   Std. Err.      z    P>|z|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
    L.z2         |
             _at |
              1  |  -4.026786   1.893019    -2.13   0.033    -7.737034   -.3165376
              2  |  -2.423005    1.14849    -2.11   0.035    -4.674003   -.1720065
              3  |  -.8192236   .4677103    -1.75   0.080    -1.735919    .0974718
              4  |   .7845576   .5316891     1.48   0.140    -.2575339    1.826649
              5  |   2.388339   1.229166     1.94   0.052    -.0207827     4.79746
              6  |    3.99212   1.975657     2.02   0.043     .1199029    7.864337
              7  |   5.595901   2.731265     2.05   0.040     .2427198    10.94908
              8  |   7.199682   3.490074     2.06   0.039      .359264     14.0401
              9  |   8.803464   4.250369     2.07   0.038     .4728943    17.13403
             10  |   -3.99814    3.19066    -1.25   0.210    -10.25172    2.255438
             11  |  -1.005282   2.034104    -0.49   0.621    -4.992052    2.981488
             12  |   1.987577   1.478142     1.34   0.179    -.9095277    4.884681
             13  |   4.980435    2.07901     2.40   0.017     .9056501     9.05522
             14  |   7.973294   3.248033     2.45   0.014     1.607265    14.33932
             15  |   10.96615   4.568065     2.40   0.016      2.01291    19.91939
             16  |   13.95901   5.939256     2.35   0.019     2.318282    25.59974
             17  |   16.95187   7.332965     2.31   0.021     2.579522    31.32422
             18  |   19.94473   8.738423     2.28   0.022     2.817733    37.07172
    ------------------------------------------------------------------------------
    
    . marginsplot,
    [Attachment: Graph.png — marginsplot of dy/dx of L.z2 at x1 = 0 and x1 = 1]


  • #2
    I think you can get what you are looking for with:
    Code:
    forvalues lz2 = 0(2)16 {
        margins x1, dydx(L.z2) at(L.z2 = `lz2') noestimcheck vsquish pwcompare
    }
    By the way: get rid of the -xi:- prefix on the -reghdfe- command. It isn't actually harming anything here, because you have no explicit i. prefixes in your variable list, but if you did, -xi- would handle them in a way that would give you wrong results with -margins-. In fact, you should think of -xi:- as effectively obsolete. Although there are still a few Stata commands that do not support factor-variable notation, most of them are old and their functions can be accessed through newer commands that do support it. The rest are rather exotic and rarely used anyway. So tuck -xi- away in some dusty corner of your mind, and use it only if you get a message that factor variables are not supported when you try to run something.
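    As an aside, a single-command alternative (a sketch, not run on your data) is to put the contrast operator r. on x1 in the marginlist, which asks -margins- for the x1 = 1 vs. x1 = 0 difference in dy/dx, with its test, at each at() level:
    Code:
    * sketch: r.x1 requests the x1 = 1 vs x1 = 0 contrast of the marginal
    * effect of L.z2 at each value in at(), after the model from #1
    margins r.x1, dydx(L.z2) at(L.z2 = (0(2)16)) noestimcheck vsquish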



    • #3
      Dear Clyde, thanks a lot for the code; it worked perfectly. And I take note of the advice about -xi-.



      • #4
        Dear Clyde Schechter, let me borrow your knowledge again and ask how I can compare the curve of z2 when x1 = 0 with the curve when x1 = 1 (x1 being a dummy variable). I mean, even though in this model the curve of z2 when x1 = 1 is not significant (see the table in #1), in other specifications it is, and I am interested in knowing whether the two curves are statistically different in such a case.
        Could it be something like this?

        Code:
        test (_b[L.z2] + _b[cL.z2#cL.z2] = _b[1.x1#cL.z2] + _b[1.x1#cL.z2#cL.z2])



        • #5
          No, that's not right. What you want is a joint test of the x1#L.z2 and x1#L.z2#L.z2 terms in the regression:

          Code:
          test x1#L.z2 x1#L.z2#L.z2
          I want to emphasize that this test is applied after the regression, not after -margins-. (Alternatively, remove the -post- option from -margins-, and then you can still run tests on the regression coefficients.)
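          A minimal sketch of that alternative, re-using the -margins- call from #1 but without -post-, so the -reghdfe- results remain the active estimates:
          Code:
          * sketch: without -post-, the regression results stay in e(b),
          * so coefficient tests still work after -margins-
          margins, dydx(L.z2) at(L.z2=(0(2)16) x1=(0 1)) noestimcheck vsquish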



          • #6
            Dear Clyde Schechter, thanks for your answer, but I think I do not understand it very well. As I see it, the code you suggest tests whether the curve when x1 = 1 is significant (it is very likely that I am wrong, but I do not see why). I do not see how it would test whether the curve when x1 = 0 differs from the curve when x1 = 1.
            For instance, to keep it simple, I re-estimated a simpler model (see below), but I think I am testing whether the curve is significant when x1 = 1, not whether it differs from the one when x1 = 0. In fact, in this case I would say there is no quadratic effect when x1 = 1, unlike when x1 = 0. But if both curves were significant, what would be the test that they are different?
            Code:
            xtreg y x0 x1##(L.c.z2##L.c.z2), fe vce(cluster sic) 
            
            Fixed-effects (within) regression               Number of obs      =     38155
            Group variable: ident                           Number of groups   =      3672
            
            R-sq:  within  = 0.0005                         Obs per group: min =         1
                   between = 0.0043                                        avg =      10.4
                   overall = 0.0013                                        max =        12
            
                                                            F(4,37)            =         .
            corr(u_i, Xb)  = 0.0236                         Prob > F           =         .
            
                                                 (Std. Err. adjusted for 38 clusters in sic)
            --------------------------------------------------------------------------------
                           |               Robust
                         y |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
            ---------------+----------------------------------------------------------------
                        x0 |   5.61e-06   3.33e-07    16.85   0.000     4.94e-06    6.29e-06
                      1.x1 |   .2643997   .7614395     0.35   0.730    -1.278423    1.807223
                           |
                        z2 |
                       L1. |   1.92e-07   8.99e-08     2.13   0.040     9.60e-09    3.74e-07
                           |
               cL.z2#cL.z2 |  -7.27e-16   3.63e-16    -2.01   0.052    -1.46e-15    7.42e-18
                           |
                  x1#cL.z2 |
                        1  |   1.51e-08   1.77e-07     0.09   0.933    -3.44e-07    3.74e-07
                           |
            x1#cL.z2#cL.z2 |
                        1  |  -3.65e-15   1.78e-15    -2.05   0.048    -7.25e-15   -4.14e-17
                           |
                     _cons |    11.3953   .0943393   120.79   0.000     11.20415    11.58645
            ---------------+----------------------------------------------------------------
                   sigma_u |  15.505644
                   sigma_e |  19.250228
                       rho |  .39349665   (fraction of variance due to u_i)
            --------------------------------------------------------------------------------
            
            . 
            . 
            . 
            end of do-file
            
            . test (_b[1.x1#cL.z2] = _b[1.x1#cL.z2#cL.z2])
            
             ( 1)  1.x1#cL.z2 - 1.x1#cL.z2#cL.z2 = 0
            
                   F(  1,    37) =    0.01
                        Prob > F =    0.9326



            • #7
              The test I proposed is not a significance test of the results when x1 = 1. Because it involves only the interaction terms, it is a test of whether the joint marginal effect of L.z2 and its square differs according to the value of x1. That is what interaction terms do. An interaction x1#whatever does not tell you about the effects of x1; it tells you the extent to which the effects of whatever differ according to the value of x1. And in this case, that is the question you are trying to answer.

              As for your proposed
              Code:
              test (_b[1.x1#cL.z2] = _b[1.x1#cL.z2#cL.z2])
              you are testing whether the difference in "the effect of the linear term for L.z2" between x1 = 1 and x1 = 0 is equal to the difference in "the effect of the quadratic term for L.z2" between x1 = 1 and x1 = 0. I purposely put those "effects" in scare quotes because neither of them is an actual thing. You are testing the equality of two quantities that are alike only in the sense that both are completely meaningless in their own right and take on meaning only jointly.

              Let me suggest that you refresh your understanding of interaction models by reading Richard Williams's excellent notes at https://www3.nd.edu/~rwilliam/stats2/l53.pdf. Your model is more complicated than any considered there, because you have quadratic terms, but the same basic principles of interpretation apply, supplemented by the principle that no linear or quadratic term is ever meaningful by itself: only expressions or tests that use them jointly are valid.
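              To make the "jointly" point concrete, a small sketch: in the model from #6, the x1 = 1 vs. x1 = 0 difference in the marginal effect of L.z2 at a given value z is _b[1.x1#cL.z2] + 2*z*_b[1.x1#cL.z2#cL.z2]. So, for example, to test that difference at L.z2 = 4:
              Code:
              * sketch: x1 = 1 vs x1 = 0 difference in dy/dx of L.z2 at L.z2 = 4,
              * i.e. _b[1.x1#cL.z2] + 2*4*_b[1.x1#cL.z2#cL.z2]
              lincom _b[1.x1#cL.z2] + 8*_b[1.x1#cL.z2#cL.z2]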
              Last edited by Clyde Schechter; 23 Mar 2022, 13:38.



              • #8
                Dear Clyde Schechter, thanks a lot for your help and patience. I will read the document to understand interactions properly. Just one quick thing: when I implement your suggestion after the estimation in #6, it gives an error. Any hint?
                Code:
                . test x1#L.z2 x1#L.z2#L.z2
                x1#L.z2 not found



                • #9
                  Make it:
                  Code:
                  testparm x1#cL.z2 x1#cL.z2#cL.z2



                  • #10
                    Thanks a lot, it worked. I just wanted to tell you that this test gives me the same result as the one I ran in #6, which you explained does not do what I wanted. I hope that after reading the document you suggested it will become clear to me. Thanks again.

                    Code:
                     testparm x1#cL.z2 x1#cL.z2#cL.z2
                    
                     ( 1)  1.x1#cL.z2 = 0
                     ( 2)  1.x1#cL.z2#cL.z2 = 0
                           Constraint 2 dropped
                    
                           F(  1,    37) =    0.01
                                Prob > F =    0.9326
                    
                    . 
                    
                    . 
                    . test (_b[1.x1#cL.z2] = _b[1.x1#cL.z2#cL.z2])
                    
                     ( 1)  1.x1#cL.z2 - 1.x1#cL.z2#cL.z2 = 0
                    
                           F(  1,    37) =    0.01
                                Prob > F =    0.9326



                    • #11
                      My first reaction was: something must be wrong, because those are different tests.

                      However, looking at your regression results in #6, I think this is happening because the x1#cL.z2 and x1#cL.z2#cL.z2 coefficients are both numerically very close to zero. So both of these tests boil down to testing different equations that all reduce to 0 = 0. If these coefficients had any appreciable size, you would get different results from the two commands. (And you would not see "Constraint 2 dropped" in the output from -testparm-.)

                      Looking more closely at the output in #6, I think you should rescale your z2 variable. The coefficients of everything involving z2 are microscopic, which makes them hard to interpret, and that leads to possibly incorrect results from tests like the ones shown in #10. Some of these microscopic coefficients are, nevertheless, "statistically significant" or close to it. If you scale z2 down by a factor of, say, a million or even a billion (so, if z2 is denominated in dollars, change the unit to millions or billions of dollars; if it is denominated in millimeters, change it to kilometers, etc.), you will get numbers that are more comfortable to work with, and commands that work with the coefficients will not lose precision as readily.

                      Alternatively, scale up y by such a factor. That seems less appealing, however, since you are getting "normal number" coefficients for x1 and the constant term, and scaling up y could make those coefficients gargantuan, which has problems of its own. On the other hand, the results in #1 show "normal numbers" for all of the coefficients, including z2, its square, and the interactions. While that is a different model with other variables included, so anything can change in any way, it seems odd that the coefficients differ by orders of magnitude in this way. Did the variables change between #1 and #6?
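                      For concreteness, a minimal sketch of the rescaling (the variable name z2_m is just illustrative):
                      Code:
                      * sketch: express z2 in millions of its original units and refit
                      gen double z2_m = z2/1e6
                      xtreg y x0 x1##(L.c.z2_m##L.c.z2_m), fe vce(cluster sic)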



                      • #12
                        Oh yes, you are totally right. The variable is scaled in #1 but I forgot to do it in the simplest example in #6. I understand now what you mean. Thanks again.

