
  • Chow test vs suest

    I am trying to examine the difference between the coefficients of two similar regressions, estimated in the presence and absence of a particular condition. I ran the Chow test to compare the coefficients, and it shows significant differences.

    Following is the code that I ran.
    Code:
    tobit y x controls i.year i.sic3 if comp_d==1, ll vce(cluster gvkey)
    tobit y x controls i.year i.sic3 if comp_d==0, ll vce(cluster gvkey)
    gen x_comp1 = x*comp_d
    tobit y x comp_d x_comp1 controls i.year i.sic3, ll vce(cluster gvkey)
    test _b[x_comp1]=0, notest
    test _b[comp_d]=0, accum
    Here is the output that I get:
    Code:
     ( 1)  [model]x_comp1 = 0
     ( 2)  [model]comp_d = 0
    
           F(  2,  7088) =    6.16
                Prob > F =    0.0021
    However, when I try the same test using suest, the results are totally different (insignificant).

    Here is the code for suest
    Code:
    tobit y x controls i.year i.sic3 if comp_d==1, ll
    est sto comp1
    tobit y x controls i.year i.sic3 if comp_d==0, ll
    est sto comp2
    suest comp1 comp2, cluster(gvkey)
    test ([comp1_model]_b[x] = [comp2_model]_b[x])
    Here is the output
    Code:
     ( 1)  [comp1_model]x - [comp2_model]x = 0
    
               chi2(  1) =    0.87
             Prob > chi2 =    0.3522
    I cannot understand the difference. Which one is the correct way to test the difference between the coefficients from two regressions of the same model?

  • #2
    The first test is a joint test that the coefficients are each equal to 0, while the second is a test of the difference between the coefficients. There is a difference; look at the degrees of freedom in the two tests.
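
    For example, a minimal sketch reusing the variable names from your pooled model: the coefficient on x_comp1 is the difference between the x slopes of the two groups, so testing it alone is a 1-degree-of-freedom test of that difference, whereas the joint test above has 2 degrees of freedom. Note that this specification still constrains the controls, year, and industry effects to be equal across the groups, unlike the suest comparison of two fully separate models.
    Code:
    tobit y x comp_d x_comp1 controls i.year i.sic3, ll vce(cluster gvkey)
    test _b[x_comp1] = 0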



    • #3
      Thanks a lot, Andrew Musau. That means the second test is the correct one for testing the difference in coefficients. Is there any other effective way to test the coefficient differences using the Chow test?



      • #4
        Code:
        tobit y i.comp_d##(c.x controls i.year i.sic3), ll vce(cluster gvkey)
        Then just inspect the output in the regression table for 1.comp_d#c.x.

        Note: Each of the "controls" must be prefixed with c. or i., according to whether it is a continuous or discrete variable, for this to work properly.
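
        For example, a minimal sketch assuming two hypothetical controls, size (continuous) and rated (categorical), standing in for "controls":
        Code:
        tobit y i.comp_d##(c.x c.size i.rated i.year i.sic3), ll vce(cluster gvkey)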

        Strictly speaking, neither this nor any of the things you have done is a Chow test. The Chow test, developed in 1960 by Gregory Chow, was a simple calculation that could be used to compare coefficients across two OLS linear regressions using only the sums of squares reported in the regression output. The point is that it could be done manually, working from the regression output already in hand, without having to run anything more on the computer. Hard as it is to believe today, back in 1960 people tried to minimize their use of computers because computer time was expensive* and there was all the rigmarole of punching Hollerith cards, submitting a deck of them to the system operators, and waiting for your job to make its way through the queue.
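
        For reference, the classic statistic is F = [(RSS_p - RSS_1 - RSS_2)/k] / [(RSS_1 + RSS_2)/(n1 + n2 - 2k)], where RSS_p is the residual sum of squares from the pooled regression, RSS_1 and RSS_2 come from the two group regressions, and k is the number of parameters in each regression (including the constant). A minimal sketch of that hand calculation, using the auto data purely for illustration:
        Code:
        sysuse auto, clear
        
        * pooled regression and the two group regressions
        regress mpg weight turn
        scalar rss_p = e(rss)
        regress mpg weight turn if foreign
        scalar rss_1 = e(rss)
        scalar n1 = e(N)
        regress mpg weight turn if !foreign
        scalar rss_2 = e(rss)
        scalar n2 = e(N)
        
        * k parameters per regression: weight, turn, and the constant
        scalar k = 3
        scalar df2 = n1 + n2 - 2*k
        scalar chowF = ((rss_p - rss_1 - rss_2)/k) / ((rss_1 + rss_2)/df2)
        display "Chow F(" k ", " df2 ") = " chowF ",  p-value = " Ftail(k, df2, chowF)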

        Over time, the term Chow test has been generalized to refer to any test of equality of coefficients in two different models (not restricted to OLS regressions). For some reason that I do not grasp, however, people seem not to apply the term to doing the same thing using the -suest- approach.

        *If I recall correctly, the going rate back in 1964 was $900 per hour on an IBM 7090. That machine had about 150kB (no, that's not a typo) of memory. And don't forget about all the inflation since then: $900 was real money back then. It was roughly a year's tuition at a top-tier private university.



        • #5
          You can use either suest + test or interactions to test differences in coefficients.

          Code:
          sysuse auto, clear
          
          *SUEST + TEST
          regress mpg weight turn if foreign
          est sto m1
          regress mpg weight turn if !foreign
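          * "." below refers to the active (most recent) estimation results, which suest labels _LAST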
          suest m1 .
          test _b[m1_mean:weight]=  _b[_LAST_mean:weight]
          
          *INTERACTIONS
          regress mpg i.foreign##(c.weight c.turn), robust
          Results:

          Code:
          . test _b[m1_mean:weight]=  _b[_LAST_mean:weight]
          
           ( 1)  [m1_mean]weight - [_LAST_mean]weight = 0
          
                     chi2(  1) =    1.01
                   Prob > chi2 =    0.3155
          
          . *INTERACTIONS
          . regress mpg i.foreign##(c.weight c.turn), robust
          
          Linear regression                               Number of obs     =         74
                                                          F(5, 68)          =      39.72
                                                          Prob > F          =     0.0000
                                                          R-squared         =     0.7197
                                                          Root MSE          =     3.1739
          
          ----------------------------------------------------------------------------------
                           |               Robust
                       mpg | Coefficient  std. err.      t    P>|t|     [95% conf. interval]
          -----------------+----------------------------------------------------------------
                   foreign |
                  Foreign  |   43.91567   19.45286     2.26   0.027     5.098091    82.73324
                    weight |  -.0047949   .0009908    -4.84   0.000     -.006772   -.0028178
                      turn |  -.2603489   .1441738    -1.81   0.075    -.5480432    .0273455
                           |
          foreign#c.weight |
                  Foreign  |  -.0025426   .0026247    -0.97   0.336    -.0077801    .0026949
                           |
            foreign#c.turn |
                  Foreign  |  -1.114203   .6737146    -1.65   0.103     -2.45858    .2301732
                           |
                     _cons |   46.52161   3.851928    12.08   0.000     38.83521    54.20801
          ----------------------------------------------------------------------------------
          The test statistics have different distributions under the null hypothesis, but the p-values (0.3155 from the test after suest and 0.336 for foreign#c.weight in the interaction model) are essentially the same.
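
          If you also want the estimated difference itself, with a standard error and confidence interval rather than only a test, a minimal follow-up sketch is below (run it directly after the suest step, before estimating anything else):
          Code:
          * estimated difference in the weight coefficients between the two groups
          lincom [m1_mean]weight - [_LAST_mean]weight
          In the interaction model, the coefficient on 1.foreign#c.weight already reports this difference directly.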

