  • Testing if two coefficients are significantly different from one another in non-nested contexts

    Hello,

    I would like to test if two coefficients are significantly different from one another.

    Does anyone know how I can do this?

    Take the following example: the data generating process is

    y = x + z + e

    where
    y is my dependent variable,
    x is my main regressor of interest,
    z is a control, and
    e is the error term.

    Unfortunately, I do not have an exact measure of x. Rather, my dataset contains x1, which is x as measured by subject 1, and x2, which is x as measured by subject 2.
    For reasons of design, I cannot combine x1 and x2 into a single variable.


    Hence I can estimate the following regressions:

    y = x1 + z
    and
    y = x2 + z

    I want to test whether the coefficients of x1 and x2 are significantly different from one another. Does anyone know how to do this?

    This is an example built from the auto.dta dataset:

    Code:
    sysuse auto.dta, clear
    rename price y
    rename gear_ratio x1
    rename weight z
    gen x2 = log(x1)*3
    keep y x1 x2 z
    
    reg y x1 z, robust
    eststo est_x1
    reg y x2 z, robust
    eststo est_x2

    How can I test the difference between the coefficients on x1 and x2?

    Thanks a lot in advance for your help.


    Best

  • #2
    First estimate with conventional standard errors, then

    Code:
    help suest
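
    A minimal sketch of how this could look with the auto.dta example from #1 (the equation names est_x1_mean and est_x2_mean are the ones suest assigns to models fit by regress; check the suest output header if they differ):

    Code:
    sysuse auto.dta, clear
    rename price y
    rename gear_ratio x1
    rename weight z
    gen x2 = log(x1)*3
    
    * estimate with conventional (non-robust) standard errors and store
    reg y x1 z
    estimates store est_x1
    reg y x2 z
    estimates store est_x2
    
    * combine the estimates and test the two coefficients against each other
    suest est_x1 est_x2
    test [est_x1_mean]x1 = [est_x2_mean]x2
    lincom [est_x1_mean]x1 - [est_x2_mean]x2    // difference with confidence interval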



    • #3
      Thanks,
      would this also work as a post-estimation step after ivreg? Indeed, in the real data I am comparing two 2SLS estimates.

      If I run my IV regressions and then the following command

      Code:
      suest est_x1 est_x2, noomitted
      I get the following error message. Do you know what may be causing it?

      "unable to generate scores for model est_x1
      suest requires that predict allow the score option"

      Last edited by Tom Ford; 15 Nov 2021, 11:28.



      • #4
        No, ivregress is not supported. However, see this thread: https://www.statalist.org/forums/for...ferent-samples



        • #5
           Thanks, this is interesting; however, it tests the equality of coefficients from the same regression estimated over two different samples.
           In my case, I have two different regressions: one with x1 and the other with x2.
           Is there any way to test the equality of two coefficients from two non-nested regressions?



          • #6
             Bootstrapping is always an option for such a case, although I wonder how valid the result would be. The general idea is to write a wrapper program that runs both regressions, stores the coefficients of interest, and computes their difference; you then bootstrap a confidence interval for this difference statistic, as sketched below.
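
             A possible implementation of that idea, reusing the auto.dta example from #1 (the program name mydiff, the number of replications, and the seed are illustrative choices):

             Code:
             sysuse auto.dta, clear
             rename price y
             rename gear_ratio x1
             rename weight z
             gen x2 = log(x1)*3
             
             * wrapper program: run both regressions on the current (bootstrap) sample
             * and return the difference between the two coefficients of interest
             capture program drop mydiff
             program define mydiff, rclass
                 reg y x1 z
                 scalar b1 = _b[x1]
                 reg y x2 z
                 scalar b2 = _b[x2]
                 return scalar diff = b1 - b2
             end
             
             * bootstrap the difference and inspect normal-based and percentile CIs
             bootstrap diff = r(diff), reps(500) seed(12345): mydiff
             estat bootstrap, percentile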
            Best wishes

            (Stata 16.1 MP)



            • #7
              Originally posted by Tom Ford
              Thanks, this is interesting; however, it tests the equality of coefficients from the same regression estimated over two different samples.
              In my case, I have two different regressions: one with x1 and the other with x2.
              Is there any way to test the equality of two coefficients from two non-nested regressions?
               The procedure highlighted follows what suest does. As suest has no issues dealing with cases such as yours, neither will that procedure. However, you need some data transformations beforehand. There is no data example on your part, which limits the extent of help that I can offer. See FAQ Advice #12 for hints on how to create a reproducible example.
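
               To make the "data transformations" idea concrete, here is one possible stacking sketch, again using the auto.dta example from #1. This is only an illustration of a general stack-and-interact approach and may differ in details from the procedure in the linked thread:

               Code:
               sysuse auto.dta, clear
               rename price y
               rename gear_ratio x1
               rename weight z
               gen x2 = log(x1)*3
               gen id = _n
               
               * stack the data: one copy uses x1 as the regressor, the other uses x2
               expand 2, gen(copy)
               gen x = cond(copy==0, x1, x2)
               
               * one fully interacted regression; clustering on id accounts for the
               * fact that every original observation appears twice
               reg y i.copy##(c.x c.z), vce(cluster id)
               
               * the interaction coefficient 1.copy#c.x is the difference between
               * the x1 and x2 slopes; test whether it is zero
               test 1.copy#c.x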
              Last edited by Andrew Musau; 16 Nov 2021, 03:18.



              • #8
                 In case someone is interested, I found the following approach in:
                 Paternoster, R., Brame, R., Mazerolle, P., & Piquero, A. R. (1998). Using the Correct Statistical Test for the Equality of Regression Coefficients. Criminology, 36(4), 859–866.

                 They argue that one can test whether two coefficients differ significantly using the following z score:
                 z = (b1 - b2) / sqrt(SEb1^2 + SEb2^2)

                 where b1 and b2 are my two coefficients of interest and SEb1 and SEb2 are the associated standard errors.

                I hence implemented the formula in Stata this way:

                Code:
                 sysuse auto.dta, clear
                 rename price y
                 rename gear_ratio x1
                 rename weight z
                 gen x2 = log(x1)*3
                 keep y x1 x2 z
                 
                 * first run both regressions and store coefficients and standard errors
                 reg y x1 z, robust
                 eststo est_x1
                 
                 matrix list e(b)
                 scalar b1 = e(b)[1,1]
                 di b1
                 matrix list e(V)
                 scalar v1 = e(V)[1,1]
                 scalar se1 = sqrt(v1)
                 di se1
                 
                 reg y x2 z, robust
                 eststo est_x2
                 
                 matrix list e(b)
                 scalar b2 = e(b)[1,1]
                 di b2
                 matrix list e(V)
                 scalar v2 = e(V)[1,1]
                 scalar se2 = sqrt(v2)
                 di se2
                 
                 * use Clogg et al. 1995 and Paternoster et al. 1998 to test the difference in coefficients
                 scalar z1 = b1-b2/sqrt(se1^2+se2^2)
                 scalar pv1 = 2*(1-normal(abs(z1)))
                 di pv1
                 * this is my P value!


                • #9
                   Tom Ford, there is a small error in the last lines of your code: there should be a parenthesis around b1-b2.

                  Code:
                   * use Clogg et al. 1995 and Paternoster et al. 1998 to test the difference in coefficients
                  
                  scalar z1 = (b1-b2)/sqrt(se1^2+se2^2)
                  scalar pv1=2*(1-normal(abs(z1)))
                  di pv1
                  * this is my P value!



                  • #10
                     I'm surprised this is not built into Stata, partly because the suest command doesn't work with robust standard errors, if I'm not mistaken.



                    • #11
                       Not quite: the original estimates may not use robust SEs, but you can certainly add "vce(robust)" to the suest command itself.
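
                       For example, with the stored estimates est_x1 and est_x2 from the sketch under #2 (a minimal illustration):

                       Code:
                       * conventional SEs in the original models, robust SEs applied by suest
                       suest est_x1 est_x2, vce(robust)
                       test [est_x1_mean]x1 = [est_x2_mean]x2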



                      • #12
                        Thanks for letting me know!
