  • Comparing regression coefficients

    Hi all,

    I wish to compare the coefficient of ESG_within between the two models below to see whether it differs.

    Code:
    xtreg ctfp ESG_within ESG_mean $control, re cluster(iso3)
    to
    Code:
    xtreg ctfp ESG_within ESG_mean $control, fe cluster(iso3)
    How would I do this?

    Kind regards,
    Maarten

  • #2
    I think this specific problem is solved with the Hausman test; see https://www.stata.com/manuals/rhausman.pdf
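    A minimal sketch of what that could look like (the classical -hausman- test is not valid with cluster-robust standard errors, so both models are refitted here with default standard errors; the stored names are illustrative):

    Code:
    * Fit both models with conventional (non-robust) standard errors,
    * store the estimates, then compare them with -hausman-
    xtreg ctfp ESG_within ESG_mean $control, fe
    estimates store fe_model
    xtreg ctfp ESG_within ESG_mean $control, re
    estimates store re_model
    
    * Consistent estimator (fe) first, efficient estimator (re) second
    hausman fe_model re_model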
    Best wishes

    (Stata 16.1 MP)



    • #3
      Hi Felix,

      My random effects model follows Bell & Jones (2015): an adjusted random effects model whose within coefficients should be identical to those of a fixed effects model while offering more flexibility. It is Mundlak's approach; Mundlak argued that the Hausman test is not a test on which to base model choice, but merely a check of whether an assumption of -re- has been violated. Since the Mundlak method essentially sets aside the Hausman idea, I would like to test the models another way, as I mainly want to check the robustness of my RE model against the FE model.
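      For reference, a minimal sketch of how the within-between terms in such a model are typically constructed (assuming the raw score is a variable called ESG and the panel identifier is iso3; the variable names are illustrative):

      Code:
      * Hypothetical construction of the Mundlak / within-between terms
      bysort iso3: egen ESG_mean = mean(ESG)   // between component: panel-level mean
      generate ESG_within = ESG - ESG_mean     // within component: deviation from the panel mean
      xtreg ctfp ESG_within ESG_mean $control, re cluster(iso3)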



      • #4
        For now I have applied the following method, but I was wondering whether this is a correct way of testing:
        Code:
        * Run the FE model and extract coefficients and standard errors
        xtreg ctfp ESG_within ESG_mean $control, fe cluster(iso3)
        estimates store fe_model
        
        * Extract FE coefficients and standard errors
        scalar fe_coef = _b[ESG_within]
        scalar fe_se = _se[ESG_within]
        
        * Run the RE model and extract coefficients and standard errors
        xtreg ctfp ESG_within ESG_mean $control, re cluster(iso3)
        estimates store re_model
        
        * Extract RE coefficients and standard errors
        scalar re_coef = _b[ESG_within]
        scalar re_se = _se[ESG_within]
        
        * Compute the difference in coefficients
        scalar diff = fe_coef - re_coef
        
        * Compute the standard error of the difference
        scalar se_diff = sqrt(fe_se^2 + re_se^2)
        
        * Compute the z-statistic
        scalar z_value = diff / se_diff
        
        * Compute the two-tailed p-value
        scalar p_value = 2 * (1 - normal(abs(z_value)))
        
        * Display the results
        di "Difference in ESG_within coefficients: " diff
        di "Standard error of the difference: " se_diff
        di "Z-statistic for the difference in ESG_within coefficients: " z_value
        di "P-value for the difference in ESG_within coefficients: " p_value



        • #5
          Maarten:
          as I read Felix's helpful reply, you should be more interested in which estimator is right for your dataset (-fe- or -re-?):
          1) if -re- is the way to go, -fe- is still consistent but inefficient;
          2) if -fe- is the way to go, -re- is inconsistent.
          That said, in addition to -rhausman-, you may want to consider the Stata community-contributed module -xtoverid-. Please note that, being a bit aged, -xtoverid- does not support -fvvarlist- notation (see the -xi:- prefix as the usual workaround).
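          A minimal sketch of how -xtoverid- would be used here (it is run once the -re- model has been fitted, and its Sargan-Hansen statistic respects the clustered VCE, unlike the classical -hausman- test):

          Code:
          * Community-contributed; install once
          ssc install xtoverid
          
          * Fit the -re- model with clustered standard errors, then test
          xtreg ctfp ESG_within ESG_mean $control, re cluster(iso3)
          xtoverid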
          Kind regards,
          Carlo
          (StataNow 18.5)



          • #6
            Hi Carlo. I fully understand the difference between fixed effects and random effects. However, by following an adjusted Mundlak approach, the inconsistency of -re- would be eliminated, and it would offer more flexibility and information than -fe- does. Also see Bell, A., & Jones, K. (2015). Explaining fixed effects: Random effects modeling of time-series cross-sectional and panel data. Political Science Research and Methods, 3(1), 133-153.

            My goal now is to compare the coefficients of the two models with each other, to see whether this method is correct. Running the code above, the p-value for the test that the difference between the two ESG_within coefficients is zero is 0.94. I was wondering whether this is a correct way of comparing them.

            Thanks in advance,
            Maarten



            • #7
              Maarten:
              the following thread might be helpful for elaborating on this: Comparing coefficients across two models (same X data, but slightly different Ys) - Statalist
              Kind regards,
              Carlo
              (StataNow 18.5)



              • #8
                Originally posted by Maarten Loomans View Post
                Hi Carlo. I fully understand the difference between fixed effects and random effects. However, by following an adjusted Mundlak approach, the inconsistency of -re- would be eliminated, and it would offer more flexibility and information than -fe- does.
                There is no evidence that you have estimated a correlated random effects (Mundlak) regression based on what you have shown. What you have presented are random effects and fixed effects regressions. In this case, as recommended, you need a Hausman test to justify using the RE model. If you are using StataNow 18.5 or later, the xtreg command has a -cre- option that allows you to estimate a CRE model. Afterward, -estat mundlak- will perform a Mundlak specification test, should you wish to choose RE over CRE/FE. Unlike the Hausman test, this test is valid when the original estimates include cluster-robust standard errors. More discussion of CRE estimation in Stata here: https://www.stata.com/statanow/corre...effects-model/
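                A minimal sketch of that workflow, assuming StataNow 18.5 or later (since -cre- constructs the panel-level means internally, the raw score is supplied rather than the hand-built ESG_within/ESG_mean terms; the variable name ESG is illustrative):

                Code:
                * Correlated random-effects (Mundlak) estimation, StataNow 18.5+
                xtreg ctfp ESG $control, cre vce(cluster iso3)
                
                * Mundlak specification test of RE against CRE/FE;
                * valid with cluster-robust standard errors
                estat mundlak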


                I wish to compare the coefficients of ESG_within between two models to see if they are any different.


                Code:
                xtreg ctfp ESG_within ESG_mean $control, re cluster(iso3)
                to

                Code:
                xtreg ctfp ESG_within ESG_mean $control, fe cluster(iso3)
                Also, this does not make sense if RE is inconsistent. If RE is consistent, the difference between FE and RE coefficients is not systematic. So, such a test is pointless in my opinion.
                Last edited by Andrew Musau; 09 Oct 2024, 06:39.

