  • Diagnostic tests when applying static estimators to dynamic panel models

    Hi all. I am running some two-way FE regressions using a panel of country-year observations. The baseline model is:

    Code:
    xtreg EPR LK PAU CO i.year, fe cluster(country)
    I use xttest0 and xtoverid on equivalent RE specifications to test for (1) pooled OLS versus random effects and (2) random versus fixed effects. Suppose I estimate a specification that includes a lagged dependent variable whilst using the fixed effects estimator:

    Code:
    xtreg EPR L.EPR LK PAU CO i.year, fe cluster(country)
    My question is this: are xttest0 and xtoverid still appropriate, or does the presence of L.EPR make them nonsensical? For context, in all static models the tests strongly suggest using the FE estimator. However, when adding the lagged dependent variable, xttest0 gives me a p-value of 1.000. Correspondingly, xtoverid says 'saved RE estimates are degenerate (sigma_u=0) and equivalent to pooled OLS'.
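
    For concreteness, the testing workflow on the static specification is along these lines (xtoverid is the user-written command from SSC):

    Code:
    * static RE counterpart used for the two tests (install via: ssc install xtoverid)
    xtreg EPR LK PAU CO i.year, re cluster(country)
    xttest0     // Breusch-Pagan LM test: pooled OLS vs. random effects
    xtoverid    // cluster-robust Hausman-type test: random vs. fixed effects
    * if xtoverid rejects i.year, replace it with explicit year dummies (it predates factor variables)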

    Should I take these tests at face value, or does including a lagged dependent variable cause them, for whatever reason, to generate implausible results? Or is it more likely that the tests are indicating that Nickell bias is a problem? Apologies if these are strange or obvious questions; I'm quite new to dynamic panel models. Thank you!

    (Note: I am aware of the risks of applying static estimators to dynamic models. However, my panel is large T, small N, so I'd hope FE is suitable. In any case, GMM suffers from severe instrument proliferation in this setting.)
    Last edited by William Law; 27 Feb 2025, 08:09.

  • #2
    Nope. Nickell Bias.



    • #3
      Thank you so much for the reply, George. Just to clarify: are you suggesting that the estimation strategy in general is not viable because of Nickell bias? Or just that the Breusch-Pagan LM and Hausman tests are not appropriate?



      • #4
        Both



        • #5
          Thank you, much appreciated!



          • #6
            Is your T>30?



            • #7
              also see: https://www.statalist.org/forums/for...el-data-models



              • #8
                T=30, N=17. I have looked at xtdpdbc before, but fear my regressors don't meet the exogeneity assumption.



                • #9
                  If T is truly large, then the Nickell bias can be ignored. However, I am not sure I would classify your sample as "large T", although it is relative to N. To me, this is still more of a "small-N, small-T" setting, which is generally challenging.

                  If you worry about the exogeneity assumption with xtdpdbc, then you should equally worry about the same assumption with xtreg.

                  Because the lagged dependent variable cannot be regarded as strictly exogenous (unless T is really large), the tests you are using are generally not applicable.

                  Given that xtreg was your starting point, xtdpdbc would still be the natural next step, as it retains all the assumptions about the regressors and just adjusts for the predetermined nature of the lagged dependent variable. You can implement both a dynamic FE and RE version with this command. There are also postestimation commands available to test for FE versus RE; see
                  Code:
                  help xtdpdbc postestimation
                  https://www.kripfganz.de/stata/
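
                  Schematically, with your variables the two versions would look something like this (a sketch rather than verbatim syntax; the help file gives the definitive option names and how the lag of the dependent variable is specified):

                  Code:
                  * dynamic FE version (option names here are illustrative; check -help xtdpdbc-)
                  xtdpdbc EPR LK PAU CO i.year, fe vce(robust)
                  estimates store bcfe
                  * dynamic RE version
                  xtdpdbc EPR LK PAU CO i.year, vce(robust)
                  estimates store bcre
                  * FE versus RE: see -help xtdpdbc postestimation- for the dedicated test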



                  • #10
                    Thank you, Sebastian, much appreciated.

