  • Pre-treatment test: year fixed effects drops

    Hi,

    I am using Stata to estimate the effect of an event that happened in 2015 (plus several interactions with the treatment indicator), and now I am trying to test for pre-treatment effects with this code:

    xtpoisson DV year2013 year2014 year2015 control_variables i.year, fe vce(robust)

    I am not sure this is a legitimate way to test for pre-treatment effects, but I have seen some papers use this method and learned it in a policy analysis class.
    The problem is that when I run this model, two year fixed effects are dropped, which means that, based on what I learned, the coefficients on the year variables (e.g., year2014, year2013) whose year fixed effects were dropped are not credible.

    Is there any way I can improve this situation and test for pre-treatment effects? I think this kind of issue is not well known, because most papers only indicate whether time fixed effects are included, rather than showing all the year fixed effects in their result tables.

    Thank you!!


  • #2
    The problem is that when I run this model, two year fixed effects are dropped, which means that, based on what I learned, the coefficients on the year variables (e.g., year2014, year2013) whose year fixed effects were dropped are not credible.
    Well, of course that's what happens. You have year2013 and year2014 each in the model twice: once as stand-alone variables and once each inside i.year. You can never have the same variable twice in the same regression: it's the ultimate collinearity, since every variable is collinear with itself. Stata recognizes this and omits something to break the collinearity. It could do that by dropping year2013 and year2014, or by dropping some of the i.year indicators; it doesn't really matter which.

    But why are you doing this anyway? If you want to know the magnitude of the shocks in 2013 and 2014, just go back to your original regression and look at the coefficients of 2013.year and 2014.year in that output! Those don't sound like "pre-treatment effects" to me, although admittedly I don't really know what you mean by that phrase. But if what you are interested in can be represented by the indicators year2013 and year2014, then you don't need them, because they are already there as 2013.year and 2014.year.
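    A minimal sketch of that point, reusing the variable names from the original post (control_variables remains a placeholder for the actual controls):

    ```stata
    * No hand-made year dummies needed: i.year already contains them.
    * Each year's shock is reported relative to the omitted base year.
    xtpoisson DV control_variables i.year, fe vce(robust)

    * The 2013 and 2014 "pre-treatment" shocks are simply the
    * coefficients reported for 2013.year and 2014.year in this output.
    ```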



    • #3
      Thank you so much for your advice, Clyde.

      What I meant by 'pre-treatment effect' is that, since I am trying to use an exogenous shock (a natural experiment such as a natural disaster or an institutional change) in my study, I want to show that the shock I am using is indeed exogenous.

      The image I have attached shows the pre-treatment test I mentioned in the original post. As you can see in the table, the authors test for pre-treatment effects by including variables for the two years preceding the adoption of Electronic Health Records (Basic t-1 and Basic t-2), along with time fixed effects, which here are year fixed effects. The authors conclude from this result that there are no significant pre-treatment effects. Thus, they argue that doctors are leaving the hospitals because of the adoption of Electronic Health Records, since before the adoption the departure of doctors is not significant. I can imagine that a couple of time fixed effects were dropped from their table, although the table only indicates that time fixed effects were included.

      What you are saying is that this kind of test cannot be used as a pre-treatment test, as Stata will drop the duplicated variables. I wonder whether there is any legitimate way to test for pre-treatment effects. As I am in the social sciences, we have to rely on natural experiments and persuade skeptical readers that there really was an exogenous shock.

      Thank you so much in advance.



      • #4
        I can guarantee you that some of the year indicators were dropped in those analyses. It is mathematically impossible to have all of those variables in the model: it would have two collinearities and be doubly unidentifiable!

        The way you can force Stata to retain your year2013 and year2014 variables is to replace the i.year term in your model with a list of year indicators that leaves out three (the usual 1 plus the two additional omissions needed for your purpose). So, for example, if in your data year ranges from 1999 through 2015, instead of i.year you could say i(1999/2012).year.
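        As a concrete sketch of that replacement, assuming (as in the example) year runs from 1999 through 2015, and reusing the variable names from the original post:

        ```stata
        * i(1999/2012).year includes indicators only for 1999-2012, so
        * the hand-made year2013-year2015 dummies are the sole copies of
        * those years and will no longer be dropped for collinearity.
        xtpoisson DV year2013 year2014 year2015 control_variables ///
            i(1999/2012).year, fe vce(robust)
        ```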

        By the way, I would say that the authors failed to demonstrate the absence of shocks in the two preceding years, at least for model (2). Only those who worship the false god of p < 0.05 might be tempted to conclude that. But look at the coefficients for t-1 and t-2: they are 80% and 60% as large as the "Basic" effect. The fact that their p-values are borderline insignificant doesn't mean that there's nothing going on there; that's just making a fetish out of the "magic number" .05. The case is a bit more persuasive for the t-2 effect in model (6), where at least it's only about 45% as large as the "Basic" effect. But that's hardly impressive; I might call that result "moderately suggestive."



        • #5
          Thank you, Clyde. I absolutely agree. In conclusion, do you think I should not bother testing for pre-treatment effects the way the authors did?

          I think even including the year variables (year2013, year2014) while omitting the corresponding year fixed effects cannot be a good solution for a 'pre-treatment test', because the year effects may be attributable to some yearly-varying omitted variables, which would normally be controlled for by the year fixed effects. For example, in the table above, the authors assume that the significant effect of the 'Basic' year is attributable to the adoption of Electronic Health Records. However, since the time fixed effects for t-1, t-2, and t are dropped, those effects may instead be attributable to some time-varying omitted variables, as we can see from the coefficients in model (2). Do you think presenting a scatter plot or graph that shows a discontinuity is a better solution?

          Thank you so much. You have been very helpful!



          • #6
            Yes, I would find a graph showing a discontinuity more convincing. But perhaps what you need to convince a skeptical audience is to show that you can't also get a discontinuity using some irrelevant year as the cutpoint. I mean, if something were happening in every year, it would produce a discontinuity anywhere you cut. So I think it's not a matter of adding more variables to this model; it's a question of running the model with a "placebo" replacement for year2015.
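            For the graph, one option (a sketch, assuming the user-written coefplot package from SSC is installed, and reusing the variable names from the original post) is to plot the full set of i.year coefficients and look for a jump at 2015:

            ```stata
            * Plot every year's coefficient: a genuine shock should show
            * a visible jump at 2015 with roughly flat coefficients before.
            * coefplot is user-written: ssc install coefplot
            xtpoisson DV control_variables i.year, fe vce(robust)
            coefplot, keep(*.year) vertical yline(0)
            ```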



            • #7
              By a "placebo" replacement for year2015, you mean I should run the model using year2014 or year2013 instead of 2015, to see whether I get the same treatment effects (and the same interaction effects) as in 2015?
              My current model is,

              xtpoisson DV after2015 IV after2015*IV control_variables i.year, fe vce(robust)

              So, placebo replacement can be,

              xtpoisson DV after2014 IV after2014*IV control_variables i.year, fe vce(robust)
              &
              xtpoisson DV after2013 IV after2013*IV control_variables i.year, fe vce(robust)

              And then compare the results to see whether the previous years show a treatment effect similar to that of 2015?



              • #8
                Yes, that's exactly what I had in mind. (Although you need # not * for the interactions in Stata syntax.)
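                A sketch of the corrected syntax for the placebo models, assuming IV is continuous (use i.IV instead of c.IV if it is categorical):

                ```stata
                * ## expands to the main effects plus the interaction, so
                * after2015 and IV need not be typed separately.
                xtpoisson DV i.after2015##c.IV control_variables i.year, fe vce(robust)

                * Placebo versions, assuming after2014/after2013 indicators exist:
                xtpoisson DV i.after2014##c.IV control_variables i.year, fe vce(robust)
                xtpoisson DV i.after2013##c.IV control_variables i.year, fe vce(robust)
                ```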



                • #9
                  Thank you so much for all your advice, Clyde. You helped me a lot in designing research for my 2nd year Ph.D. paper! :D
