  • Generalized difference-in-differences

    Hello all!

    I am new to Statalist. I appreciate any and all help.

    I have a very specific modeling question. I am evaluating a police department policy in New York City. I have crime data, by month, over a five-year period (Jan-2012, Feb-2012, Mar-2012, ..., Nov-2016, Dec-2016) for all precincts in the city.

    The program was only implemented in specific precincts, and only went into effect during the summer months in a few of these years. For example, in 2015, the program goes into effect in the summer (e.g., June, July, and August), then ends immediately after. Then, it goes back into effect the summer of the following year in 2016 (e.g., May, June, and July), then ends again. Now, I could use the archetypical difference-in-differences (DD) model and run separate regressions by year. This model amounts to the following:

    y_pt = b_0 + b_1 Treatment_p + b_2 Post_t + b_3 (Treatment_p * Post_t) + e_pt

    where y_pt is the crime rate in precinct p and month t. The variable Treatment_p is a dummy indexing treated precincts (e.g., 20 precincts comprise the treatment group, and the remaining 50 or so precincts comprise the control group). The variable Post_t indexes the summer months in both treatment and control groups (e.g., a dummy equal to 1 for Jun, Jul, and Aug, 0 otherwise). Interacting the two dummies gives us an estimate of b_3, the treatment effect for that year. It is worth noting that I estimate models separately for each year. By doing this, I standardize the post-treatment period: I would only be comparing the several months before the intervention (e.g., Jan, Feb, Mar, Apr, and May) with the three months when the intervention is in place (Jun, Jul, and Aug) in each year.
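    In Stata, this single-year model would amount to something like the following (a sketch only; crime_rate, treatment, post, year, and precinct are placeholder names I am assuming, not my actual variables):

    ```stata
    * Classic DD for one year via factor-variable notation.
    * crime_rate, treatment, post, year, precinct are hypothetical variable names.
    regress crime_rate i.treatment##i.post if year == 2015, vce(cluster precinct)
    * The coefficient on 1.treatment#1.post is the estimate of b_3.
    ```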

    However, I want to exploit more of the variation across time. Modeling this is somewhat complicated as the timing of the intervention varies a little depending on the year. For example, in 2015, the intervention runs from Jun-Aug; then in 2016, it runs from May-Jul. In addition, there are also different precincts receiving the program depending on the year, though most of the precincts receiving the intervention do stay the same. Then I noticed I could use the more "general" DD approach popularized by Bertrand et al. (2004):

    Outcome_pt = Group Fixed Effects + Time Fixed Effects + delta*Policy_pt + e_pt

    This involves including a full set of “precinct” effects (dummies for each precinct), a full set of “year” effects (dummies for each year), and a dummy for when the policy was actually in effect (Policy_pt). If all assumptions are met, the variable Policy_pt would "turn on" in Jun-2015, Jul-2015, and Aug-2015 (first wave of the intervention), then off, then back on again in May-2016, Jun-2016, and Jul-2016 (second wave of the intervention), and so on.

    My main question: Is the “generalized” DD model amenable to a “policy dummy” that turns on and off over the full 'month-year' panel data series? Or, when a program/policy variable is turned on (Policy_pt = 1), must it stay turned on for the rest of the panel series (i.e., program in effect) for the “generalized” DD to work?

    Also, is the inclusion of year effects (i.e., year dummies) appropriate in this context? The intervention is only in effect during the summer months of each year, and so I wonder whether year fixed effects are appropriate, since the intervention only varies over specific months within a given year.

    And finally, some papers employing the basic DD approach include a “pre-period” mean of the outcome variable on the right-hand side of the basic DD model. They argue that this “controls” for regression to the mean. Konda et al. (2016) used this in their paper investigating the effects of vacant lot 'greening' on crime. This could be useful for my study due to the cyclical crime patterns observed in the data.

    Anyway, I know that was a lot. Please let me know if I have been unclear!

    Thank you in advance!

    Respectfully,

    Tom

  • #2
    Outcome_pt = Group Fixed Effects + Time Fixed Effects + delta*Policy_pt + e_pt
    Not quite right. By your description, different precincts were involved in different years. So there is no "group" because a precinct might be in the "treated" group one year and not the next. So instead of Group Fixed Effects you should use precinct fixed effects.

    My main question: Is the “generalized” DD model amenable to a “policy dummy” that turns on and off over the full 'month-year' panel data series? Or, when a program/policy variable is turned on (Policy_pt = 1), must it stay turned on for the rest of the panel series (i.e., program in effect) for the “generalized” DD to work?
    No, it does not need to stay turned on. In fact, it must not do so. In this model, the Policy_pt variable must be 1 in exactly the observations (combination of precinct and month) when that precinct has the policy active in that month, and 0 in all other situations.
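    For example, such an on/off indicator might be constructed like this (a sketch only; the precinct numbers, dates, and variable names here are purely illustrative, and monthly_date is assumed to be a Stata %tm monthly date):

    ```stata
    * Hypothetical sketch: precinct lists, dates, and variable names are assumed.
    * policy = 1 only in precinct-months where the intervention is active.
    gen byte policy = 0
    replace policy = 1 if inlist(precinct, 9, 14, 22) & inrange(monthly_date, tm(2015m6), tm(2015m8))
    replace policy = 1 if inlist(precinct, 9, 14, 32) & inrange(monthly_date, tm(2016m5), tm(2016m7))
    ```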

    Also, is the inclusion of year effects (i.e., year dummies) appropriate in this context? The intervention is only in effect during the summer months of each year, and so I wonder whether year fixed effects are appropriate, since the intervention only varies over specific months within a given year.
    It must be month, not year here, because the policy turns off and on in months.

    And finally, some papers employing the basic DD approach include a “pre-period” mean of the outcome variable on the right-hand side of the basic DD model. They argue that this “controls” for regression to the mean. Konda et al. (2016) used this in their paper investigating the effects of vacant lot 'greening' on crime. This could be useful for my study due to the cyclical crime patterns observed in the data.
    In a model with precinct-level fixed effects (see above), the pre-period mean will be collinear with those fixed effects and will drop out of the model. The precinct-level fixed effects automatically adjust for all time-invariant attributes of the precincts, including the pre-period mean crime level.

    One other thought I have, though you may well have already considered this. This kind of model assumes that when the policy is implemented, there is an immediate effect on the outcome variable, and that when the policy is discontinued, the outcome immediately reverts to the pre-policy level. Is that realistic? Is there no delay in the response to the policy, perhaps due to gradual rollout of the implementation, or part of the response relying on community awareness of the policy change? Similarly, on the other end: might there not be a "washout period" before things return to pre-policy levels? (If such a lag period is substantially shorter than the one-month time unit that characterizes your data, then this is a non-issue. But if it is a large fraction of a month, or even longer, then this approach to modeling may be insufficient.)


    • #3
      Thank you Clyde!

      Excuse me. I meant "Precinct Fixed Effects." Using the word "group" is a bit of a misnomer in this context. I have more questions if you don't mind entertaining them.

      So to be clear, a full set of dummies for each "precinct" (irrespective of group, i.e., treatment/control) is representative of "precinct" fixed effects?

      Also, you mentioned that a series of "month" dummies is more appropriate in this context; I thought so myself since the intervention is in place for only a few months in each year. Now, if I stack all these monthly observations (5 years) on top of one another, would this mean I have 60 month dummies (12 months in each year)? Also, would this be my "time" fixed effects in the more "general" DD model I have outlined earlier?

      And thank you for clearing up the fact that the Policy_pt dummy does not need to stay turned on. Policy_pt = 1 during combinations of month and precinct when the policy is in effect, yes. So, for example, in a given year Policy_pt is turned off (e.g., Jan-May), then turns on during the summer months (e.g., Jun-Aug), then turns off again in the months after (e.g., Sep, Oct, Nov, and Dec). Is this okay for those specific precincts that are treated? I suppose this is what I meant by turning "on and off" in the "general" DD model. I hope that makes sense.

      And finally, is it proper to call the "generalized" DD model a fixed effects model, since it has a dummy for each precinct (similar to the least squares dummy variables [LSDV] estimator)? I know it is traditionally called a 'two-way' fixed effects estimator. Is it more appropriate to refer to it as that? I hope I am not getting caught in the weeds with this one.

      As for your question to me: these policies have a tendency (though not always) to be strong initially, then decay thereafter. I am actually trying to assess this when I begin running these models.

      Thank you for all your help. Means a lot!

      -Tom


      • #4
        So to be clear, a full set of dummies for each "precinct" (irrespective of group, i.e., treatment/control) is representative of "precinct" fixed effects?
        Correct.

        Also, you mentioned that a series of "month" dummies is more appropriate in this context; I thought so myself since the intervention is in place for only a few months in each year. Now, if I stack all these monthly observations (5 years) on top of one another, would this mean I have 60 month dummies (12 months in each year)? Also, would this be my "time" fixed effects in the more "general" DD model I have outlined earlier?
        Well, you have 60 months, so there are 60 monthly indicators ("dummies"), but only 59 will enter the model--Stata will omit one of them to avoid collinearity. Let's get into the details a little bit here. I often see people here working with data sets that have a variable for month (1 to 12) and another variable for year (whatever the range is). That is not satisfactory for these purposes. You need a single variable that reflects both the month and the year, so it has 60 separate values (e.g., Jan 2012 is one value, and April 2013 is another). If you do not have such a variable, create it using one of Stata's datetime conversion functions from whatever you do have. Also, the variable must be a Stata internal format numeric variable, not a string that reads as a month-year combination to human eyes. So you may need to immerse yourself in the details of -help datetime- to work this out. Or, perhaps you already have this.

        Once you have a Stata internal format numeric variable expressing the month-year combination, you will want to use factor-variable notation to enter it into the estimation command. If you are not familiar with factor-variable notation, all the details can be found in -help fvvarlist-. But it's actually quite simple. If the variable is called monthly_date, you enter it into the regression as i.monthly_date.
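        A minimal sketch of both steps, assuming you have separate numeric year and month variables (names are, of course, assumptions):

        ```stata
        * Assumes separate numeric variables year (2012-2016) and month (1-12).
        gen monthly_date = ym(year, month)   // Stata internal monthly date
        format monthly_date %tm              // displays as 2012m1, 2012m2, ...
        * Then enter it as the time fixed effects with factor-variable notation, e.g.:
        * regress crime_rate i.precinct i.monthly_date policy, vce(cluster precinct)
        ```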

        And thank you for clearing up the fact that the Policy_pt dummy does not need to stay turned on. Policy_pt = 1 during combinations of month and precinct when the policy is in effect, yes. So, for example, in a given year Policy_pt is turned off (e.g., Jan-May), then turns on during the summer months (e.g., Jun-Aug), then turns off again in the months after (e.g., Sep, Oct, Nov, and Dec). Is this okay for those specific precincts that are treated? I suppose this is what I meant by turning "on and off" in the "general" DD model. I hope that makes sense.
        Makes sense, and is correct. In addition, for those precincts that never get treated, this value is 0 in every month.

        And finally, is it proper to call the "generalized" DD model a fixed effects model, since it has a dummy for each precinct (similar to the least squares dummy variables [LSDV] estimator)? I know it is traditionally called a 'two-way' fixed effects estimator. Is it more appropriate to refer to it as that? I hope I am not getting caught in the weeds with this one.
        Since you will be including both precinct and month indicators in this model it is a two-way fixed effects model (as a generalized DID analysis must be.) I think the term "fixed-effects model" refers, indifferently, to a model with just precinct indicators, or to a model like yours with both precinct and month. Probably when you write up your methods you will want to, at least initially, call it a two-way fixed effects model, to keep things clear and specific. After first mention, though, trimming that down to just "fixed effects model" might help you squeeze past a tight word limit, with no misunderstanding created thereby.




        • #5
          Great stuff!!! I already concatenated the month and year variable into a datetime variable (i.e., month-year) that I can easily convert into a series of month dummies across all years.

          Also, should I get my hands on a "continuous" treatment (dosage such as a measure of police "strength" or arrest severity, etc.), could it replace the Treatment_p indicator in the basic DD model outlined above, or the Policy_pt variable in the more 'general' DD setting? A continuous treatment variable would obviously show some higher intensity during the intervention period, so I feel it could work well.

          And what is your opinion on the inclusion of 'controls' in a model like this? Obviously, there is wide variation 'across precincts' with respect to square mileage, population density, demographic composition, etc. These controls are either time-invariant, or don't change much 'across time' in my panel series. So there is obvious variation across these jurisdictions, but not through time. Whether I use the basic DD model or the 'two-way' fixed effects approach, do you feel these controls are worth including? The basic DD model is similar to the standard fixed effects estimator.

          I only ask because I am comparing 'treated' jurisdictions with 'non-treated' jurisdictions that are located in very different parts of the city, some in different geographic boroughs.

          Thanks again!!!


          • #6
            If you have a continuous treatment intensity variable, so long as it is scaled in such a way that non-treatment is the same thing as 0 intensity, then yes, you can use it the same way as a discrete treatment variable. Remember that there is no "treatment" variable in the regression model in generalized DID (as opposed to classical DID where there is such a variable.) In the generalized DID model, we have fixed effects for precincts and months and then a single variable which is 0 or 1 depending on whether treatment is "on" or "off" in that precinct in that month. With a continuous variable, the Policy_pt variable is just 0 if the policy is off in that precinct in that month, and equal to the intensity when the policy is "on" in that precinct in that month.
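            In Stata, that coding might be as simple as this (a sketch; officers is a hypothetical dosage variable, and policy is the 0/1 on/off indicator):

            ```stata
            * officers is a hypothetical dosage measure; policy is the 0/1 on/off indicator.
            * intensity equals the dosage while the policy is "on", and 0 otherwise.
            gen intensity = cond(policy == 1, officers, 0)
            ```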

            As for covariates, the ones you mention are all invariant within a precinct over time, so there is no possibility of including them in this fixed effects model, and there is also no need to include them, as their effects are already adjusted for by the precinct indicators themselves. If you have time-varying covariates, those could be worth including if they are associated with the crime rate outcome you are modeling. (But don't overdo it: I believe that there are only 77 precincts in the NYPD, so you can't squeeze too many predictors into the model without overfitting the noise.)

            If you were to revert to the classic DID model (though, given that the intervention goes on and off within precincts and different precincts may be treated in different years, I don't see how you can actually do that), then these same covariates could be included if you did not use a fixed-effects regression for your analysis but used OLS instead. But, frankly, it seems to me that using a fixed effects regression is better because it will automatically adjust for these covariates without any additional work on your part, and, for free, it also adjusts for other time-invariant precinct attributes that you don't happen to have any data on, and even for other time-invariant precinct attributes that you haven't thought of!


            • #7
              Good information.

              I could use the classic DD, but only for one year at a time. In other words, index my treatment/control in that one year, then index pre-period months Jan-May (Post = 0) and post-period months Jun-Aug (Post = 1) in that year. This isn't ideal, but that's why I wanted to exploit more of the variation over time with the "general" DD model.

              Just to be clear. The continuous treatment can simply equal 0 when the intervention is not in effect, then be equal to the 'precise intensity' when it is in place?

              What I notice is that for a treatment intensity variable (e.g., officers assigned to a geographic region), this usually stays the same over the intervention period. So there is variation in the number of officers across precincts, but not much variation over the intervention months within a single precinct. So for months before the intervention the intensity is 0, then it may jump to 35 and stay that way while the intervention is in effect.

              Due to this lack of variation over the treatment months for a given precinct, is this worth exploring?

              Then I noticed a similar concern addressed by another Statalist member. He/she shared a model used in a paper by Acemoglu, Autor, and Lyle (2004):

              y_ist = delta_s + gamma*d1950 + X'_ist*beta + phi*(d1950 * m) + e_ist

              I could technically employ a similar method: take my post-period months and interact them with the intensity variable, i.e., a dose variable reflecting the number of officers assigned to a precinct. However, it wouldn't always be equal to zero; it would vary from month to month. Does the intensity variable have to equal 0 for non-treated precincts for this method to work?

              Hope that makes sense!


              • #8
                The variable needs to reflect reality as closely as possible. If the intervention being off is, in the real world, the equivalent of having 0 officers assigned to an area, then that reality justifies using 0 for the intensity variable in the off-months. If they are not real-world equivalent, then, no, you can't use that approach. The ordinary meaning of the word intensity suggests that 0 intensity and no action being taken are the same thing. But if that's not true in your situation, then you can't go that route.

                I don't think an analysis based on post-period months is sensible for your situation, because your intervention turns on and then off again. So if the intervention was on from July through September, in October you would be in post-period month 4, but really you're in an off period. Also, you said that the intervention lasts for a different number of months in different precincts, no? So that means that post-period 3 could be an on month in one precinct but an off month in another. The basic problem is that "post" really isn't definable here because of the on/off phenomenon and the variation in duration of the intervention.

                By the way, when providing references, please provide complete citations that would enable a reader to actually find the article. It may be that Acemoglu, Autor, and Lyle (2004) is folklore in your discipline and everybody there knows what you're talking about. But this is an international inter-disciplinary forum and many readers will have no clue. So either provide complete citations, or links to an ungated copy of the work in the future.


                • #9
                  A "post" period is well defined. Or, at least I would say so.

                  In a given year, say 2014, all precincts receiving treatment got it for exactly 3 months (the same amount of time). In the next year, say 2015, a different group of precincts (though most were the same) received the intervention for another 3-month period (all precincts in that year received it for a 3-month period). So, from year to year, all precincts get the intervention over the same length of time (3 months); it just starts at a different point depending on the year. Does that make sense?

                  In other words, in a given year (12-month period) there is a strict 3-month post-period (Jun, Jul, and Aug) for 20 jurisdictions. Then next year, the post-period is another strict 3-month period (Jun, Jul, Aug) for another 20 jurisdictions (mostly same precincts but some are different). Standard classical DD models can technically be used in this case, right? They just have to be restricted to one year only.

                  I will try and see how I can work the continuous treatment into such a model.

                  • #10
                    Maybe. If you are confident that in the months subsequent to the intervention (whichever those may be) there are no lingering effects of the intervention that need to "wash out" then, yes, you can do this for any one year. I wouldn't call the variable "post," though, because it really doesn't demarcate a pre- and post-interval. It demarcates an off interval that is pre, an on interval in the middle, and another off interval after the intervention ends. A better name might be just "on_off" or something like that. Stata won't care what you call it, of course, but using suggestive names makes it easier for others to follow your code, and also easier for you to remember what you did when you come back to this after an absence (as almost invariably happens in research).


                    • #11
                      This is all good stuff.

                      Just to put the “general” DD concerns to bed: can the dummy in the more general setting model treatment that begins at different times, and where treatment group composition changes from year to year? I only ask because the “treatment group” is mostly the same each year, but some new precincts get added, while others get removed.

                      To illustrate this simply, one year the intervention may begin in the 9th, 14th, and 22nd Precincts; then in the summer of the following year the program would include the 9th, 14th, and 32nd Precincts. As long as I index those combinations of month-year (Policy_pt = 1) for the treated units across the full panel, the general DD can work? Or should I use only the “always treated” jurisdictions that receive the program in every year?

                      Thank you again! This has been super helpful.


                      • #12
                        In the generalized DID model of your multi-year data, the Policy_pt variable should be 1 in all and only those observations where that precinct has the intervention "on" in that month, and 0 in all others. There is no distinction between precincts that get treated every year and those that only get treated in some years. There is, in fact, from the perspective of this model, no such thing as a treatment group. There are just precincts that may go on and off treatment, and the Policy_pt variable is calculated to reflect that.


                        • #13
                          Thank you for all your help Clyde. My study is coming along well.

                          Another concern involves some robustness checks on the models. As I mentioned earlier, treatment effects may grow, or decay, over time. Because of this, I am experimenting with more dynamic models. For example, I thought it prudent to include leads and lags of the "treatment variable" (Policy_pt) on the right-hand side of my equation. My intuition was to have one lead and two lags of the policy indicator. Is this simply a monthly shift of the policy variable? For example, the static model has a policy dummy equal to 1 during the months June, July, and August for specific precincts in a specific year, 0 otherwise (same as before). The lagged version would just shift the dummy one/two months (e.g., July, August, September AND August, September, October). Is my intuition correct?

                          Also, is it appropriate to specify a dynamic model in the following way:

                          y_pt = precinct fixed-effects + time fixed-effects + Policy_pt*delta_1 + Lead_1*delta_2 + Lag_1*delta_3 + Lag_2*delta_4?

                          This seems simple enough to achieve in Stata. Should I simply use Stata's time series operators?
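                          For concreteness, my rough attempt using time-series operators would look like this (a sketch with assumed variable names, not something I have vetted):

                          ```stata
                          * Requires the panel to be declared first; all variable names are assumptions.
                          xtset precinct monthly_date
                          xtreg crime_rate F1.policy policy L1.policy L2.policy i.monthly_date, fe vce(cluster precinct)
                          ```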

                          Thank you as always!!!


                          • #14
                            As robustness checks, I would do these separately. And I would not include the Policy_pt variable in the model. The idea of a robustness check is to substitute some other time configuration for "the real thing" and see if the effect disappears. So I would do one model with Lead 1, another one with Lag 1, and another one with Lag 2.

                            If the goal here is to incorporate the possibility that the effect grows or decays, that is a different matter from robustness checks. In this case, you could incorporate both the original Policy_pt term and some lags if you want to capture persistence of the effect after the intervention itself ends. Actually, what I would probably do, to make it easier to interpret the results, is to use the original Policy_pt term and then have another term for just September, and another one for just October. That way you would get clean separate estimates of the extent to which the effect carries over in September and October. There are many variations on this theme, and the danger here is that you will just keep trying an endless string of models and end up churning the noise in the system. So you need to have a well-thought out mechanistic understanding of how the intervention is supposed to lead to its effects, and from that you should deduce the extent to which there would be growth, or decay, or persistence after the intervention ends, or possibly even an anticipatory effect. Then build a model that reflects that understanding. Don't just churn a lot of models for this. And, again, this is very different from robustness checks.
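                            A hedged sketch of that September/October idea, written with lags so it generalizes to years where the program ends in a different month (variable names are assumptions):

                            ```stata
                            * Assumed names; requires xtset. carry1/carry2 = 1 in the first/second month
                            * after a precinct's program turns off (e.g., September and October here).
                            xtset precinct monthly_date
                            gen byte carry1 = (L1.policy == 1) & (policy == 0)
                            gen byte carry2 = (L2.policy == 1) & (L1.policy == 0) & (policy == 0)
                            * These enter the regression alongside policy itself.
                            ```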


                            • #15
                              Good information. Yes, this would be a separate model altogether.

                              My plan is to estimate leads and lags in a completely separate "two-way" fixed effects model. Ultimately, my goal would be to plot these estimates (i.e., event study).

                              Could a lead, Policy_pt, and some lags be appropriate for one model? If so, when Policy_pt is included, isn't it considered the "immediate effect" of the policy on crime?
