
  • #16
    Yes, if you have a lead, Policy_pt, and some lags, you are modeling, respectively, an anticipatory effect, the immediate policy effect itself, and then persistent effects after the intervention ends.
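
    A minimal sketch of such a specification in Stata (the outcome and panel variable names here are assumed, not from the thread):

    Code:
    * assumes a monthly precinct panel and a 0/1 Policy_pt indicator
    xtset precinct month
    * F1. = one-month lead (anticipation); L1./L2. = post-intervention persistence
    regress crime_rate F1.Policy_pt Policy_pt L1.Policy_pt L2.Policy_pt ///
        i.precinct i.month, vce(cluster precinct)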



    • #17
      Good information. I have another question.

      What is your opinion on using population data across a much longer time series? For example, I normalize the crime data I have for precincts by their respective population sizes. However, most of my data is from 2012-2016, and the population data is from the American Community Survey (ACS). Should I use population data from 2010, or estimates from 2015? This might seem trivial for only 5 years of data, but it could be more of a concern should I acquire more longitudinal data.

      Any thoughts?
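
      To make the setup concrete: one workaround sometimes used is to linearly interpolate between the two vintages rather than pick one. A purely illustrative sketch, with all variable names assumed:

      Code:
      * interpolate precinct population between the 2010 and 2015 figures
      gen pop = pop2010 + (year - 2010) / 5 * (pop2015 - pop2010)
      gen crime_rate = 100000 * crimes / pop   // crimes per 100,000 residents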



      • #18
        That's a complicated question and I'm not sure of the answer.

        The decennial census data is based on an actual (attempted) count. It is well known that there are significant undercounts in areas with large concentrations of racial minorities and poor people, and New York has many such areas. The estimates, by contrast, are based on population models that the Census Bureau uses. Those models are very well developed and are probably more accurate than the count-based decennial census, at least when applied to large areas like the larger states or the entire country, and for years not too far from the last decennial census. But my understanding is that the models are not as good in small areas, and police precincts would, I imagine, be small areas.

        So I really don't know what to tell you. This issue does not come up very often in my own work, and when it does, I consult others with more demographic expertise about what to do. There are demographers who are active on this Forum, and perhaps one of them will chime in.



        • #19
          Interesting, Clyde.

          As for my final model (see post 14), I tried several models including leads and lags. I found rather stable results using the original Policy_pt variable, one lead, and two separate dummies, for October 2014 and September 2015, to capture persistent effects. These are dummies for one month only, the month after the intervention takes place (in one year the intervention starts a month earlier, so the two months differ).
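
          A hypothetical sketch of that specification (variable names assumed; month is a Stata %tm date, and the panel has been -xtset- as in the earlier sketch):

          Code:
          gen post2014 = (month == tm(2014m10))   // month after the 2014 intervention
          gen post2015 = (month == tm(2015m9))    // month after the 2015 intervention
          regress crime_rate F1.Policy_pt Policy_pt post2014 post2015 ///
              i.precinct i.month, vce(cluster precinct)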

          My last concern is the interpretability of the estimates. Is a single dummy for the one month after the program interpreted as the persistent, lasting deterrent effect beyond the intervention phase? And does the interpretation of the original Policy_pt change once these lagged treatment variables are incorporated?

          And finally, I did try many leads and many lags (I obviously got carried away). It seems that when I include too many lags in one equation, the results become less stable (significant effects with one lag, but they wash away as I add more). Is this because the intervention period is just "too short" (i.e., three months), or am I just picking up a lot of noise?

          It may be hard to say at this point, but your thoughts are always helpful!

          Anyway, hope things are going well for you.





          • #20
            Yes, and yes.



            • #21
              Hello,

              It has been a while since I posted on this topic. My question concerns adjusting my standard errors for clustering.

              Using -areg-, I incorporated all precincts ("precinct" fixed effects, with one precinct omitted to avoid collinearity) and 59 month dummies (one month omitted, also to avoid collinearity).

              To get right to the point: if this model is estimated via OLS (i.e., the standard LSDV estimator [see Post 13 for reference]), should I care about the t-values on the individual "precinct" dummies when employing cluster-robust standard errors? When I cluster on "precinct," those t-values become overwhelmingly large. I am only reporting estimates for the treatment indicator(s).
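
              For reference, a minimal sketch of two ways to fit the same LSDV model (variable names assumed):

              Code:
              * -areg- absorbs the precinct dummies and does not report them
              areg crime_rate Policy_pt i.month, absorb(precinct) vce(cluster precinct)
              * explicit LSDV form, which reports each precinct dummy
              regress crime_rate Policy_pt i.month i.precinct, vce(cluster precinct)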

              All thoughts/comments/concerns are helpful.

              Thank you.

