  • Difference in differences with Fixed Effect Model

    Hi everyone, I have several questions about my difference-in-differences research.

    1. The problem is that I cannot estimate my difference-in-differences model using dummy variables (treatment, year, and treatment_year). Stata always omits my treatment variable, saying there is collinearity between treatment and the other dummy variables. Is there any solution for this problem, or how can I estimate the dummy interaction variable with fixed effects? I want to get all three estimates. (A sketch of my commands follows the list below.)

    For information:
    1. treatment = 1 for areas with the program (treated: 254, control: 242)
    2. year = 1 for 2017, 0 for 2013
    3. treatment_year = the interaction between the two dummy variables above
    4. N = 498 x 2 (years) = 984
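
    Roughly, the commands I am running look like this (here depvar and id are just placeholder names for my actual outcome and panel identifier):

    Code:
    gen treatment_year = treatment * year
    xtset id year
    xtreg depvar treatment year treatment_year, fe   // here Stata omits treatment (collinear with the fixed effects)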

    2. Also, I have estimated the data with random effects (xtreg treatment_year treatment year), but the result is quite different from that of the command "xtreg dependen i.treatment#i.year". Which one is better for my study?

    I have read many posts on DID topics, but I have not been able to solve my problem so far.

    Thank you.


    Best, Yusuf

  • #2
    If you use a fixed effects model, your treat variable will be collinear with the fixed effect: this is because each unit in your study (firm, state, whatever it is...) is either in the treatment group or not, and that is the same for every observation for that unit. Consequently Stata will omit the treat variable. This is not a problem. Your model is still valid and the coefficient of the interaction is still interpretable as the DID estimator of the intervention effect.
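
    To make that concrete, here is a hypothetical simulated example (the data and all names are made up purely for illustration, not taken from your study) showing that a time-invariant treatment dummy is dropped once the fixed effects are included:

    Code:
    * hypothetical simulated panel: 254 treated and 242 control units, two periods
    clear
    set seed 12345
    set obs 496
    gen id = _n
    gen treatment = _n <= 254
    expand 2
    bysort id: gen year = _n - 1
    gen y = 1 + 0.5*treatment + 0.3*year + 0.8*treatment*year + rnormal()
    xtset id year
    * 1.treatment is reported as omitted because it is collinear with the unit fixed effects
    xtreg y i.treatment##i.year, fe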

    If you use a random effects model, there is no such problem.

    There is nothing unusual about getting very different results with fixed and random effects. This is a general situation and has nothing to do with DID estimation itself. Fixed and random effects estimators estimate different things. A fixed-effects model estimates the within unit effects of variables, whereas random-effects models estimate a parameter that averages the within and between unit effects of variables (basically the random effects model presumes that the within and between effects are the same and estimates a common value.) When the fixed and random effects models produce drastically different results, that is a sign that the assumption that within- and between- effects are the same is incorrect and more or less invalidates the random effects results. (I'm somewhat oversimplifying this discussion in the interests of brevity.)
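
    If you want a formal check rather than just eyeballing the two sets of results, one standard approach (shown only as a sketch here, using the same placeholder specification as below) is a Hausman test comparing the fixed- and random-effects estimates:

    Code:
    * sketch of a Hausman comparison of the fe and re estimates
    xtreg outcome_variable i.treat##i.year, fe
    estimates store fixed
    xtreg outcome_variable i.treat##i.year, re
    estimates store random
    hausman fixed random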

    So it sounds like you need to go with your fixed effects results, and there is no reason to be concerned about the omission of your treat variable.

    All of that said, interaction models are in general difficult for people to understand. Stata's -margins- command enables you to get more intuitive results. But to use it, you must re-do your regression using Stata's factor variable notation. Do not use your hand-calculated treatment_year variable. Instead, run:

    Code:
    xtreg outcome_variable i.treat##i.year /*perhaps some other covariates*/, fe /*perhaps vce(cluster some_unit)*/
    Then to interpret your results run:
    Code:
    margins treat#year
    to get the expected values of outcome_variable in each group in each year, and run
    Code:
    margins treat, dydx(year)
    to get the expected change in outcome_variable associated with the change of year from 2013 to 2017 in each group.

    The treatment effect is estimated by the coefficient of 1.treat#1.year in the -xtreg- output (not the -margins- output).

    If you do that you will have all of the relevant parameter estimates without having to puzzle out the algebra underlying the interpretation of interaction models.



    • #3
      Originally posted by Clyde Schechter
      There is nothing unusual about getting very different results with fixed and random effects. This is a general situation and has nothing to do with DID estimation itself. Fixed and random effects estimators estimate different things. A fixed-effects model estimates the within unit effects of variables, whereas random-effects models estimate a parameter that averages the within and between unit effects of variables (basically the random effects model presumes that the within and between effects are the same and estimates a common value.) When the fixed and random effects models produce drastically different results, that is a sign that the assumption that within- and between- effects are the same is incorrect and more or less invalidates the random effects results. (I'm somewhat oversimplifying this discussion in the interests of brevity.)
      Thanks Clyde for the theoretical explanation, it's clear to me.

      But for the Stata code, I have tried your suggestion and the result is still the same as the old one when I run "margins treat#year":
      [Attached screenshot: Capture.JPG]

      The result is different if I use random effects in the estimation. In my opinion, Stata cannot estimate the fixed-effects margins because of the omitted variable.



      • #4
        Yes, sorry. To get this code to run, you have to add the -noestimcheck- option to the -margins- command. That is not something that should be done routinely or lightly--there is a reason for checking these things, after all. But in this context, a fixed effects model where treatment is collinear with the fixed effects, it is OK to use it.
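
        For example, the two -margins- commands from #2 become:

        Code:
        margins treat#year, noestimcheck
        margins treat, dydx(year) noestimcheck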



        • #5
          Clear, Clyde! Thanks.
