  • placebo test using -permute-

    Dear Statalist,

    I'm trying to do a placebo test in a DID by randomly assigning "treat" to my sample of firms, re-running the regression, and repeating that process, e.g., 500 times. I read about -permute- in several other posts: https://www.statalist.org/forums/for...f-coefficients, https://www.statalist.org/forums/for...g-coefficients, https://www.statalist.org/forums/for...pecific-output, https://www.statalist.org/forums/for...-to-a-variable. My code and results are as follows:
    Code:
    cap erase "simulations.dta"
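    * -permute treat- reshuffles the values of treat across observations before
    * each replication, re-runs the regression below, and saves b, se, and df
    * from each of the 500 replications to simulations.dta.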
    
    permute treat b = _b[treat] se = _se[treat] df = e(df_r), ///
        saving("simulations.dta") reps(500): ///
        reg y treat post DID asset marketcap firm_age ROA leverage ///
        i.industry i.year, vce(cluster industry)
    Code:
    Permutation replications (500)
    ----+--- 1 ---+--- 2 ---+--- 3 ---+--- 4 ---+--- 5 
    ..................................................    50
    ..................................................   100
    ..................................................   150
    ..................................................   200
    ..................................................   250
    ..................................................   300
    ..................................................   350
    ..................................................   400
    ..................................................   450
    ..................................................   500
    
    Monte Carlo permutation results                 Number of obs     =      1,370
    
          command:  regress y treat post DID asset marketcap firm_age ROA leverage i.industry i.year, vce(cluster industry)
                b:  _b[treat]
               se:  _se[treat]
               df:  e(df_r)
      permute var:  treat
    
    ------------------------------------------------------------------------------
    T            |     T(obs)       c       n   p=c/n   SE(p) [95% Conf. Interval]
    -------------+----------------------------------------------------------------
               b |   .0066014     255     500  0.5100  0.0224  .4652348   .5546463
              se |   .0142013      14     500  0.0280  0.0074  .0153906   .0465333
              df |         17     500     500  1.0000  0.0000  .9926494          1
    ------------------------------------------------------------------------------
    Note: Confidence intervals are with respect to p=c/n.
    Note: c = #{|T| >= |T(obs)|}
    I notice from other examples that b should have a Test column detailing the lower, upper, and two-sided p-values, whereas mine has no such column (see the example output below). So I wonder: 1) why is that the case, and how can I obtain the lower, upper, and two-sided p-values; and 2) how should I interpret my result, i.e., what does the insignificant p-value of 0.5100 mean?
    Code:
    Monte Carlo permutation results                    Number of observations =  70
    Permutation variable: did                          Number of permutations = 500
    
          Command: reghdfe y did, absorb(country year) vce(robust)
             beta: _b[did]
               se: _se[did]
               df: e(df_r)
    
    -------------------------------------------------------------------------------
                 |                                               Monte Carlo error
                 |                                              -------------------
               T |    T(obs)       Test       c       n      p  SE(p)   [95% CI(p)]
    -------------+-----------------------------------------------------------------
            beta | -2.519512      lower       1     500  .0020  .0020  .0001  .0111
                 |                upper     499     500  .9980  .0020  .9889  .9999
                 |            two-sided                  .0040  .0028  .0000  .0095
                 |
              se |  1.285631      lower     499     500  .9980  .0020  .9889  .9999
                 |                upper       1     500  .0020  .0020  .0001  .0111
                 |            two-sided                  .0040  .0028  .0000  .0095
                 |
              df |        53      lower     500     500 1.0000  .0000  .9926 1.0000
                 |                upper     500     500 1.0000  .0000  .9926 1.0000
                 |            two-sided                 1.0000  .0000      .      .
    -------------------------------------------------------------------------------
    Notes: For lower one-sided test, c = #{T <= T(obs)} and p = p_lower = c/n.
           For upper one-sided test, c = #{T >= T(obs)} and p = p_upper = c/n.
           For two-sided test, p = 2*min(p_lower, p_upper); SE and CI approximate.
    Thanks a lot for any kind of help!

  • #2
    The lower and upper p-values are for one-sided tests. Do you have a one-sided test? I think -permute- does not report those because the null is tested against a two-sided alternative.

    You interpret the result of -permute- as saying that your observed statistic is not unusual, that is, that your treatment has no effect.
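
    If you want the one-sided and two-sided p-values anyway, you can compute them by hand from the file that saving() wrote, since it holds one row per replication. A minimal sketch, assuming simulations.dta contains the permuted coefficient in variable b, and plugging in the observed _b[treat] (.0066014) from your output:
    Code:
    * One- and two-sided permutation p-values computed by hand from the
    * replications saved by -permute-. The observed coefficient is taken
    * from the output in #1.
    use "simulations.dta", clear
    scalar b_obs = .0066014
    quietly count if b <= b_obs
    scalar p_lower = r(N)/_N
    quietly count if b >= b_obs
    scalar p_upper = r(N)/_N
    scalar p_two = 2*min(p_lower, p_upper)    // 2*min convention, as in the second output
    display "lower = " p_lower "  upper = " p_upper "  two-sided = " p_two
    
    * Your own output's two-sided convention, c = #{|T| >= |T(obs)|}:
    quietly count if abs(b) >= abs(b_obs)
    display "two-sided (|T| convention) = " r(N)/_N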



    • #3
      Hi Joro,

      Thanks a lot for your reply.

      You interpret the result of -permute- as saying that your observed statistic is not unusual, that is, that your treatment has no effect.
      So do you mean 1) my DID does not pass the placebo test, as the treatment has no effect, or 2) my DID passes the placebo test, as the "fake" treatment group has no effect?



      • #4
        Originally posted by Yun Cheng:
        So do you mean 1) my DID does not pass the placebo test, as the treatment has no effect, or 2) my DID passes the placebo test, as the "fake" treatment group has no effect?
        As a disclaimer, I do not know whether you can do a placebo test the way you are doing it. I have never done these DID placebo tests, and I am not an expert in this area.

        The way I interpret this is 1): your treatment does not pass the placebo test, because with the actual data your statistic is not unusual compared to the distribution of the statistic when the treatment is randomly assigned.
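
        For intuition, here is roughly the loop that -permute- automates: randomly reassign the treatment label, re-estimate, and collect the placebo coefficients. A minimal sketch following the specification in #1; the names placebo.dta, treat_perm, and DID_perm and the seed are made up for illustration, and it assumes the interaction is DID = treat*post:
        Code:
        * Illustrative placebo loop (what -permute- automates). Assumes
        * DID = treat*post; treat_perm, DID_perm, and placebo.dta are
        * invented names. Treatment is reshuffled across observations,
        * holding the number of treated observations fixed.
        set seed 12345
        quietly count if treat == 1
        local n1 = r(N)
        tempname sim
        postfile `sim' double b_placebo using "placebo.dta", replace
        forvalues i = 1/500 {
            quietly {
                capture drop u treat_perm DID_perm
                gen double u = runiform()
                sort u                              // random order
                gen byte treat_perm = (_n <= `n1')  // random "fake" treated group
                gen byte DID_perm = treat_perm*post
                reg y treat_perm post DID_perm asset marketcap firm_age ROA ///
                    leverage i.industry i.year, vce(cluster industry)
                post `sim' (_b[treat_perm])
            }
        }
        postclose `sim'
        * Compare the actual _b[treat] with the distribution of b_placebo:
        * a p-value like 0.51 means the actual estimate sits well inside it.
        Since the treatment here is at the firm level, it would arguably be better to reshuffle treatment across firms rather than across firm-year observations, but the logic of the comparison is the same.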



        • #5
          I see. Thanks for your explanation.
