  • #46
    Originally posted by FernandoRios View Post
After jwdid and your dep variable you can interact all variables as usual; however, you can't do that with gvar and tvar. Those have to remain on their original time scales. Interactions would make no sense.
Based on the jwdid help file, my understanding was that gvar and tvar can be interacted with an independent variable, which would give a result similar to the DiD*x effect in the TWFE model.
    Code:
     xtvar(varlist)      Variables declared are only interacted with the time variable
    
    xgvar(varlist)      Variables declared are only interacted with the group/cohort variable.
If interaction is not possible, what is the purpose of xtvar and xgvar?

Edit: I think my original post was unclear. I meant that gvar and tvar are interacted only with the variables declared in xtvar() and xgvar(). Sorry for the confusion.
    Last edited by Md Shoeb; 10 Sep 2024, 04:58.

    Comment


    • #47
That wasn't clear from your previous message.
So, whenever you add a control variable, the default is to include it for treatment heterogeneity (time-cohort # variables) as well as in interactions with the time and cohort dummies.
Sometimes, however, you may not want it to be directly interacted with GxT (no heterogeneity, but allowing a more flexible functional form in the pre-treatment period). In those cases, you use xgvar and xtvar.
Please see the paper cited in the help file. We tried to include there the exact specification of what happens when you use those options.
      F
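As a rough illustration of the distinction, a minimal sketch assuming the mpdta-style example dataset from the jwdid help file (the control x1 and all variable names are illustrative):

```stata
* Default: the control x1 enters the treatment-heterogeneity (cohort#time)
* interactions as well as the time and cohort interactions
jwdid lemp x1, ivar(countyreal) tvar(year) gvar(first_treat)

* Restricted: x1 is interacted only with the time dummies (xtvar) and
* cohort dummies (xgvar), not with the GxT treatment cells
jwdid lemp, ivar(countyreal) tvar(year) gvar(first_treat) xtvar(x1) xgvar(x1)
```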

      Comment


      • #48
        Originally posted by FernandoRios View Post
That wasn't clear from your previous message.
So, whenever you add a control variable, the default is to include it for treatment heterogeneity (time-cohort # variables) as well as in interactions with the time and cohort dummies.
Sometimes, however, you may not want it to be directly interacted with GxT (no heterogeneity, but allowing a more flexible functional form in the pre-treatment period). In those cases, you use xgvar and xtvar.
Please see the paper cited in the help file. We tried to include there the exact specification of what happens when you use those options.
        F
Thank you for the response. I will definitely look at the papers cited. However, I am still not very clear. Am I incorrect to assume that adding xtvar and xgvar is equivalent to DiD##x1 in the TWFE model? If so, are there any models that support interaction of x1 with the DiD term?

        Comment


        • #49
No.
Consider the basic DiD TWFE (and for a second assume it is correct):

y = a0 + a1*post + a2*treat + a3*post*treat + FE_t + FE_i + e

If you use gvar, it will do something like

y = a0 + a1*post + a2*treat + a3*post*treat + b1*treat*Xs + FE_t + FE_i + e

but the treatment effect a3 is not affected by the new interaction (except through the change in specification).
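In plain TWFE terms, the two equations above could be estimated directly; a sketch with illustrative names (post, treat, x1), not the jwdid implementation itself:

```stata
* Basic 2x2 DiD: a3 is the coefficient on post#treat
regress y i.post##i.treat

* gvar-style control: x1 interacted with the cohort indicator only;
* the post#treat coefficient remains the treatment effect
regress y i.post##i.treat c.x1#i.treat
```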

          Comment


          • #50
            Originally posted by FernandoRios View Post
No.
Consider the basic DiD TWFE (and for a second assume it is correct):

y = a0 + a1*post + a2*treat + a3*post*treat + FE_t + FE_i + e

If you use gvar, it will do something like

y = a0 + a1*post + a2*treat + a3*post*treat + b1*treat*Xs + FE_t + FE_i + e

but the treatment effect a3 is not affected by the new interaction (except through the change in specification).
So the ATTs aggregated in jwdid only include treat*post, even if xtvar and xgvar are included in the specification. Suppose we are interested in both treat*post and treat*post*x1; is there any way to estimate these two coefficients?

            Comment


            • #51
If you are interested in treat*post*x1, the standard syntax applies:

jwdid y x, ....

which estimates

y = a0 + a1*post + a2*treat + a3*post*treat*X + b1*treat*Xs + b2*post*Xs + FE_t + FE_i + e

but heterogeneity is only possible for simple estimates.
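Following this, the usual aggregation commands from the jwdid help file can then be applied; a minimal sketch (dataset and variable names illustrative):

```stata
jwdid lemp x1, ivar(countyreal) tvar(year) gvar(first_treat)
estat simple   // aggregate ATT
estat event    // event-study aggregation relative to treatment timing
```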

              Comment


              • #52
                Hi Fernando,

Thanks for this great package. Related to #15 in this thread, is there now a way to test for pre-trends when "not yet" treated units are used as controls? That would be very helpful. Thank you!

                Comment


                • #53
                  Originally posted by FernandoRios View Post
If you are interested in treat*post*x1, the standard syntax applies:

jwdid y x, ....

which estimates

y = a0 + a1*post + a2*treat + a3*post*treat*X + b1*treat*Xs + b2*post*Xs + FE_t + FE_i + e

but heterogeneity is only possible for simple estimates.
So if I run jwdid y x, .....
will the aggregation (estat simple or estat event) be on a3*post*treat*X?

                  Comment


                  • #54
                    Originally posted by Fabrizio Leone View Post
                    Hi Fernando,

Thanks for this great package. Related to #15 in this thread, is there now a way to test for pre-trends when "not yet" treated units are used as controls? That would be very helpful. Thank you!
Not possible. If you use not-yet, there is no way to differentiate the treated from the not-yet treated and the timing. You still need to use never,
or use other approaches for pre-treatment (like running models for each pre-treatment period).

Md Shoeb
See the paper; it explains how things work.

                    Comment


                    • #55
                      Originally posted by FernandoRios View Post

Not possible. If you use not-yet, there is no way to differentiate the treated from the not-yet treated and the timing. You still need to use never,
or use other approaches for pre-treatment (like running models for each pre-treatment period).

Md Shoeb
See the paper; it explains how things work.
                      Okay. Thank you Fernando for giving so much of your time.

                      Comment


                      • #56
                        Dear Fernando,

I noticed that the results from jwdid and xthdidregress twfe can differ significantly, especially when the panel is unbalanced or there are missing observations. I suspect the discrepancies may arise from how these commands handle missing data. I have attached my code, which uses the example dataset from the jwdid help file. I would greatly appreciate it if you could help clarify the source of these differences.

Code and Observations

                        I conducted comparisons across the following setups:
                        1. Full sample: Results from jwdid with "notyet" and "never" options compared with xthdidregress twfe.
                        2. Sub-sample with missing dependent variables: After introducing random missing values to the dependent variable.
                        3. Sub-sample with additional controls: Results with time-constant control variables (i.X) added to the models.
                        Summary of Findings
                        • For the full sample, results for jwdid (with notyet) and xthdidregress twfe (with never) align well, while jwdid (with never) yields slightly different coefficients. Stata's xthdidregress twfe forced notyet to never, though.
                        • For the sub-sample with missing dependent variables, differences between jwdid and xthdidregress become more pronounced.
                        • With additional controls, the discrepancies persist, particularly when comparing jwdid with csdid and xthdidregress aipw.
                        Question

                        Could you provide insights into:
                        1. How jwdid and xthdidregress twfe handle missing observations differently, particularly in unbalanced panels?
                        2. Why the results diverge when controls are added or when samples include missing dependent variables?
                        3. Whether additional adjustments to my syntax are necessary for more consistent comparisons?
                        Attached Files
                        Last edited by Sung Ju Cho; 12 Dec 2024, 02:17.

                        Comment


                        • #57
Dear Sung Ju,
Thank you for your comments regarding jwdid and Stata's xthdidregress. I cannot say much about the internal details of their implementation, since I have only made small checks of their work against mine, and we have had some back and forth regarding how it should or should not be implemented. Beyond that, the decisions made for jwdid are mine alone, guided by the papers by Wooldridge, but based on my own interpretations.
I'll try to go over your example, but so far one point may be of interest. Stata's never is different from jwdid's never. In fact, last time I checked, they do not have an equivalent to "never", so those results will not match.
For the rest, I will have to look at what is going on.
Edit:
One important aspect, though, that may explain the differences is how you define gvar.
With jwdid, gvar is predefined, so regardless of how you drop the data, gvar does not change for a given unit.
However, with xthdidregress, gvar is not predefined and is instead determined by "treatvar". Thus, depending on which observations are dropped, gvar could change, causing a difference in the estimations.
F
                          Last edited by FernandoRios; 13 Dec 2024, 07:16.
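The point about a derived cohort variable shifting when observations are dropped can be sketched as follows (variable names are illustrative, not the internals of either command):

```stata
* Predefined cohort (jwdid-style): stored once per unit, unaffected by
* later sample restrictions
bysort id (year): egen gvar = min(cond(treat == 1, year, .))

* Derived cohort (xthdidregress-style): inferred from treatvar on the
* estimation sample; dropping a unit's first treated year makes its
* inferred cohort appear one period later
drop if treat == 1 & year == gvar
bysort id (year): egen gvar_derived = min(cond(treat == 1, year, .))
```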

                          Comment


                          • #58
                            Originally posted by FernandoRios View Post
Dear Sung Ju,
Thank you for your comments regarding jwdid and Stata's xthdidregress. I cannot say much about the internal details of their implementation, since I have only made small checks of their work against mine, and we have had some back and forth regarding how it should or should not be implemented. Beyond that, the decisions made for jwdid are mine alone, guided by the papers by Wooldridge, but based on my own interpretations.
I'll try to go over your example, but so far one point may be of interest. Stata's never is different from jwdid's never. In fact, last time I checked, they do not have an equivalent to "never", so those results will not match.
For the rest, I will have to look at what is going on.
Edit:
One important aspect, though, that may explain the differences is how you define gvar.
With jwdid, gvar is predefined, so regardless of how you drop the data, gvar does not change for a given unit.
However, with xthdidregress, gvar is not predefined and is instead determined by "treatvar". Thus, depending on which observations are dropped, gvar could change, causing a difference in the estimations.
F
                            Thank you for the detailed response and for pointing out the key differences between how jwdid and xthdidregress handle "never" and the definition of GVAR. Your clarification is particularly helpful in understanding the potential sources of divergence in the estimations.

                            I’ll revisit my code with this in mind and examine how these differences might be influencing my results. I also look forward to any additional insights you might provide after reviewing the example.

                            Thanks again for your time and expertise!

                            Comment
