
  • Sebastian Kripfganz
    replied
    Could be that I am wrong. If you follow what is said in these blogs, you are probably on the safe side. I lack experience on Granger causality tests in Stata and thus I am afraid that I cannot be of any help in that respect. Sorry!



  • Pandi Sarr
    replied
    Sebastian Kripfganz
    I see, ok. Thanks, I will try that.

    There are some blogs that suggest doing that, though.
    Would it make sense if I estimated each variant of the model by basic OLS and then ran the Wald test for each model, since the VAR only allows a set number of lags and the lags cannot be zero?
    Also, when I try to put the differenced variables in the exogenous variable list and run the Granger causality test, I get the following output; but this doesn't make sense, does it?
    Code:
    var D.lnY, lags(1/2) exog(dum D.lnA DL.lnA D.lnB DL.lnB D.lnC DL.lnC D.lnD DL.lnD L.lnY L.lnA L.lnB L.lnC L.lnD) small dfk
    
       Granger causality Wald tests
      +------------------------------------------------------------+
      |  Equation   Excluded |      F      df    df_r    Prob > F  |
      |----------------------+-------------------------------------|
      |         _        ALL |  5.2104     16      27     0.0032   |
      +------------------------------------------------------------+
    Thanks again,



  • Sebastian Kripfganz
    replied
    Babacar Mbengue:
    First and foremost, you need to reduce the number of lags in your model either with the option lags() or the option maxlags(). Given the small number of observations and the relatively large number of regressors, you cannot have a model with 4 lags of each of the regressors. This just results in too many parameters relative to the sample size. The EC model is just a reformulation of the ARDL model. If you correct one, the other is automatically corrected as well.
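    As a sketch with your variables (the lag limit of 2 is purely illustrative; choose whatever your sample size can support):
    Code:
    * illustrative lag limit; AIC selects the lag orders up to maxlags()
    ardl lfbcf_priv ltxx lfbcf_pub lide ldebt_ext ldef_fbcf_priv, maxlags(2) aic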

    The bounds test critical values for the t-statistic are only tabulated for the cases with unrestricted deterministic components, i.e. without the ardl option restricted. But actually, the restriction does not matter for the t-statistic. For case 2 you could just use the critical values for the t-statistic from case 3, and similarly for case 4 you can use the critical values from case 5.
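    As a sketch (with hypothetical variables y, x1, x2): estimate case 2 with a restricted constant, then compare the t-statistic of the speed-of-adjustment coefficient by hand against the case 3 critical values:
    Code:
    * hypothetical variables; case 2: restricted constant
    ardl y x1 x2, ec restricted
    * F-statistic from the bounds test; compare the t-statistic of the
    * speed-of-adjustment coefficient with the case 3 critical values by hand
    estat btest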



  • Sebastian Kripfganz
    replied
    @Pandi Sarr:
    Without seeing the output, my guess is that the actual lag order for lnT in the ARDL level representation is equal to 1 (not 0). For the second specification, the option ec1 would add one lag for the variable lnT to be able to express the long-run relationship in terms of the t-1 variables (even though the maximum number of lags is set to zero). However, there would then indeed be a restriction imposed on the coefficient of this extra lag, which manifests itself in the observation that the short-run coefficient should equal the negative of the product of the corresponding long-run coefficient and the speed-of-adjustment coefficient, i.e. \(\omega = -\alpha \theta\) on slide 9 of my Stata Conference presentation. All the other coefficients remain the same as if the model were estimated without that extra lag. Compare with the results obtained with the ec option instead.
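    For instance, re-using the specification from your question:
    Code:
    * ec1 adds one lag of lnT and restricts its coefficient
    ardl lnY lnN lnL lnT, trendvar(YR) lags(2 1 1 0) exog(DumP) ec1
    * ec expresses the long-run relationship in terms of the time-t regressors
    ardl lnY lnN lnL lnT, trendvar(YR) lags(2 1 1 0) exog(DumP) ec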

    Originally posted by Pandi Sarr View Post
    So if I use ARDL and change the dependent variable and run it 4 times, once for each variable that I have, it wouldn't yield results with sufficient evidence against a long-run relationship (given that the ardl bounds test fails to reject the null)?
    You should not do that. An underlying assumption of the single-equation ARDL / EC model is that there exists at most one cointegrating relationship that involves a given dependent variable. That means the dependent variable in one model should not appear as a regressor in another; there would be an inherent endogeneity problem. The solution would again be to estimate a VAR / VEC model, as sketched below.
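    As a sketch, assuming your four variables are lnY, lnN, lnL, and lnT (the lag order is illustrative):
    Code:
    * one system estimation instead of four single-equation models
    vec lnY lnN lnL lnT, lags(2)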
    Last edited by Sebastian Kripfganz; 04 Nov 2016, 10:37.



  • Babacar Mbengue
    replied
    Hi Sebastian Kripfganz
    Originally posted by Sebastian Kripfganz View Post
    Sorry, my fault. Indeed, your t-statistic is larger than the critical value. But the reason is once again that the speed-of-adjustment coefficient exceeds zero. The t-test is designed for a one-sided hypothesis test based on the assumption that the speed-of-adjustment coefficient falls into the range [-1, 0]. Under the null hypothesis, it is zero. Under the alternative hypothesis, it is negative. It does not make sense to apply the bounds test to the t-statistic if the latter has a positive sign. The model is simply misspecified and/or poorly estimated.
    Regarding the misspecification, which model should I correct: the ARDL model or the conditional EC model? Sometimes the ARDL model is well specified but the conditional EC model is not. I also notice that estat btest sometimes does not report the t-statistic; what happens in that case?



  • Pandi Sarr
    replied
    Sebastian Kripfganz
    Thanks for your quick response!
    So if I use ARDL and change the dependent variable and run it 4 times, once for each variable that I have, it wouldn't yield results with sufficient evidence against a long-run relationship (given that the ardl bounds test fails to reject the null)?

    Also, I find that the ARDL coefficients of the following models differ even though they end up with the same lag orders; their corresponding bounds tests also give different results:
    Code:
    ardl lnY lnN lnL lnT, trendvar(YR) aic maxlags(2) ec1 exog(DumP) regstore(ec1aic2)
    ardl lnY lnN lnL lnT, trendvar(YR) aic maxlags(2 1 1 0) ec1 exog(DumP) regstore(ec1aic1)



  • Sebastian Kripfganz
    replied
    The ardl command is not directly suitable for Granger causality tests because it includes the regressors not only lagged but also contemporaneously. Given that you have a multivariate model, I do not see what would be wrong with vargranger.
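    As a sketch with the variables from your model (whether to specify them in levels or in first differences depends on their integration properties):
    Code:
    * VAR with the dummy as exogenous, then pairwise Granger causality tests
    var lnY lnN lnL lnT, lags(1/2) exog(DumP)
    vargranger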

    The options ec and ec1 for ardl only affect how the coefficient estimates are displayed, namely in the error-correction representation. No restrictions are imposed by specifying either of these options. To impose a restricted intercept or time trend, you need to use the restricted option.
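    For example (a sketch; restricted constant in case 2, restricted trend in case 4):
    Code:
    * case 2: restricted constant
    ardl lnY lnN lnL lnT, lags(2 1 1 0) ec1 restricted
    * case 4: restricted trend
    ardl lnY lnN lnL lnT, trendvar(YR) lags(2 1 1 0) ec1 restricted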

    For postestimation commands such as bgodfrey or dwatson, you need to store the estimation results with the regstore() option, as you have done in your example. You can then recover the underlying regress estimates and use all available regress postestimation commands, e.g.
    Code:
    ardl lnY lnN lnL lnT, trendvar(YR) lags(2 1 1 0) ec1 exog(DumP) regstore(ec1aic1)
    estimates restore ec1aic1
    estat bgodfrey
    estat dwatson



  • Pandi Sarr
    replied
    Hi,
    can anybody help me...

    How can I go about doing a Granger causality test for an ARDL model, given that my model is:

    Code:
    ardl lnY lnN lnL lnT, trendvar(YR) lags(2 1 1 0) ec1 exog(DumP) regstore(ec1aic1)
    Specifically, what is the code for it? vargranger only allows Granger causality tests after VAR models.

    Also, Pesaran does the bounds testing on a conditional ECM with unrestricted coefficients, but in Stata the bounds testing only works when the option ec or ec1 is enabled. Wouldn't this make the coefficients restricted? And are there any postestimation commands that can be used, like bgodfrey, dwatson, etc.?
    Much appreciated
    Last edited by Pandi Sarr; 03 Nov 2016, 21:16.



  • Sebastian Kripfganz
    replied
    Sorry, my fault. Indeed, your t-statistic is larger than the critical value. But the reason is once again that the speed-of-adjustment coefficient exceeds zero. The t-test is designed for a one-sided hypothesis test based on the assumption that the speed-of-adjustment coefficient falls into the range [-1, 0]. Under the null hypothesis, it is zero. Under the alternative hypothesis, it is negative. It does not make sense to apply the bounds test to the t-statistic if the latter has a positive sign. The model is simply misspecified and/or poorly estimated.



  • Babacar Mbengue
    replied
    Sebastian Kripfganz

    Yes, I reduced the number of lags of my independent variables. I think I now understand the bounds test clearly. But in my example I have t > critical value for the I(0) regressors; according to the output, I should accept the null hypothesis. Thanks for your prompt answers. I am writing a graduate thesis; may I cite you in my document? One other thing: can we test the stability of the model (inverse roots of the AR/MA polynomials)?
    Last edited by Babacar Mbengue; 03 Nov 2016, 12:44.



  • Sebastian Kripfganz
    replied
    Did you reduce the number of your estimated coefficients to get a meaningful estimate of the speed-of-adjustment coefficient in the [-1, 0] range? The t-statistic from your last output seems to be the same as the t-statistic of the speed-of-adjustment coefficient from your earlier regression output.

    Please also have a look at my presentation at this year's Stata Conference, in particular slide 10.

    You would first compare the F-statistic to its critical values. In your example, you clearly reject the null hypothesis of no level relationship at all significance levels. In the next step, you use the t-statistic to test the null hypothesis that the speed-of-adjustment coefficient is equal to zero. Again, in your example you reject this null hypothesis. Overall, you thus conclude that a long-run level relationship exists if in addition at least one of the regressors has a statistically significant long-run coefficient. If all of them are insignificant, then your dependent variable is purely I(0).

    Note that you can use the p-values for your long-run coefficients directly from the regression output to decide about their statistical significance. On the other hand, you cannot use the p-value from the regression output for the speed-of-adjustment coefficient. For the latter, you have to use the critical values from the bounds test.
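    As a minimal sketch (hypothetical variables y, x1, x2):
    Code:
    * long-run coefficients and their p-values appear in the ardl output
    ardl y x1 x2, ec
    * F- and t-statistics together with the bounds test critical values
    estat btest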



  • Babacar Mbengue
    replied
    Thanks, Sebastian Kripfganz! It's working now. I ran the estat btest command and I have the result below:

    [Attachment: res4.png (estat btest output)]

    Now I am worried about the t-statistic. In fact, according to Pesaran et al. (2001), the t-statistic tests for a specific case of the bounds test. I don't know how to interpret the result.

    Should I use the t-statistic to test for a degenerate case (i.e., when, in the conditional ECM, the coefficient of the lagged dependent variable is not significant while the coefficients of the lagged regressors are significant, or when all coefficients of the lagged regressors are not significant while that of the lagged dependent variable is significant)?



  • Babacar Mbengue
    replied
    Sebastian Kripfganz

    There was a devaluation in 1994 in Senegal, so I performed a Chow test like this:
    Code:
    gen d = (year > 1994)         // post-devaluation indicator
    gen x1 = d*ltxx               // interactions of the indicator
    gen x2 = d*lfbcf_pub          //   with each regressor
    gen x3 = d*lide
    gen x4 = d*ldebt_ext
    gen x5 = d*ldef_fbcf_priv
    reg lfbcf_priv d ltxx lfbcf_pub lide ldebt_ext ldef_fbcf_priv x1 x2 x3 x4 x5, r
    [Attachment: res4.png (regression output)]

    Code:
    . test d x1 x2 x3 x4 x5

     ( 1)  d = 0
     ( 2)  x1 = 0
     ( 3)  x2 = 0
     ( 4)  x3 = 0
     ( 5)  x4 = 0
     ( 6)  x5 = 0

           F(  6,    24) =    5.18
                Prob > F =    0.0015
    I don't know whether I would get the same result with the ardl command.
    Last edited by Babacar Mbengue; 03 Nov 2016, 08:19.



  • Sebastian Kripfganz
    replied
    I should have spotted this earlier: you have only 32 observations in total but try to estimate 30 parameters. That cannot work! You definitely need to reduce your model considerably: fewer variables and/or shorter lags.
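    For instance (a sketch; which variables to drop is of course a substantive choice):
    Code:
    * fewer regressors and a tight lag limit (illustrative choice)
    ardl lfbcf_priv ltxx lfbcf_pub lide, maxlags(2) aic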
    Last edited by Sebastian Kripfganz; 03 Nov 2016, 07:22.



  • Babacar Mbengue
    replied
    Sebastian Kripfganz
    Originally posted by Sebastian Kripfganz View Post
    A speed-of-adjustment coefficient of about 7 is a clear sign that something is wrong as this would indicate a very explosive process. Unfortunately, it is not possible to identify the reason for this result based on the estimates alone. It might be that the ARDL model is just not suited to fit your data. Do you have any structural breaks in your time series? As a first step towards identifying the problem, you might want to visually compare the fitted values from the regression in levels (without the ec option) with the original data, e.g.:
    Code:
    ardl lfbcf_priv ltxx lfbcf_pub lide ldebt_ext ldef_fbcf_priv, lags(4 4 3 4 4 4) trendvar(timevar)
    predict fit, xb
    twoway (tsline lfbcf_priv) (tsline fit)
    Good morning. I ran the code above, and this is the result I get:
    [Attachment: Graph1.png (fitted values vs. original data)]


    I also checked the ARDL model for heteroskedasticity, and the Breusch-Pagan / Cook-Weisberg test reports the presence of heteroskedasticity:

    Code:
    . estat hettest

    Breusch-Pagan / Cook-Weisberg test for heteroskedasticity
             Ho: Constant variance
             Variables: fitted values of lfbcf_priv

             chi2(1)      =     5.33
             Prob > chi2  =   0.0209

    Might that be the source of the biased coefficients?
    Another question: how can I apply the Chow test in the ARDL model?
    Last edited by Babacar Mbengue; 03 Nov 2016, 07:50.

