How do your critical values compare with those of Narayan (2005) for small samples, up to say 100 observations?
The Narayan critical values are much less precise because of the smaller number of replications in their simulations. As a consequence, you can even observe non-monotonic behavior of their critical values in some scenarios, i.e. with increasing sample size their critical values might go down, then up, and then down again. Our critical values behave smoothly.
The Narayan critical values are only tabulated for selected sample sizes and may need to be interpolated for sample sizes that lie in between. Our methodology predicts critical values for any sample size (above a minimum number of degrees of freedom).
The Narayan critical values do not take into account that the critical values in small samples also depend on the lag order (the number of short-run coefficients). Ours do.
There is another update of the ardl package (new version: 1.0.4) available on SSC and my personal website:
Code:
adoupdate ardl, update
This update fixes a problem in the postestimation command estat ectest with the extrapolation of p-values for very large values of the bounds test statistics. In the previous version, large values of the F-statistic or t-statistic could produce bizarre and unreliable p-values. Now it is ensured that the p-values equal 0 for such extreme test statistics.
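For illustration, a minimal sketch of where this matters, using Stata's lutkepohl2 example data (the data and lag choice are purely illustrative):
Code:
webuse lutkepohl2, clear
* fit the model in error-correction form, selecting lag orders up to 4
ardl ln_consump ln_inc ln_inv, ec maxlags(4)
* bounds test; the approximate p-values are affected by this fix only when
* the F- or t-statistic is very large
estat ectest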
Hello Sebastian, how does the model determine whether variables are omitted from the short-run output? All 4 of my explanatory variables are listed in the LR output, but only 2 are listed in the SR output.
If you estimate the model with the ec option and the optimal lag order for a given variable is 0, then this variable does not have any other short-run effects beyond the adjustment to deviations from the long-run relationship.
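A minimal sketch of how to check which lag orders were selected, again using Stata's lutkepohl2 example data (your variable names will differ):
Code:
webuse lutkepohl2, clear
* lag orders are selected by information criterion, up to maxlags()
ardl ln_consump ln_inc ln_inv, ec maxlags(4)
* a regressor whose selected lag order is 0 appears in the LR part of the
* output but has no additional terms in the SR part
matrix list e(lags)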
Thank you for the reply. Another short question: Is it intentional that the regstore option only stores the ec coefficients, and not the ec1 coefficients, when the ec1 option is selected?
When a variable enters the model with 0 lags, the ec1 option still reports a long-run coefficient and a short-run coefficient. However, the reported short-run coefficient is just a function of the long-run coefficient and the speed-of-adjustment coefficient. It does not convey any additional information. In other words, the two coefficients are not separately identified. The variance-covariance matrix of the ec1 model in that case is rank deficient. For the underlying regress estimation results, this would require a nonlinear coefficient restriction. To avoid this complication, the regstore() option does not store estimation results with a lagged level term for a variable whose maximum lag order is zero.
In a nutshell: Yes, this is intended. But for the other variables that have lags in the ARDL representation, it still stores the ec1 coefficients.
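A sketch of the behavior described above (the lag orders are fixed here only to force a zero lag order on one regressor, and the stored-results name is arbitrary):
Code:
webuse lutkepohl2, clear
* ln_inv enters with 0 lags, ln_inc with 1 lag, the dependent variable with 2
ardl ln_consump ln_inc ln_inv, ec1 lags(2 1 0) regstore(ardlreg)
* make the underlying regress results the active estimation results
estimates restore ardlreg
* replay the underlying linear regression stored by regstore()
regress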
Hi, I am doing research on the relationship between the unemployment rate and the inflation rate, and I will be using the ARDL model for this study. I have already obtained the optimal lags for each variable, and now I have to test the stationarity of the two variables that I have. The first variable (the inflation rate) turns out to be stationary, but the second variable (the unemployment rate) is non-stationary. My question is: is it okay to run an ARDL model even though one of my variables is non-stationary? I really need an answer for this one; it would mean a lot if someone could help me. Thanks in advance!
The ARDL framework can accommodate both stationary and nonstationary variables. You do not even have to pre-test for stationarity. My recent article with Daniel Schneider (and the references therein) might be a useful resource to gain some understanding of the matter:
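As a minimal sketch for the setting described in the question, with placeholder variable names (the data must be tsset first):
Code:
* declare the time-series structure; time is a placeholder variable name
tsset time
* no unit-root pretesting is required before fitting the model
ardl inflation unemployment, ec maxlags(4)
* the bounds test then addresses whether a long-run levels relationship exists
estat ectest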
Regarding post #16, the reported p-value for the first lag of the dependent variable is not valid; that is why we perform the bounds t-test. But what about the p-values reported for the independent variables (e.g. the variable ln_inc in the LR section on Slide 15 of your 2018 presentation)? Are they valid? If yes, then a visual inspection of these p-values would (if I am thinking about this correctly) make the individual Wald tests described in Step 3 on Slide 18 redundant.
John:
The long-run coefficients are asymptotically normally distributed, and therefore you can interpret the p-values in the usual way; see for instance the final paragraph on page 1460 of my recent OBES article with Daniel Schneider. A "visual inspection of these p-values" is nothing else than conducting a two-sided z-test, which is equivalent to an individual Wald test.
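A sketch of this equivalence, using Stata's lutkepohl2 example data and assuming the long-run coefficients are stored under the LR equation label shown in the output:
Code:
webuse lutkepohl2, clear
ardl ln_consump ln_inc ln_inv, ec maxlags(4)
* the individual Wald test on the long-run coefficient of ln_inc leads to the
* same conclusion as the p-value reported for ln_inc in the LR section
test [LR]ln_inc = 0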
Hello,
I have some questions about the ARDL-ECM Model:
1- I tested for unit roots using the ADF and PP tests to check the order of integration and found that the dependent variable is stationary in levels, but when I used the KPSS and ERS tests, I found that the dependent variable is stationary only in first differences. Is it possible here to estimate the ARDL-ECM model based only on the KPSS and ERS results?
2- To check for cointegration, we should first determine the maximum lags of all variables (a fundamental condition of the bounds test), but in the code that you mention here, you just estimate the ARDL model in levels with maxlags chosen freely or by chance? Then you run the bounds test without specifying the maximum lags of all variables in the same command, estat ectest. I want to know why we check the maximum lags by using the command matrix list e(lags)?
Can anyone help me, please? I am so confused.
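For reference, the command sequence being discussed looks roughly like this (a sketch with placeholder variable names):
Code:
* estimate the ARDL model, selecting the lag orders up to maxlags()
ardl y x1 x2, ec maxlags(4)
* list the lag orders that were actually selected for each variable
matrix list e(lags)
* the bounds test is a postestimation command, so it is based on the model
* just estimated and its lag orders do not need to be specified again
estat ectest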