  • Hello Sebastian

    Great work you guys have done on the ardl command. It seems that ardl uses the regular delta method to calculate standard errors for the long-run equation. EViews reports different standard errors for the long-run equation, which they claim come from the delta method proposed by Pesaran and Shin (1998). Do you know how to get the Pesaran and Shin (1998) standard errors in Stata?

    Thanks in advance.



    • Kindly help me with the ardl command to find the maximum lags to be used. My dependent variable is the bank benchmark rate, and the independent variables are the monetary policy rate (repo), cost of funds, and non-performing loans (log of gnpa), with 36 observations.

      I ran the following commands in Stata:
      ardl bnchmrk repo lgnpa cof, aic
      ardl bnchmrk repo lgnpa cof, bic

      The aic option reports an ARDL(1,1,0,0) regression and the bic option reports an ARDL(1,0,0,0) regression.

      I am confused about which selection to use. Please help.

      Also, please help me with the vecrank command in Stata, which I used to find the cointegrating vectors as mentioned in an earlier post here. How do I interpret the output?

      . vecrank bnchmrk repo lgnpa cof

      Johansen tests for cointegration
      Trend: constant                                      Number of obs =  34
      Sample: 3 - 36                                                Lags =   2
      -----------------------------------------------------------------------
      maximum                                      trace    5% critical
        rank    parms        LL        eigenvalue  statistic    value
          0       20      76.161843        .        57.4386     47.21
          1       27      89.835272     0.55261     30.0917     29.68
          2       32      97.807025     0.37433     14.1482*    15.41
          3       35     104.19286      0.31315      1.3765      3.76
          4       36     104.88112      0.03968
      -----------------------------------------------------------------------

      Does this mean that there are 2 cointegrating vectors? If so, can I use the ARDL model here?

      Thanks in advance.
      Last edited by Anil Raj; 20 Feb 2019, 11:32.



      • Marcelo Dias-Paes-Ferreira:
        We just compared the output from our ardl Stata command to the EViews (version 9.5) table "ARDL Long Run Form and Bounds Test" (subtable "Levels Equation"). The coefficients and standard errors coincide exactly. I am not sure where you have seen different standard errors in EViews.

        Pesaran and Shin (1998) compare the conventional Delta-method standard errors (which are implemented in our ardl package) with standard errors obtained from an asymptotic formula (valid only under the assumption that the long-run forcing variables are I(1)). They conclude:
        Therefore, the standard error for the estimator of the long run parameter, µ, obtained using the Δ-method is asymptotically the same as that given by [the asymptotic formula], which was derived assuming that xt is I(1). One important advantage of the variance estimator obtained by the Δ-method over the asymptotic formula [..] lies in the fact that it is asymptotically valid irrespective of whether xt is I(1) or I(0), while the latter estimator is valid only if xt is I(1).
        The two variance estimators clearly differ in finite samples.
        Given that the bounds test does not require all variables to be I(1), and given that the two estimators are asymptotically equivalent when all variables are I(1), we currently do not plan to implement alternative standard errors.
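
        To illustrate the Delta method in practice: Stata's nlcom command applies it to any nonlinear combination of coefficients. Below is only a rough sketch for a hypothetical ARDL(1,1) model in y and x; the regstore() option for accessing the underlying regress results is taken from the ardl help file, and the long-run coefficient is theta = (b[x] + b[L.x]) / (1 - b[L.y]).

        ardl y x, lags(1 1) regstore(ardlreg)
        estimates restore ardlreg
        * Delta-method standard error of the long-run coefficient of x
        nlcom (_b[x] + _b[L.x]) / (1 - _b[L.y])

        The point estimate and standard error from nlcom should match the long-run coefficient reported by ardl, which uses the same Delta method.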

        Reference:
        Pesaran, M. H. and Y. Shin (1998). An autoregressive distributed-lag modelling approach to cointegration analysis. In S. Strom (Ed.), Econometrics and Economic Theory in the 20th Century. The Ragnar Frisch Centennial Symposium, Chapter 11, pp. 371-413. Cambridge: Cambridge University Press.



        Anil Raj:
        The BIC tends to select more parsimonious models than the AIC. If your sample size is rather small, you might prefer the BIC to avoid estimating too many parameters. Otherwise, in particular if you want to carry out the bounds test, you might prefer the AIC, because the chance of remaining serial correlation in the errors is reduced in models with richer dynamics (i.e. higher lag orders). There is a vast literature on model selection criteria that you can easily find online if you need more background information.
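
        As a rough sketch of that workflow (variable names taken from the post above; the regstore() option is assumed from the ardl help file), one could compare the two selections and then check the chosen model for remaining serial correlation:

        ardl bnchmrk repo lgnpa cof, aic regstore(m_aic)  // richer dynamics, e.g. ARDL(1,1,0,0)
        ardl bnchmrk repo lgnpa cof, bic regstore(m_bic)  // more parsimonious, e.g. ARDL(1,0,0,0)
        estimates restore m_aic
        estat bgodfrey                                    // Breusch-Godfrey test for serial correlation
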
        https://www.kripfganz.de/stata/



        • After running estat ectest, when I try to run estat hettest / estat imtest / estat durbinalt, Stata 14 shows the error "estat hettest not valid". Please help.



          • You need to restore the underlying regress estimation results first before you can use these standard postestimation commands. Please see slides 27 to 29 of my presentation at last year's London Stata Conference:
            Kripfganz, S. and D.C. Schneider (2018). ardl: Estimating autoregressive distributed lag and equilibrium correction models. Proceedings of the 2018 London Stata Conference

            Further discussion of the new version of the ARDL command in the following Statalist topic:
            ARDL: updated Stata command for the estimation of autoregressive distributed lag and error correction models
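
            A minimal sketch of that workflow, assuming the regstore() option described in the slides (the stored-results name myreg is arbitrary):

            ardl bnchmrk repo lgnpa cof, aic ec regstore(myreg)
            estat ectest                 // bounds test on the EC model
            estimates restore myreg      // switch to the underlying regress results
            estat hettest
            estat imtest
            estat durbinalt
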
            https://www.kripfganz.de/stata/



            • Thank you so much. I got it right.



              • But the estat sbcusum command says it is not valid, while all the other postestimation commands are working?



                • If you are using an older version than Stata 15, estat sbcusum is not available. You can use the community-contributed command cusum6 (available from SSC) instead.
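
                  A short sketch of both routes, again assuming the regstore() option (myreg is an arbitrary name):

                  * Stata 15 or newer: restore the underlying regress results, then use the built-in test
                  ardl y x, ec regstore(myreg)
                  estimates restore myreg
                  estat sbcusum
                  * older Stata versions: install and use the community-contributed command instead
                  ssc install cusum6
                  help cusum6
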
                  https://www.kripfganz.de/stata/



                  • Hi Sebastian,

                    I have been reading through the forum and have followed your ARDL EC approach with my data. These might be very basic questions, but I still can't figure them out.

                    My main question is about the interpretation of the results generated:

                    1. As far as I understand, we can directly interpret the LR coefficients (e.g. +0.5 with a significant p-value): a 1% increase in X causes a 0.5% increase in Y (the dependent variable) - is that correct? Even if the dependent variable is in the form D.lnY?

                    2. For the short-run results from ec1, can we directly interpret those coefficients just as above? If not, what further calculations do we need to do?

                    3. Again on the short-run results, but this time from ec: my ARDL(1,0,1) model only generates one variable under the short-run section. Why is this different from the results with ec1? Do we interpret it as a coefficient directly?

                    Thank you very much for all your kind input to this command - it has been helping me a lot!



                    • 1. To be a bit more precise, a 1 unit increase (1% if X is in logs) causes a 0.5 unit increase (0.5% if Y is in logs) in the long-run equilibrium. You can look at it from the perspective of the error-correction term: EC = Y - theta * X. In the long run, EC = 0. Thus, with theta = 0.5, if X increases by 1 unit, then Y must increase by 0.5 units to restore the equilibrium.

                      2. The short-run coefficients are interpreted as usual in linear regression models. The coefficient of D.X measures the immediate effect of a change in D.X on D.Y, holding everything else constant.

                      3. The interpretation of the short-run coefficients is the same in both the ec and ec1 versions of the model, as long as the respective variable enters the ARDL model with at least one lag. If a variable has zero lags in the ARDL form, then the ec1 representation creates an overparameterization: there will be an exact relationship between the respective short-run coefficient and the long-run and speed-of-adjustment coefficients. (If there is no lag in the ARDL form but we introduce a lagged variable in the long-run relationship, the short-run terms need to counteract this.) It does not make much sense to separately interpret the short-run coefficient in this case, which is why I usually recommend using the ec version instead of ec1.
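
                      A quick sketch of the two representations for a hypothetical model in y and x, where x enters with zero lags (the case described above):

                      ardl y x, lags(1 0) ec    // long-run term written with x at time t; no separate D.x term appears
                      ardl y x, lags(1 0) ec1   // long-run term written with x at time t-1; a D.x term appears, but it is an exact function of the speed-of-adjustment and long-run coefficients
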
                      https://www.kripfganz.de/stata/



                      • My dependent variable is the bank benchmark rate and the independent variables are the monetary policy rate (repo), cost of funds, and non-performing loans (log of gnpa), with 36 observations.
                        I ran the ardl command "ardl bnchmrk repo lgnpa cof, aic" (an ARDL(1,1,0,0) regression), then "ardl bnchmrk repo lgnpa cof, aic ec" (the same ARDL(1,1,0,0) regression in EC form), and then "estat ectest". I have the following queries:
                        1) Why does it omit 4 observations and show the number of observations as 32? Are the lags not 1 1 0 0 then?
                        2) estat ectest returned F = 6.085 and t = -3.806. The F-statistic exceeds the 5% I(1) critical value of 4.988, but the t-statistic does not exceed the 5% critical value of -3.871 in absolute terms. How do I conclude whether there is a long-run relationship?

                        The variable list is attached. Please clarify.



                        • 1) The number of observations is determined by the maxlags() option. By default, there are 4 observations reserved for the initial observations. The sample is left unchanged even if the optimal lag order is smaller than 4. Otherwise, the comparison of different lag orders would not be valid.

                          2) There is no perfect relationship between the F-statistic and the t-statistic. See slide 18 of my presentation at last year's London Stata Conference on how to do inference based on the test results. (Note that there is a typo on that slide. Step 3 should read "If H0t is rejected, ...".) If the test based on the F-statistic rejects the null hypothesis but the test based on the t-statistic does not, then you did not find enough statistical evidence in favor of a long-run relationship.
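
                          A small sketch of both points (variable names as in the question; the maxlags() syntax is assumed from the ardl help file):

                          ardl bnchmrk repo lgnpa cof, aic ec              // default maxlags(4): 4 initial observations reserved, hence 32 observations
                          estat ectest                                     // evidence of a long-run relationship only if both the F- and the t-test reject
                          ardl bnchmrk repo lgnpa cof, aic maxlags(1) ec   // smaller maximum lag order, so a larger estimation sample
                          estat ectest
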
                          https://www.kripfganz.de/stata/



                          • Thank you Sebastian.

                            Regarding the maxlags you mentioned above: when I didn't specify maxlags, an optimal selection of (3,4,4,3,2) was chosen, although I only have a sample of 35 observations. I believe this ARDL model cannot work in this scenario - is that correct? And why exactly doesn't it work?

                            (Also, I identified more than one cointegrating vector with this lag selection - I believe ardl does not apply when there is more than one cointegrating vector either.)



                            • Thank you, Dr. Sebastian.
                              By default, the ardl command for the above data set runs an ARDL(1,1,0,0) regression, and estat ectest shows F and t values high enough to confirm a long-run relationship. But when I use the maxlags option and specify maxlag(1 2 0 2) for the bounds test, the F and t values are reduced and imply no long-run relationship. Does this mean that the lagged values of the variables have no long-run relationship?



                              • Sebastian Li
                                If I have counted correctly, an ARDL(3,4,4,3,2) model would estimate 21 coefficients. Given a sample size of 35 observations, this would usually be considered too many parameters. You would need to restrict either the number of variables or the number of lags to have a chance of obtaining reliable parameter estimates.
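
                                For reference, the count works out as follows (each regressor contributes its lag order plus one contemporaneous term, plus the lags of the dependent variable and the constant):
                                3 + (4+1) + (4+1) + (3+1) + (2+1) + 1 = 21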

                                An underlying assumption of the ARDL model is that there exists at most one cointegrating relationship that involves the dependent variable. (There might be additional cointegrating relationships among the independent variables themselves.)
                                https://www.kripfganz.de/stata/

