
  • linktest and ovtest produce conflicting results

    I am doing a model specification test for my regression using two methods, linktest and ovtest.

    Linktest tells me that there is a specification error (_hatsq is significant), but ovtest shows no specification error (p-value = 0.20). In such a case, which result should I rely on?
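
    For concreteness, a minimal sketch of how the two tests are run (with hypothetical variable names y, x1, x2):

    Code:
    regress y x1 x2
    linktest          // regresses y on _hat and _hatsq; a significant _hatsq flags misspecification
    estat ovtest      // Ramsey RESET based on powers of the fitted values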

    Thank you very much!

  • #2
    They are testing different hypotheses. As the manual says, although linktest is formally a test of the specification of the dependent variable, it is often interpreted as a test that, conditional on the specification, the independent variables are specified incorrectly. ovtest, by contrast, is a test for whether there are omitted variables (see the help file: help regress postestimation##estatovt).



    • #3
      Originally posted by Rich Goldstein View Post
      They are testing different hypotheses. ... ovtest, by contrast, is a test for whether there are omitted variables (see the help file: help regress postestimation##estatovt).

      Thank you very much. Actually, in a UCLA tutorial, linktest is used as a parallel alternative to ovtest. I have also read the Stata manual entry for linktest.

      So my interpretation is that a failed linktest (_hatsq is significant) means that what is misspecified is the dependent variable rather than the independent variables. To fix this error, I should transform the dependent variable (e.g., generate its reciprocal, as in example 2 of the Stata manual and the sketch below), but should not change the independent variables (as in example 1 of the manual).
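
      A minimal sketch of that idea, assuming hypothetical variables y, x1, x2 and that y is never zero:

      Code:
      generate double inv_y = 1/y     // reciprocal transform of the dependent variable
      regress inv_y x1 x2
      linktest                        // re-check whether _hatsq is still significant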

      Is my interpretation correct?

      Thank you very much again!



      • #4
        Given the small amount of information you have presented, then yes, you could do that (or you could use glm with a link function, which might serve the same purpose; see the sketch below).
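
        A minimal sketch of the glm alternative (hypothetical variable names; the power(-1) link plays the role of the reciprocal transform without transforming y itself):

        Code:
        glm y x1 x2, family(gaussian) link(power -1) vce(robust)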



        • #5
          Dear Alex,

          To be honest, I think Stata does a really poor job with these tests (I hope someone from StataCorp reads this!). There are several points to be made here:

          - As far as I understand, -ovtest- is a RESET test. Therefore it is a check of the correct specification of the conditional mean, conditioning on the set of variables being used. Hence it is not a test for omitted variables, as the name suggests; this is very confusing, and it may explain Rich's comment in #2.

          - The particular form of the RESET that is implemented in -ovtest- does not allow the user to choose the number of powers to include and does not allow the use of robust standard errors. These are serious limitations.

          - From what I read in the manual, -linktest- is a simplified version of the RESET. It allows the use of robust standard errors, but the number of powers is again fixed. Moreover, it is not a true RESET test, and I guess it will have less power; to put it differently, I am not aware of any optimal properties for this form of the test.

          - More importantly, I find it shocking that the manual says that it is a test of the specification of the dependent variable. The choice of dependent variable is dictated by the substantive question we are interested in; no statistical test can check that!

          Given the drawbacks of these procedures, I suggest that you simply run the RESET "by hand", using a suitable number of powers (just squares, or squares and cubes) and with appropriate standard errors, along the lines of the sketch below.
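
          A minimal sketch of the "by hand" RESET with squares and cubes (hypothetical variable names; robust standard errors assumed to be appropriate):

          Code:
          regress y x1 x2, vce(robust)
          predict double yhat, xb
          generate double yhat2 = yhat^2
          generate double yhat3 = yhat^3
          regress y x1 x2 yhat2 yhat3, vce(robust)
          test yhat2 yhat3                 // joint significance signals misspecification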

          Best of luck,

          Joao



          • #6
            Originally posted by Joao Santos Silva View Post

            Dear Joao,

            Thank you so much! It is also my feeling that we cannot simply transform the dependent variable as linktest suggests, since the choice of dependent variable should be determined by theory or empirical experience.

            Two more questions on which I hope to have your valuable answers. I tried both "ovtest" and "ovtest, rhs" (sketched below), and my model passes "ovtest" but not "ovtest, rhs"; the difference in p-values is huge. In this case, should I conclude that my model has potential specification errors?
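
            For reference, a sketch of the two variants (hypothetical variable names):

            Code:
            regress y x1 x2
            estat ovtest        // RESET with powers of the fitted values
            estat ovtest, rhs   // RESET with powers of the right-hand-side variables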

            And if my aim is to run a fixed-effects regression, can I still use "reg y x" followed by "ovtest" to do the RESET? In other words, the RESET result would then come from a pooled-OLS regression, not from a fixed-effects regression. I know that ovtest does not work after xtreg, fe. I have seen different opinions on this question.

            Thank you so much again!

            Alex



            • #7
              Dear Alex,

              Without knowing whether you need robust standard errors, I would not trust any of these tests. As I said, what you have to do is perform the test by hand, using the appropriate standard errors. You can also do this with a panel, using the usual fixed effects and clustered standard errors, along the lines of the sketch below.
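
              A minimal sketch for the panel case (hypothetical panel identifiers id and year, hypothetical regressors x1 and x2):

              Code:
              xtset id year
              xtreg y x1 x2, fe vce(cluster id)
              predict double xbhat, xb             // linear prediction x*b, excluding the fixed effects
              generate double xbhat2 = xbhat^2
              generate double xbhat3 = xbhat^3
              xtreg y x1 x2 xbhat2 xbhat3, fe vce(cluster id)
              test xbhat2 xbhat3                   // joint test of the added powers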

              All the best,

              Joao

