  • Testing two models with different dependent variables but an identical set of regressors

    Dear All,

    Consider the following equations:

    Code:
    Y1 = a + b*X1 + c*X2 + error
    Y2 = d + e*X1 + f*X2 + error
    Basically, the two models have different dependent variables and identical regressors. Do you think it is worthwhile to run a test to check which of the two models is preferable? In my understanding, such a test would indicate which dependent variable is better modelled by X1 and X2. Is that a valid interpretation?

    Regarding the test (if it is meaningful at all), I think I cannot use the command lrtest, which works only for nested models. But could I construct an F-test using the information I get from each estimation?
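
    For concreteness, here is a minimal sketch in Stata of what I mean (the variable names y1, y2, x1, and x2 are just placeholders for my actual variables):

    Code:
    * fit each model on the same sample and store the results
    regress y1 x1 x2
    estimates store m1
    regress y2 x1 x2
    estimates store m2
    * lrtest m1 m2 would not be meaningful here: the two models are not nested,
    * and they do not even share a dependent variable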

    Thanks in advance for your suggestions.

    Best,

    Dario

  • #2
    I think an F-test also requires nested models, and a quick Google search seems to confirm that. If these are strictly OLS models, you might be able to get a sense of which one is better by comparing the R-squared statistics, but I'm not aware of a valid hypothesis test for the null hypothesis that there is no difference in the R-squared statistics across models like these. Still, depending on your goals, a much larger R-squared for one model tells you that the regressors explain much more of the variation in that dependent variable than in the other (at least in your sample, if not in your population under repeated sampling).
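
    If it helps, here is a minimal sketch of that informal comparison in Stata (placeholder variable names again; this is purely descriptive, not a hypothesis test):

    Code:
    * fit both models on the same sample and report each R-squared
    regress y1 x1 x2
    scalar r2_m1 = e(r2)
    regress y2 x1 x2
    scalar r2_m2 = e(r2)
    display "R-squared, model 1 = " r2_m1 _newline "R-squared, model 2 = " r2_m2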

    • #3
      Daniel Schaefer thanks for your reply. I had seen the link you sent before making my post. I should say that the idea I described does not look reasonable to me either. One of the people I am working with insists on following that idea, but I find it strange to compare models that are non-nested and have different dependent variables. It feels like I am trying to compare two results that actually cannot be compared.

      • #4
        Without a lot more detail on the context of this analysis, I think such an approach would generally be uninformative.

        One general idea: if your two dependent variables represented two distinct subgroups, then this could be useful as a test of model or error invariance. I wouldn't necessarily frame it in this way, but it is done commonly enough.

        • #5
          @Lorenzo Guzzetti Thanks for your reply. The two models do not test subgroups. I have a dataset on FDI inflows. In one case I use per-capita FDI inflows as the dependent variable, while in the other I use a transformation of the per-capita FDI inflows. I use the same observations in both cases, so the two equations are identical in terms of observations and regressors; the only difference is the dependent variable.
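
          In Stata terms, the setup looks roughly like this (variable names are placeholders, and log() is written only as a stand-in for the transformation):

          Code:
          * same sample, two versions of the dependent variable (placeholder names)
          generate fdi_pc   = fdi / population   // per-capita FDI inflows
          generate fdi_pc_t = log(fdi_pc)        // stand-in for the actual transformation
          regress fdi_pc   x1 x2
          regress fdi_pc_t x1 x2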

          • #6
            Originally posted by Dario Maimone Ansaldo Patti:
            In one case I use the per-capita FDI inflows, while in the other I use a transformation of the per-capita FDI inflows.
            So wouldn't you want to choose between them on the basis of functional form (linearity), interpretability, or which of the two better represents the phenomenon that you're interested in?

            The insistence on deferring to some kind of null hypothesis statistical test result by one of the persons with whom you work seems misplaced; you might want to push back.

            • #7
              Joseph Coveney You are right. Indeed, I pushed back on the request.
