  • Interpreting standardized beta coefficients of transformed predictors

    I've run a robust ordinary least squares model (with HC3-robust standard errors) that attempts to predict a numeric dependent variable from seven numeric independent variables. However, several of the predictors have been log- or sqrt-transformed. My question: can the "bStdXY" standardized beta coefficients (given by the listcoef command) be directly interpreted for the transformed predictors, or does some kind of "back-transformation" need to be undertaken first (as would be necessary to interpret the regular beta coefficients)?

    In other words, taking a log-transformed predictor as an example, will a standardized beta show the relative predictive strength of the log of the predictor, or of the predictor itself?
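
    For concreteness, here is a minimal sketch of the kind of setup being described, using Stata's auto data as a stand-in for the actual variables; listcoef is the user-written command from Long and Freese's SPost package, not part of official Stata.

    Code:
    * Stand-in for the model described above: HC3-robust OLS with transformed predictors
    sysuse auto, clear
    generate lprice  = ln(price)       // log-transformed predictor
    generate sweight = sqrt(weight)    // sqrt-transformed predictor
    regress mpg lprice sweight displacement, vce(hc3)
    listcoef, help                     // reports b, bStdX, bStdY, and bStdXY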

  • #2
    In other words, taking a log-transformed predictor as an example, will a standardized beta show the relative predictive strength of the log of the predictor, or of the predictor itself?
    To the extent that standardized beta coefficients show anything useful at all, they are, in your situation, about the log of the predictor, not the predictor itself.
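
    For a model fit with regress, bStdXY is (in its usual definition) just the raw coefficient rescaled by sample standard deviations, b*SD(x)/SD(y), where x is whatever actually entered the model. So, continuing the sketch in the first post, the quantity for the log-transformed predictor is scaled by the standard deviation of the log, not of the raw variable; the check below assumes that regression is still the active estimation.

    Code:
    * bStdXY for lprice uses SD(lprice), i.e. the SD of the log, not SD(price)
    quietly summarize lprice
    scalar sd_x = r(sd)
    quietly summarize mpg
    scalar sd_y = r(sd)
    display _b[lprice] * sd_x / sd_y   // should reproduce listcoef's bStdXY for lprice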

    • #3
      I'm hoping to be able to compare the predictive value of certain predictors against others. For instance, if a log-transformed predictor [Predictor A] has a bStdXY value of 0.50 and a different (non-transformed) predictor [Predictor B] has a bStdXY value of 0.25, I would only be able to say that the log of Predictor A has stronger predictive utility than Predictor B. Correct?

      Ultimately, I'm not sure there's much of a substantive difference either way, since in either case it's the underlying variable at work (transformed or otherwise).

      • #4
        I would only be able to say that the log of Predictor A has stronger predictive utility than Predictor B. Correct?
        Incorrect, and this has nothing to do with transformations. The notion that you can say that one predictor has stronger predictive utility than another based on standardized coefficients is just a myth, an illusion. A widely held and widely taught myth, to be sure. But a myth, nonetheless.

        You can only say that if the two predictors are measured in the same units (which standardization gives you) and have the same distribution in the underlying variable (which standardization does not give you). Since two predictor variables rarely have the same distribution in real life, the whole idea of saying that one variable is a stronger predictor than another is basically a fool's errand. Find something better to do with your time.

        Added: In your particular situation, the futility of trying to compare predictive strength is even worse than usual. Even in the unusual circumstance where log(var1) and sqrt(var2) had the same distribution, the untransformed var1 and var2 would necessarily have different distributions, so no comparison could be drawn between them anyway.
        Last edited by Clyde Schechter; 19 Jul 2018, 12:18.
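
        To illustrate the point above with simulated data (a hedged sketch: the numbers and the cutoff are arbitrary, and listcoef is again the user-written SPost command), two predictors with identical true coefficients end up with very different fully standardized coefficients as soon as their sample distributions differ, even though the underlying relationships are unchanged.

        Code:
        * Two predictors with identical true effects
        clear
        set seed 12345
        set obs 1000
        generate x1 = rnormal(0, 1)
        generate x2 = rnormal(0, 1)
        generate y  = x1 + x2 + rnormal(0, 1)

        regress y x1 x2
        listcoef                       // bStdXY roughly equal for x1 and x2

        * Restrict the sample so x1 has a narrower spread: the raw slope on x1 is
        * unchanged in expectation, but its standardized coefficient shrinks sharply
        regress y x1 x2 if abs(x1) < 0.5
        listcoef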
