
  • Interpreting regression results

    Hi there,

    I analyze the effect of the document file size of an annual report on a firm's stock market performance (cumulative abnormal return), using both a market-adjusted model and a market model.

    The document file size is the explanatory variable. For now I use only a univariate regression model. Note that a cumulative abnormal return of, for example, 15% is recorded as 0.15 in the data set.

    I wonder whether my interpretation of the regression results is correct.


    [Attached image: Regression Results.jpg]

    [Attached image: Interpretation Regression Results.jpg]




    Would you be so kind as to tell me whether this economic interpretation of the regression results is correct?

    Thank you very much! (Sorry for the formatting!)

  • #2
    The calculations are done correctly. I take issue with the language "the cumulative abnormal return ... is..." Regression models are simply not that deterministic unless, as almost never happens in real life, R² = 1. Better to say "the expected cumulative abnormal return ... is ...."

    Stylistically, working with coefficients on the order of 10⁻¹¹ and predictor variables in the millions, spelled out to 7 significant figures, is awkward. If I were working on this, I would rescale the returns outcome variable to percent rather than proportion, and I would rescale the document file size to millions of characters. That would make the numbers appear "more normal". Nothing substantive would change as a result of that rescaling.
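    To illustrate that the rescaling is purely cosmetic, here is a quick sketch (in Python rather than Stata, on simulated data, not the OP's) that fits the same univariate regression before and after rescaling: the slope changes by a known factor, while the t statistic and R² do not change at all.

```python
# Sketch on simulated data: rescaling y and x changes only the
# coefficient's magnitude, never the inference (t statistic, R-squared).
import math
import random

def simple_ols(x, y):
    """Univariate OLS by hand: returns (slope, t statistic, R-squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                      # slope
    a = my - b * mx                    # intercept
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    sse = sum(e ** 2 for e in resid)
    sst = sum((yi - my) ** 2 for yi in y)
    se_b = math.sqrt(sse / (n - 2) / sxx)
    return b, b / se_b, 1 - sse / sst

random.seed(1)
filesize = [random.uniform(1e6, 9e6) for _ in range(200)]   # characters
car = [2e-9 * f + random.gauss(0, 0.05) for f in filesize]  # proportion

b1, t1, r1 = simple_ols(filesize, car)
# rescaled: CAR in percent, file size in millions of characters
b2, t2, r2 = simple_ols([f / 1e6 for f in filesize],
                        [100 * c for c in car])

print(round(t1 - t2, 8), round(r1 - r2, 8))  # both ~0: inference unchanged
print(b2 / b1)                               # slope scaled by 100 * 1e6 = 1e8
```

    The slope is multiplied by exactly 100 × 10⁶ = 10⁸, while the t statistic and R² are identical up to floating-point noise.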



    • #3
      I am not aware of any reason why what the OP has written should be incorrect.

      I agree with Clyde that the OP should choose reasonable units of measurement for both dependent and independent variables: for example, percent for the dependent variable, and whatever units of the independent variable give human-readable coefficients.



      • #4
        Konrad: If you're looking for suggestions to improve the regression, I have a couple. First, you do not want to report coefficients with so many zeros after the decimal point. You could rescale your y and x variables, but I have a somewhat different suggestion. First, rescale y so it is actually a percent; that is, multiply it by 100. Then, because you seem interested in estimating the effect of increasing the characters by a certain percent, estimate that effect directly.

        Code:
        * express CAR in percent rather than as a proportion
        replace CAR = 100*CAR
        * log of file size: coefficient measures the effect of proportional changes
        gen lfilesize = log(filesize)
        * univariate regression with heteroskedasticity-robust standard errors
        reg CAR lfilesize, vce(robust)
        You can then increase lfilesize by, say, 0.10 -- roughly a 10 percent increase in file size. Multiply 0.1 by the coefficient on lfilesize and that gives the percentage-point change in CAR.
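        As a worked version of that arithmetic (in Python, with a made-up coefficient, since the thread does not report one):

```python
import math

b_lfilesize = -1.8   # hypothetical estimate, with CAR measured in percent
dlog = 0.10          # change in log file size ("roughly a 10 percent increase")
effect = round(dlog * b_lfilesize, 4)
print(effect)                        # -0.18 percentage points of CAR
# the change is only "roughly" 10 percent: the exact increase is exp(0.10)-1
print(round(math.exp(dlog) - 1, 4))  # 0.1052, i.e. about 10.5 percent
```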

        Also, I know it varies by discipline, but reporting standard errors, not t statistics, below the coefficients is preferable. One can always compute the t statistic from the standard error, and the standard error also makes it easier to compute a confidence interval.

        How can you justify taking the log, other than by the convenient interpretation of the coefficient? Compare the R-squared from using the level of filesize with the R-squared from using the log. Obviously I don't know which one will be bigger. If you settle on the level of filesize, you should divide it by something like 100,000 so that it is measured in hundreds of thousands. That will make the equation look better (along with defining CAR to be a percent).
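        A sketch of that R-squared comparison (in Python, on hypothetical simulated data, not the OP's; here the outcome is generated linear in the log, so the log specification wins by construction):

```python
# Compare fit of the level vs. the log of filesize in a univariate regression
# by comparing R-squared, on data simulated to be linear in log(filesize).
import math
import random

def r_squared(x, y):
    """R-squared from a univariate OLS of y on x, computed by hand."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    sst = sum((yi - my) ** 2 for yi in y)
    return 1 - sse / sst

random.seed(2)
filesize = [random.uniform(1e5, 1e7) for _ in range(300)]
# simulate CAR (in percent) as linear in log(filesize) plus noise
car = [-1.8 * math.log(f) + random.gauss(0, 1) for f in filesize]

r2_level = r_squared(filesize, car)
r2_log = r_squared([math.log(f) for f in filesize], car)
print(r2_level, r2_log)   # the log specification fits better by construction
```

        On real data the ranking could of course go either way; the point is only that both R-squared values come from the same outcome variable, so they are directly comparable.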

        JW

