
  • What causes opposite results from a normality graph and normality tests?

    I ran -regress-, then used -predict- to obtain the residuals, and plotted them with -histogram resid, kdensity normal- as a graphical check. The graph shows normally distributed residuals, but the Jarque-Bera test gives completely opposite results.

    Jarque-Bera normality test: 31.72 Chi(2) 1.3e-07
    Jarque-Bera test for Ho: normality:


    swilk resid

    Shapiro-Wilk W test for normal data

        Variable |       Obs        W          V         z    Prob>z
    -------------+--------------------------------------------------
           resid |      1251  0.77289    175.662    12.915   0.00000




    I am confused about why the results are so contradictory.

    [Image: normal plot.png]
    [Image: qnoram.png]
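For anyone who wants to reproduce this kind of comparison outside Stata, here is a minimal Python sketch of the same workflow (fit a regression, take the residuals, then run Jarque-Bera and Shapiro-Wilk on them). The data below are simulated with normal errors, not the poster's dataset; only the sample size n = 1251 is taken from the post.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated stand-in for the poster's data (n = 1251 as in the post)
n = 1251
x = rng.normal(size=n)
y = 2.0 + 0.5 * x + rng.normal(size=n)  # normal errors by construction

# Fit y = a + b*x by least squares and take the residuals
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Jarque-Bera (skewness/kurtosis based) and Shapiro-Wilk tests
jb_stat, jb_p = stats.jarque_bera(resid)
sw_w, sw_p = stats.shapiro(resid)
print(f"Jarque-Bera: stat={jb_stat:.2f}, p={jb_p:.4f}")
print(f"Shapiro-Wilk: W={sw_w:.4f}, p={sw_p:.4f}")
```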

  • #2
    No; they are not totally opposite. You have an approximately normal distribution (good news); the distribution is nevertheless detectably non-normal with that sample size.

    I wouldn't ever prefer the Jarque-Bera test (uses asymptotic results inappropriately) to the Shapiro-Wilk test. I wouldn't ever prefer either to a quantile plot.
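The point that a distribution can be "approximately normal yet detectably non-normal" at n = 1251 is easy to demonstrate by simulation. The sketch below (my own illustration, not from the thread) draws mildly heavy-tailed residuals from a Student's t with 5 degrees of freedom; a histogram of such draws looks close to a normal curve, yet both tests flag the departure at this sample size.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 1251

# Mildly heavy-tailed "residuals": Student's t with 5 df.
# Visually near-normal, but the excess kurtosis is detectable at n = 1251.
resid = rng.standard_t(df=5, size=n)

sw_w, sw_p = stats.shapiro(resid)
jb_stat, jb_p = stats.jarque_bera(resid)
print(f"Shapiro-Wilk p = {sw_p:.2e}")
print(f"Jarque-Bera p = {jb_p:.2e}")
```

Both p-values come out small, even though a histogram of these draws would pass most eyeball tests.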



    • #3
      Let me also add that there is really no reason to test normality of residuals in a regression based on 1,251 observations. In small samples, normal residuals are needed to assure that the t-statistics actually have a t-distribution. But in large samples, the central limit theorem asymptotically gets you to the same result. While it is possible to construct an example of a pathological distribution of residuals that, even with a sample of 1,251, leaves you with a sampling distribution of the t-statistics that is far from t (or, in large samples, far from normal), such examples rarely arise in practice, and if you had one, the histogram and -qnorm- plot wouldn't look even remotely like a normal distribution.

      With large samples, my recommendation is not to even give normality a moment's thought unless you have prior knowledge that you are working with an extremely pathological situation. If you feel compelled, nevertheless, to think about it, the graphs you have done are the way to go. Formal hypothesis testing of normality will almost always reject normality in realistic data, but it doesn't at all invalidate your regression inferences if the graphical evidence is reasonably normal looking.
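      The central-limit-theorem argument above can also be checked by simulation. The sketch below (an illustration under assumed conditions, not code from the thread) uses strongly skewed chi-square errors with n = 1251 and a true slope of zero; every simulated residual vector would fail a normality test, yet the slope's t-statistic rejects at about the nominal 5% rate.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 1251, 2000
x = rng.normal(size=n)           # fixed design across replications
xc = x - x.mean()
sxx = (xc ** 2).sum()

rejections = 0
for _ in range(reps):
    # Strongly skewed errors: centered chi-square(2); true slope is 0
    e = rng.chisquare(2, size=n) - 2.0
    y = 1.0 + e
    b = (xc * y).sum() / sxx                 # OLS slope estimate
    resid = y - y.mean() - b * xc            # residuals from the fit
    se = np.sqrt((resid ** 2).sum() / (n - 2) / sxx)
    if abs(b / se) > 1.96:
        rejections += 1

print(f"Rejection rate at nominal 5%: {rejections / reps:.3f}")
```

The empirical rejection rate lands near 0.05, which is the point: with n this large, non-normal residuals do little damage to the usual inferences.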
