
  • Interpretation of "sdtest" results

    Hey,

    I am having trouble interpreting the last row of the output table of sdtest. The standard example from Stata's help looks like this:
    Code:
             Pr(F < f) = 0.2862          2*Pr(F < f) = 0.5725                  Pr(F > f) = 0.7138
    We cannot reject the hypothesis that the standard deviations are the same.
    1. Why can we not reject H0? At which significance level, or does this not matter?
    2. What is the correct interpretation of the term "2*Pr(F < f)" in particular?
    3. Would there be a difference if the term were "2*Pr(F > f)" (with > instead of <)?
    4. What is the right number to look at for the one-sided or the two-sided test?
    Thanks a lot!!!

  • #2
    The output is similar to the ttest output. Look at the middle test result (P = 0.5725); it is a two-sided test of the null hypothesis that the SDs are equal, the alternative hypothesis being that they are not equal (Ha: ratio != 1). Don't look at the leftmost and rightmost test results (the smaller of them is half the two-sided test, i.e., a one-sided test).

    Code:
             Ha: ratio < 1               Ha: ratio != 1                        Ha: ratio > 1
           Pr(F < f) = 0.2862          2*Pr(F < f) = 0.5725                  Pr(F > f) = 0.7138
    If we use a 5% significance level, then since P > 0.05 we have no reason to reject the null hypothesis of equal SDs.
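
    For illustration, a minimal way to produce this kind of output; a sketch using Stata's shipped auto dataset rather than the original poster's data:

    Code:
             sysuse auto, clear
             sdtest mpg, by(foreign)    // variance-ratio test comparing the SD of mpg across the two groups
    The last row of the resulting table has the same three-column layout as above; the middle column is the two-sided P-value.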



    • #3
      Dear Svend,

      Thanks a lot for your quick answer. Does the direction of the greater-than sign (> or <) have any importance? Based on my own data I get both directions for an almost equal sdtest (see the attached screenshot for details):
      • 2*Pr(F > f) = 0.6929
      • 2*Pr(F < f) = 0.9713
      Question: Given a 10% significance level and P > 0.1 in both results, should I not reject the null hypothesis of equal SDs for both tests?

      [Attached screenshot: Screenshot 2014-07-11 14.18.04.png]



      • #4
        I tend to consider Pr(F < f) a very short shorthand that isn't really logical. (Others may think otherwise.) It indicates which alternative hypothesis the one-sided P-value corresponds to (is the observed SD ratio smaller or larger than 1?). The two-sided P-value is just twice the one-sided one. I always suggest focusing on the middle test result and developing blind spots to the others. This applies to sdtest, ttest, and prtest.
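
        To make the arithmetic concrete: the two-sided P-value is just twice the smaller of the two one-sided tails. A sketch using Stata's F-distribution function, with made-up degrees of freedom and F statistic (hypothetical values, not taken from any post above):

        Code:
                 local f = 0.81                                          // hypothetical observed variance ratio
                 display "Pr(F < f)   = " 1 - Ftail(23, 11, `f')         // lower tail, with hypothetical df 23 and 11
                 display "Pr(F > f)   = " Ftail(23, 11, `f')             // upper tail
                 display "two-sided P = " 2*min(1 - Ftail(23, 11, `f'), Ftail(23, 11, `f'))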



        • #5
          This question is for Svend Juul. What happens if the alternative hypothesis IS significant? I have a middle value of 0.04 from my sdtest var2==var1 syntax.

          Can I still run a paired t-test at this point or not? Thanks!



          • #6
            You seem to be under a misapprehension: it is not a good idea to base your choice of t-test on a preliminary test of equality of variances. This introduces biases, something that has been known and written about at least since 1944 (Bancroft, TA (1944), "On biases in estimation due to preliminary tests of significance", The Annals of Mathematical Statistics, 15(2): 190-204), with many articles since then. If you are worried about unequal variances, just do an unequal-variances version of the t-test.
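
            In Stata that just means adding the unequal option to ttest; a minimal sketch with the shipped auto dataset (not the poster's own variables):

            Code:
                     sysuse auto, clear
                     ttest mpg, by(foreign) unequal    // two-sample t-test allowing unequal variances (Satterthwaite df)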



            • #7
              Thanks for flagging that early reference discussing the bias of preliminary testing.



              • #8
                You're welcome.

                But I note that I misread #5, which actually asks about a PAIRED t-test. The issue does not arise there: there is no option for doing anything about unequal variances, nor any need for one, because variance equality is not an issue for the paired test.
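
                For reference, the paired test is run on the two variables directly, so there is no variance option to look for; a sketch with hypothetical variable names before and after:

                Code:
                         ttest before == after    // paired t-test: tests whether the mean within-pair difference is zero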



                • #9
                  Thank you, Rich Goldstein. I don't know why I was getting so stuck on this. It is good to know that for paired t-tests, unequal variances are a non-issue.
