
  • How to report Bonferroni adjusted correlations and respective significance levels

    I have a question regarding Bonferroni-adjusted correlations. I used Stata's estpost correlate command with the Bonferroni option.

    My understanding is that by applying the Bonferroni adjustment I am taking the benchmark of 0.05 and dividing it by the number of variables. The question is: does Stata automatically do this for the 0.01 and 0.001 benchmarks as well? I am asking because I am wondering how to report the Bonferroni-adjusted correlations with regard to the significance levels.

    Thanks in advance,
    Andreas

  • #2
    Andreas:
    the arbitrary 0.05 threshold is divided by the number of (multiple) comparisons. The inverse approach would give you the significance level uncorrected for multiple comparisons, which is the one you should consider when reporting your results.
    As far as I know (Stata 13.1), there's no way to change the significance level in -anova-; hence, it is set at 0.05.
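    Just to make the arithmetic concrete, here is a sketch (purely illustrative; I am assuming 20 variables, which gives 20*19/2 = 190 pairwise comparisons, since the exact number is not stated in your post):
    Code:
     * Bonferroni arithmetic, assuming 20 variables (placeholder count)
     local k = 20
     local m = `k' * (`k' - 1) / 2      // number of pairwise correlations
     display "comparisons = `m'"
     display "0.05 threshold becomes " 0.05/`m'
     display "0.01 threshold becomes " 0.01/`m'
     * equivalently, as far as I recall, the Bonferroni option multiplies each
     * p-value by the number of comparisons (capped at 1), so the adjusted
     * p-values can still be compared with the usual 0.05, 0.01, 0.001 cut-offs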
    Kind regards,
    Carlo
    (Stata 19.0)

    • #3
      Carlo gave insightful advice. I just wish to remark that the Bonferroni correction is widely considered "too conservative", even for an alpha of 0.05. Therefore, I gather that a lower alpha, say 0.001, plus a Bonferroni correction would entail two unwanted possibilities: either a type II error, or an enormous difference between groups, so huge that we would barely need statistics to see it.
      Best regards,

      Marcos

      • #4
        Carlo, do I understand you correctly that the correlation tables already contain the Bonferroni-corrected values? Hence, I would report these according to the significance levels (* p < 0.10, ** p < 0.05, *** p < 0.01) as produced by "estpost correlate" in combination with esttab?

        Here is the code I used:
        Code:
         qui estpost correlate var1 var2 var3 ...var20, /*
          */ matrix bon
        
         esttab using tada.rtf, replace not unstack /*
          */  compress noobs   /*
          */  star(* 0.05)
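        In case it matters for the reporting question, I suppose the three conventional levels could also be listed explicitly in -star()-; this is a sketch only, reusing the placeholder file name and options from my code above, with the variable list abbreviated as a range (I have not checked it against my data yet):
        Code:
         qui estpost correlate var1-var20, /*
          */ matrix bon

         esttab using tada.rtf, replace not unstack /*
          */  compress noobs   /*
          */  star(* 0.10 ** 0.05 *** 0.01)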
        @Marcos: I read different sources on Bonferroni corrections. It seems that opinions on the usefulness or appropriateness of Bonferroni are quite divided. The overall message, as far as I understood it, is that if you do not apply it, you run the risk of producing considerable Type I errors. Others, as you indicated, argue that it is too conservative and might produce Type II errors. I made a comparison with my data: without Bonferroni, almost every correlation is significant; if I apply Bonferroni, the number of significant correlations declines by roughly 20-30%. Later on, I estimated an OLS model with all the variables that I used in the correlation analysis. Here, even fewer IVs are significant predictors. That made me think of using Bonferroni in the first place.
        Do you have an alternative suggestion?

        • #5
          Andreas:
          the -estpost- help file ("http://www.stata-journal.com/software/sj14-2/st0085_2/estpost.hlp") will refer you to -correlate-; the same formulas are used in -estpost-. Hence, your p-values are already Bonferroni-corrected.
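          If you want to see what the correction does in practice, a quick toy check on a dataset shipped with Stata (not your variables) could look like this:
          Code:
           * toy check on the auto data: unadjusted vs Bonferroni-adjusted p-values
           sysuse auto, clear
           pwcorr price mpg weight length, sig             // unadjusted levels
           pwcorr price mpg weight length, sig bonferroni  // Bonferroni-adjusted levels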
          Kind regards,
          Carlo
          (Stata 19.0)

          • #6
            Hello Andreas,

            It is quite difficult, if not impossible, to give further suggestions without knowing the study design, variables, model, etc. That said, the Bonferroni correction may be employed, and even recommended, by some, albeit for reasons other than choosing a strategy to decrease the number of significant correlations, as you witnessed in your data. After all, what for? Please keep in mind that a significant p-value for a given correlation is a ubiquitous finding and, what is more, it doesn't "tell" much. In short, it says that the correlation is different from zero. On account of that, IMHO, correlations are mostly used on an exploratory basis.

            Hopefully that helps.

            Best regards,

            Marcos

            • #7
              My apologies for only coming back to the thread now. I really appreciate your great comments, which helped me a lot.
              One of the reasons why I want to run correlations before the multiple regression is that I have a large number of variables, some of which are indicators of a "new" dimension within a theoretical framework I want to test. Unfortunately, the dataset has only a medium sample size. That is one reason why I first want to check the bivariate correlations, in order to drop those variables which are already non-significantly related to the DV at this step.

              • #8
                Andreas:
                I share Marcos' helpful advice.
                I would not vouch for the bivariate correlation approach as a way to rule out possibly "redundant" predictors, as its results, by definition, cannot take into account the role played by other variables that are not included in the bivariate correlation.
                The best approach is regression, driven, in terms of model design and predictors, by what others have done in your research field when dealing with the same research topic.
                Personally, for most of its applications, I think that linear regression outperforms -anova-, too.
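                As a rough sketch of what I mean (placeholder names only, echoing the earlier posts; the actual specification should come from the literature in your field):
                Code:
                 * keep the theory-driven candidates in one model instead of screening
                 * them through bivariate correlations (depvar and var1-var20 are placeholders)
                 regress depvar var1-var20
                 testparm var1-var20    // joint test of the block of candidate predictors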
                Kind regards,
                Carlo
                (Stata 19.0)

                • #9
                  Carlo, thank you again for your great help. I am taking your (and Marcos') statistical and methodological advice.

                  Best,
                  Andreas
