  • Clinical Trials: T-Tests or Contrasts? What do you think?

    Thanks in advance. I am a new Stata user, but my question is more general.

    I know this is a trivial question, but I need to understand:
    I am analyzing a trial with 3 groups: A, B, and C; the outcome is continuous.

    I'm interested in comparisons A vs B and A vs C.
    I ran two t-tests directly and corrected them with Bonferroni.

    As a second strategy, I ran the ANOVA and used the contrast command to evaluate the same hypotheses. I obviously got two different results, because the contrasts use the variability of all three groups.
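    Roughly, in Stata terms (outcome and group are placeholder names here, with group coded 1 = A, 2 = B, 3 = C):

    Code:
        * Strategy 1: two direct t-tests, then a manual Bonferroni correction
        ttest outcome if inlist(group, 1, 2), by(group)    // A vs B
        ttest outcome if inlist(group, 1, 3), by(group)    // A vs C
        * compare each p-value against alpha/2 (or, equivalently, double it)

        * Strategy 2: one-way ANOVA, then the same two comparisons as contrasts
        anova outcome group
        contrast {group -1 1 0} {group -1 0 1}    // B vs A, C vs A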

    I have always asked myself:
    1) which of the two approaches is preferable in a trial like mine? Theoretically they should answer the exact same question. Are there any criteria for deciding?

    2) should the p values obtained with the contrasts be corrected for multiplicity? As far as I'm concerned, pre-planned contrasts should NOT be corrected, but I have found a couple of authors who say otherwise.

    What do you think?

  • #2
    Originally posted by Gianfranco Di Gennaro
    1) . . . Are there any criteria for deciding?
    The only criterion that I'm aware of in the context of a clinical trial is that you need to stick to the protocol or statistical analysis plan (SAP).

    There are some who advocate the use of paired t-tests for individual comparisons in lieu of some kind of Satterthwaite-like weighted averaging of the variance estimates from a repeated-measures ANOVA. This has to do with the latter's sensitivity to violations of the sphericity assumption. See for example here (scroll down to the post by David Nichols). I think that, with modern routines for fitting linear mixed models, this concern is probably less acute than when all that was available was least-squares fitting of ANOVA models, because the within-subject covariance can be modeled directly.
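    For illustration only (a repeated-measures setting rather than your three independent groups; y, time, and id are hypothetical names), a mixed model can relax that sphericity-type restriction by modeling the within-subject residual covariance:

    Code:
        * random intercept per subject plus AR(1) within-subject residuals,
        * rather than assuming one common correlation for all pairs of time points
        mixed y i.time || id:, reml residuals(ar 1, t(time))
        contrast time        // omnibus test of the time effect
        * other residual structures (e.g., unstructured) are available via residuals()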

    Analogously, you can accommodate heteroscedasticity with t-tests (Welch's or Satterthwaite's approximation) in a way that wouldn't be possible with conventional ANOVA. But here again, with modern regression-modeling software, heteroscedasticity can be accommodated in an omnibus model. See here for some possibilities that came up recently on this list, if you want to look further into that.
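    A sketch along those lines, reusing the hypothetical outcome and group names from your post:

    Code:
        * unequal-variances t-test (Satterthwaite df; add the welch option for Welch df)
        ttest outcome if inlist(group, 1, 2), by(group) unequal

        * omnibus model with heteroscedasticity-robust standard errors,
        * followed by the two planned contrasts
        regress outcome i.group, vce(robust)
        contrast {group -1 1 0} {group -1 0 1}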

    2) should the p values obtained with the contrasts be corrected for multiplicity? As far as I'm concerned, pre-planned contrasts should NOT be corrected, but I have found a couple of authors who say otherwise.
    I'm not sure that preplanning or even postplanning has much to do with the decision whether to adjust for multiple comparisons. They seem orthogonal to me. (Once the decision is made, there are different adjustment approaches touted for so-called a priori and post hoc testing.)

    If Group A is a control treatment group, then you can avail yourself of more powerful approaches to multiple comparisons than Bonferroni's method, such as Dunnett's t. And even if not, there are other approaches to preserving power while "correcting for" multiplicity.
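    A minimal sketch, assuming Group A is coded as the base (control) level of the hypothetical group variable:

    Code:
        * Dunnett's t: each treatment group compared against the control
        anova outcome group
        pwcompare group, effects mcompare(dunnett)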

    Also, I am reminded of Bruce Weaver's bringing up on this list the proposal not to adjust for multiple contrasts in the context of Fisher's least significant difference (LSD) testing with only three groups once the omnibus test proved positive. See here for his discussion and some literature references relating to it.
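    With only three groups, that protected-LSD approach amounts to something like this (again with the hypothetical outcome and group names):

    Code:
        * Fisher's (protected) LSD: unadjusted pairwise comparisons,
        * examined only if the omnibus F test is significant
        anova outcome group
        pwcompare group, effects mcompare(noadjust)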



    • #3
      Originally posted by Joseph Coveney
      Also, I am reminded of Bruce Weaver's bringing up on this list the proposal not to adjust for multiple contrasts in the context of Fisher's least significant difference (LSD) testing with only three groups once the omnibus test proved positive. See here for his discussion and some literature references relating to it.
      Thanks Joseph Coveney, you beat me to it!
      --
      Bruce Weaver
      Email: [email protected]
      Version: Stata/MP 18.5 (Windows)
