  • Contrast tests in repeated-measures ANOVA

    Dear members of Stata forum,

    I need some help with contrast tests after fitting a repeated-measures ANOVA model in my research. I'm using Stata 17.

    I fitted a one-way repeated-measures ANOVA model by using the following code:

    Code:
    anova em bcorpalt / id|bcorpalt emtype, repeated(emtype)
    where my between-subject effect is bcorpalt and within-subject effect is emtype.

    The results show that the between-subject effect, bcorpalt, is significant. So after fitting the ANOVA model, I am figuring out how to perform post-hoc contrast tests. Right now, I have tried the following two commands:

    Code:
    contrast ar.emtype, effects
    Code:
    margins emtype, at(bcorpalt=1) pwcompare(effects) noestimcheck
    Any thoughts on which of the above commands is more appropriate in my case? I appreciate your help in advance.

    One more note. The within-subject effect, emtype, is also significant in my fitted model.

    Best,
    Jiahui

  • #2
    Originally posted by Jiahui Lu:
    Any thoughts on which of the above commands is more appropriate in my case?
    Well, the only difference between the two is that the second gives you pairwise comparisons between all levels of your repeated-measurements factor and the first gives a subset of those, namely, linearly independent adjacent contrasts ascending through levels of the factor. So, the answer to your question depends upon whether one or the other fits your research objective better.
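    For instance, assuming a three-level within-subject factor, the two requests would amount to the following sets of comparisons (a sketch, not your output):
    Code:
    contrast ar.emtype, effects         // adjacent contrasts only: 2 vs 1 and 3 vs 2
    margins emtype, pwcompare(effects)  // all pairwise comparisons: 2 vs 1, 3 vs 1, and 3 vs 2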

    A few observations:

    1. Your ANOVA model does not include a term for the bcorpalt × emtype interaction; with that term included, the command would look like
    Code:
    anova em bcorpalt / id|bcorpalt emtype bcorpalt#emtype
    That's not necessarily a problem if you have reason to believe that such an interaction of the two factors is not possible, or not of interest and not able to undermine your interpretation. But it does mean that you don't need the at(bcorpalt=1) option in your margins command—it won't have any effect.

    2. I recommend using the emptycells(reweight) option instead of noestimcheck in your margins command.

    3. If you have more than three levels of your within-subject factor, then you might want to consider alternatives to the noadjust default for the implied mcompare() option in both contrast and margins.
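    For example, one possible choice in that case (the specific adjustment method is a judgment call) would be
    Code:
    contrast ar.emtype, effects mcompare(bonferroni)
    margins emtype, pwcompare(effects) mcompare(sidak)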

    4. The adjustments that are appended to the output of anova as a result of the repeated() option are, I believe, not carried through into either of the postestimation commands, and so if you have substantial adjustments there, especially for the Huynh-Feldt or Greenhouse-Geisser epsilon, then you might want to consider fitting a MANOVA model to your data. (With an outcome variable named em and the within-group factor named emtype, it doesn't sound as if your repeated measurements are of a temporal nature, and so you probably don't have the autocorrelation that is the usual culprit here. Did your study design need to address the possibility of so-called learning effects, perhaps by balancing the order of exposure to the levels of the emtype factor between your subjects? If so, then it might be worth considering including the assigned sequence as another between-subjects factor.)
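    Should you look into that route, here is a minimal sketch of the MANOVA setup, assuming emtype takes the values 1, 2, and 3 and id identifies your subjects (the wide variable names em1-em3 are just what reshape would create):
    Code:
    preserve
    reshape wide em, i(id) j(emtype)    // one em column per level of emtype
    manova em1 em2 em3 = bcorpalt       // multivariate model with the between-subjects factor
    restore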

    I fitted a one-way repeated-measures ANOVA model
    5. I've seen the corresponding model (with the interaction term) more commonly described as a two-way factorial ANOVA with repeated measures on one factor.



    • #3
      Hi Joseph,

      Thanks for your observations with comments. You mentioned this
      ...the only difference between the two is that the second gives you pairwise comparisons between all levels of your repeated-measurements factor and the first gives a subset of those...
      We compared the mean of our outcome variable (i.e., em) between three levels of emtype, our within-subject factor. We're particularly interested in comparing group 2 with group 3. I played around with the code again. The contrast command and the margins command (without the at(bcorpalt=1) option) gave me two different but close p-values (one is 0.15 and the other is 0.17) and contrast values. Does this mean my contrast code is not right? Since our editor asked us to do contrasts, I am leaning more towards using the contrast command; I could still do the pairwise comparisons within the three groups by running other types of contrasts in Stata, such as contrast r.emtype. I believe it is the different results between the two options (contrast and margins) that threw me off.

      Also, for your comments,

      1. Yes, we don't include the interaction in our model because we don't see any difference between when the interaction term is included and when it is not. And yes, given that the interaction is not our focus, I agree that we should remove the at(bcorpalt=1) option from the margins command. Thanks for this suggestion!

      2. We are interested in testing the difference in means of em between the three levels of our within-subject factor, emtype. Could you explain more why you suggested replacing noestimcheck with emptycells(reweight) in the margins command?

      3. It looks like the mcompare() option adjusts for multiple comparisons. We have three levels of the within-subject factor, so are you suggesting the following command in the case of the contrast option?
      Code:
      contrast ar.emtype, effects mcompare()
      I thank you again for your help. I just saw your reply to my previous post in July. Sorry for the late reply to that post (I will check that later).



      • #4
        Originally posted by Jiahui Lu:
        I played around with the code again. The contrast command and the margins command (without the at(bcorpalt=1) option) gave me two different but close p-values (one is 0.15 and the other is 0.17) and contrast values. Does this mean my contrast code is not right?
        I don't know of any reason why they wouldn't give you identical values for the difference and its p-value for the contrast between em levels 2 and 3. (I did a quick check on a toy dataset: even with unbalanced data in the between-subjects as well as in the within-subjects factors, and even when specifying asbalanced and asobserved with each, I still got the same contrast value and p-value between the two postestimation commands.) I don't know what the problem there was, but you might want to look into it further.
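        For what it's worth, a minimal sketch of that kind of toy-data check; every variable name and value below is made up:
        Code:
        clear
        set seed 2022
        set obs 30                                  // 30 hypothetical subjects
        generate byte id = _n
        generate byte bcorpalt = 1 + mod(_n, 2)     // two between-subjects groups
        expand 3                                    // three measurements per subject
        bysort id: generate byte emtype = _n
        generate double em = 0.3*bcorpalt + 0.5*emtype + rnormal()
        anova em bcorpalt / id|bcorpalt emtype bcorpalt#emtype, repeated(emtype)
        contrast ar.emtype, effects                 // the 3 vs 2 row here should match ...
        margins emtype, pwcompare(effects)          // ... the 3 vs 2 row of these pairwise comparisons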

        Since our editor asked us to do contrasts, I am leaning more towards using the contrast command; I could still do the pairwise comparisons within the three groups by running other types of contrasts in Stata, such as contrast r.emtype. I believe it is the different results between the two options (contrast and margins) that threw me off.
        I don't know what your variables represent, but from the similarity of the names of the response variable em and the within-subject factor emtype, it sounds as if the latter is a qualitative characteristic of the responses that the ids emit, and not a treatment or intervention that is applied to the ids at discrete times or locations.

        That is, it's as if the ids emit, say, blue ems, red ems and green ems simultaneously and emtype is the variable in your regression model that you use in order to distinguish between these three types of em emitted from ids that are each subjected to one type or intensity level of the bcorpalt intervention. If that's anywhere close to your case, then just as a matter of preference or habit, I would have looked into MANOVA first instead of repeated-measures ANOVA with the repeated() option.

        Regardless, with the word type as part of the variable name, it sounds more likely that your emtype is nominal and not ordinal, and so in any case, contrast ar.emtype wouldn't be the contrast that I would use as a first choice without other considerations, as it really is more of interest for a predictor that has ascending (ordered) levels. Did you choose it because it gives you the contrast (group 3 versus group 2) that is of particular interest?
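        If that single comparison is the main target, a user-defined contrast is one way to request just group 3 versus group 2; a sketch, assuming emtype has exactly the levels 1, 2, and 3:
        Code:
        contrast {emtype 0 -1 1}, effects    // tests level 3 minus level 2 of emtype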

        Could you explain more why you suggested replacing noestimcheck with emptycells(reweight) in the margins command?
        Informally, out of caution about turning off checks, and because emptycells(reweight) is appropriate for making contrasts in at least some designs that involve nesting. See the section "Obtaining margins with nested designs" in the user's manual entry for margins.
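        Concretely, that would amount to something along these lines (keeping the pairwise request from before):
        Code:
        margins emtype, pwcompare(effects) emptycells(reweight)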

        We have three levels of the within-subject factor, so are you suggesting the following command in the case of the contrast option?
        My comment was for the situation where you have more than three levels of the factor. See Bruce Weaver's discussion and literature references in this thread for when you have three levels and are able to reject the null hypothesis for the omnibus (overall) test, which is your case.

        I just saw your reply to my previous post in July. Sorry for the late reply to that post (I will check that later).
        I'll have to go back and read what's there, hoping that I haven't contradicted myself.



        • #5
          Hi Joseph,

          I don't know of any reason why they wouldn't give you identical values for the difference and its p-value for the contrast between em levels 2 and 3. (I did a quick check on a toy dataset: even with unbalanced data in the between-subjects as well as in the within-subjects factors, and even when specifying asbalanced and asobserved with each, I still got the same contrast value and p-value between the two postestimation commands.) I don't know what the problem there was, but you might want to look into it further.
          I looked into this further and finally got the same p and contrast values by using both types of commands.

          That is, it's as if the ids emit, say, blue ems, red ems and green ems simultaneously and emtype is the variable in your regression model that you use in order to distinguish between these three types of em emitted from ids that are each subjected to one type or intensity level of the bcorpalt intervention. If that's anywhere close to your case, then just as a matter of preference or habit, I would have looked into MANOVA first instead of repeated-measures ANOVA with the repeated() option.
          Yes, I think your description is very close. So, I also looked into MANOVA and it looks like this model may not be appropriate for our data because the correlations among my dependent variables are not moderately strong.

          Regardless, with the word type as part of the variable name, it sounds more likely that your emtype is nominal and not ordinal, and so in any case, contrast ar.emtype wouldn't be the contrast that I would use as a first choice without other considerations, as it really is more of interest for a predictor that has ascending (ordered) levels. Did you choose it because it gives you the contrast (group 3 versus group 2) that is of particular interest?
          Yes, we chose contrast ar.emtype because the command gives us the contrast we're most interested in. So far I haven't found any other command (besides the contrast and margins commands) that would fulfill our goal...

          I thank you for your prompt help again!



          • #6
            Hello there,

            I think we figured out a way to produce the contrast we aim to test:

            Code:
            anova em bcorpalt / id|bcorpalt emtype bcorpalt#emtype, repeated(emtype)
            However, we are confused by one of the results produced by the above command, mainly because it contradicts the results we obtained previously, so we think some further confirmation of that result would be helpful. After running the anova command above, we ran the following contrast command and wanted to confirm whether the p-value produced by the following code is one-tailed or two-tailed.

            Code:
            contrast ar.bcorpalt@emtype, emptycells(reweight) effects mcompare()
            For the first command, the between-subject effect is bcorpalt (2 treatment groups) and within-subject effect is emtype (3 levels).

            Thank you very much for your help.




            • #7
              Originally posted by Jiahui Lu:
              we . . . wanted to confirm whether the p-value produced by the following code is one-tailed or two-tailed.
              The header over the p-values is labeled "P>|t|". That signifies a two-tailed test.
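              If a one-sided p-value were actually wanted, it could be recovered from the reported t statistic and its residual degrees of freedom; a sketch with purely illustrative numbers:
              Code:
              display ttail(28, 2.05)           // one-sided p for t = 2.05 on 28 df, effect in the hypothesized direction
              display 2*ttail(28, abs(2.05))    // the two-sided P>|t| that the output reports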



              • #8
                Thanks for your reply, Joseph. It's useful information. Based on the results I have, I'm figuring out a repeated-measures ANCOVA model for the same research question. I'm still mostly interested in the contrast comparing the effect of bcorpalt at each level of emtype. I want to add three covariates to my repeated-measures ANOVA model: meetimport, csrimport, and experience. I read some of your previous posts on repeated-measures ANCOVA (https://www.statalist.org/forums/for...easures-ancova) and ran the following repeated-measures ANCOVA command.

                Code:
                anova em meetimport csrimport bcorpalt / id|bcorpalt meetimport csrimport emtype bcorpalt#emtype, repeated(emtype) bse(id | bcorpalt)
                My first question is whether the above command fits the ANCOVA model I aim for; that is, an ANCOVA model with the between-subject effect bcorpalt (2 treatment groups) and the within-subject effect emtype (3 levels), while controlling for meetimport, csrimport, and experience.

                Since I am interested in testing the effect of bcorpalt at each level of emtype, I ran the following contrast command, as I did for my repeated-measures ANOVA model, right after fitting the repeated-measures ANCOVA.

                Code:
                contrast ar.bcorpalt@emtype, emptycells(reweight) effects mcompare()
                Unfortunately, the command does not return usable results: it shows "not testable" for all the contrasts I requested.

                Then, I tried some margins commands, and the following one gave me results.

                Code:
                margins r.bcorpalt, contrast at(emtype=(1/3)) noestimcheck
                Thus, my second question is whether the above command addresses my test goal, which is testing the effect of bcorpalt at each level of emtype.

                Many thanks!



                • #9
                  Originally posted by Jiahui Lu:
                  My first question is whether the above command fits the ANCOVA model I aim for; that is, an ANCOVA model with the between-subject effect bcorpalt (2 treatment groups) and the within-subject effect emtype (3 levels), while controlling for meetimport, csrimport, and experience.
                  I don't really understand your model, for example, the two new predictors and why they're in your model twice, before and after the between-subjects error term for bcorpalt.

                  More important, it looks as if your research question is of an economic nature (csrimport, meetimport), and so you might want to use modeling approaches more like what your colleagues would use under these circumstances instead of repeated-measures ANCOVA.

                  my second question is whether the above command addresses my test goal, which is testing the effect of bcorpalt at each level of emtype.
                  Because I don't really understand your model, I would not hazard a guess as to whether a corresponding postestimation command is doing what you hope it does. Again, if this is an economic research question, then maybe the methods that economists use under these circumstances will better accommodate the issues that you face.



                  • #10
                    Hi Joseph,

                    I don't really understand your model, for example, the two new predictors and why they're in your model twice, before and after the between-subjects error term for bcorpalt.
                    Yes, that's my confusion as well. I guess my question is where to put my covariates in my repeated-measures ANCOVA model (did I put them in the right place in the command?). I fitted the original repeated-measures ANCOVA based on the following command, which I found in an older post on the Stata forum. It could be that things have changed since that post was written.
                    Code:
                    anova y x_sum a / individual | a x b a*b, continuous(x x_sum) ///
                        repeated(b) bse(individual | a)
                    And if I delete the covariates that come after the between-subjects error term (the command below), the results are the same as those from my original ANCOVA command. Does the following one make a bit more sense?
                    Code:
                    anova em meetimport csrimport experience bcorpalt / id|bcorpalt emtype bcorpalt#emtype, repeated(emtype) bse(id | bcorpalt)
                    More important, it looks as if your research question is of an economic nature (csrimport, meetimport), and so you might want to use modeling approaches more like what your colleagues would use under these circumstances instead of repeated-measures ANCOVA.
                    I think our research question is not of an economic nature. csrimport and meetimport are more about individuals' perceptions of some behaviors in organizations.

                    Nevertheless, I do have confidence in the code (the following one) for a repeated-measures ANOVA with one between-subjects factor and one within-subjects factor. I think there is a way to add covariates to it, but I'm not 100% sure of the code I came up with. Again, I appreciate your attention.
                    Code:
                    anova em bcorpalt / id|bcorpalt emtype bcorpalt#emtype, repeated(emtype)

