  • Effect size with 4 treatments

    Hi there,

    Hope you are well. I need help understanding the best way to calculate an effect size for my data. I have 320 observations and 4 treatments. I have never done this before, but I need to do it in response to a reviewer's comment. I have come across several ways to calculate an effect size. The first method I have used is:

    power oneway, n(320) power(0.8) ngroups(4)

    Another method I read about was the command esize. However, I am unable to use it for my data, since I have 4 treatments. Yet another method is to calculate the standardized (or unstandardized) difference between two means (e.g., of the outcome variable) across treatments.

    I am not sure what methodology and command will be the most appropriate to use in this context. I will be grateful if you have any suggestions for me.

  • #2
    Just want to add that I have found another command: power twomeans m1



    • #3
      The -power- command is not what you should be after. It is used before a study is conducted, to decide the sample size for a given minimal clinically important difference (MCID) or effect size. You need to be clear about what it is you are after. If you are concerned with the effect size of a treatment between two groups, then a d-family effect size such as Cohen's d or Hedges's g is appropriate. If you are interested in the overall variance explained by the treatments, then an r-family effect size such as eta-squared is appropriate. There are different ways to calculate them. Whichever you go for will depend on the hypotheses in question, but -power- should certainly be off your list. See the link for different effect sizes.
      Roman
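As a rough numerical illustration of the two effect-size families described above, here is a Python sketch (not Stata code, and using made-up summary numbers rather than the poster's data):

```python
# Illustrative only: the d family (standardized mean difference) and the
# r family (proportion of variance explained), from hypothetical numbers.

import math

# --- d family: standardized difference between two group means ---
m1, m2 = 10.0, 12.0          # hypothetical group means
s1, s2 = 4.0, 5.0            # hypothetical group SDs
n1, n2 = 80, 80              # hypothetical group sizes
s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
cohens_d = (m2 - m1) / s_pooled

# --- r family: proportion of total variance explained by treatment ---
ss_between, ss_total = 50.0, 500.0   # hypothetical ANOVA sums of squares
eta_squared = ss_between / ss_total

print(round(cohens_d, 3))    # -> 0.442
print(round(eta_squared, 3)) # -> 0.1
```

The point of the sketch is only the two formulas: d standardizes a difference between two means by the pooled SD, while eta-squared is a ratio of sums of squares across all treatments at once.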



      • #4
        Originally posted by Anwesha Bandyopadhyay
        The first method I have used is:

        power oneway, n(320) power(0.8) ngroups(4)
        Why isn't that one satisfactory? If the referee didn't specify any particular effect-size measure, then there's no need to overthink the referee's intention.

        Another method I read was using the command esize. However, I am unable to use it for my data since I have 4 treatments.
        Yes, you can. You don't show what command you used to fit the model, but the syntax below shows how to obtain esize effect-size estimates after both anova and regress.
        Code:
        version 18.0
        
        clear *
        
        // seedem
        set seed 326052746
        
        quietly set obs 320
        generate byte trt = mod(_n, 4)
        
        generate double out = rnormal()
        
        *
        * Begin here
        *
        
        /* If you used -anova- */
        anova out trt
        
        // Syntax for -power oneway-
        scalar define Var = e(rss) / e(df_r)
        power oneway, n(320) power(0.9) ngroups(4) varerror(`=Var')
        
        // Syntax for -esize-
        esizei `e(df_1)' `e(df_r)' `e(F_1)'
        
        /* If you used -regress- */
        regress out i.trt
        
        // Syntax for -power oneway- is the same as for -anova-
        scalar define Var = e(rss) / e(df_r)
        power oneway, n(320) power(0.9) ngroups(4) varerror(`=Var')
        
        // Syntax for -esize-
        testparm i.trt
        esizei `r(df)' `r(df_r)' `r(F)'
        
        exit



        • #5
          Just to clarify: what I wrote in #3 might look as if it conflicts with #4, but it doesn't; we replied from different contexts. I was referring to the -power- command as you used it in #1 and #2, which does not use your data and can only be used in the design phase of a study. The effect sizes estimated from those commands are not effect sizes from your data. Joseph's example in #4 with the -power- command uses observed data, and -power- can be used that way to calculate an effect size.
          Roman



          • #6
            Originally posted by Roman Mostazir
            Just to clarify: what I wrote in #3 might look as if it conflicts with #4, but it doesn't; we replied from different contexts.
            Agreed.

            The documentation in the user's manual under the entry for power oneway (Stata Power, Precision, and Sample-Size Reference Manual Release 18, pp. 359–60) even states that the delta estimate that you get from the command used in this context "corresponds to Cohen’s effect-size measure 𝑓".

            Coincidentally, this same effect-size measure, Cohen's 𝑓 (actually, its square, Cohen's 𝑓²), came up on the list just yesterday in the context of its extension to hierarchical / multilevel linear models.



            • #7
              Roman Mostazir and Joseph Coveney Thank you so much for your help. Your advice is extremely helpful. Take care.



              • #8
                Originally posted by Anwesha Bandyopadhyay
                Thank you so much for your help.
                On further thought, I'll contravene my own advice about overthinking the referee's intention: I suppose that, with an N of 320, your manuscript already discussed the sample size and power considerations that went into your study and so the referee is asking for the realized effect size and not the minimum detectable effect size that power oneway gives.

                So, instead, try the approach illustrated below. (I show it for anova, but it works identically after the corresponding regress syntax.)
                Code:
                version 18.0
                
                clear *
                
                // seedem
                set seed 1047599493
                
                quietly set obs 320
                generate byte trt = mod(_n, 4)
                
                generate double out = 3.trt + rnormal()
                
                *
                * Begin here
                *
                
                anova out trt
                
                // Minimum effect size detectable
                scalar define Var = e(rss) / e(df_r)
                power oneway, n(320) power(0.9) ngroups(4) varerror(`=Var')
                display in smcl as text "Minimum effect size (as Cohen's f²) = " ///
                    as result %05.3f r(delta)^2
                
                // Realized effect size for the model
                estat esize
                
                foreach estimate in eta2 lb_eta2 ub_eta2 {
                    local f2_estimate : subinstr local estimate "eta2" "f2"
                    tempname `f2_estimate'
                    scalar define ``f2_estimate'' = r(esize)["Model", "`estimate'"]
                    scalar define ``f2_estimate'' = ``f2_estimate'' / (1 - ``f2_estimate'')
                }
                display in smcl as text "Cohen's f² [`c(level)'% conf. interval] = " ///
                    as result %05.3f `f2' " ["  %05.3f `lb_f2' ", "  %05.3f `ub_f2' "]"
                
                exit
                It shows first the minimum detectable effect size given the observed residual variance (which is what I showed at first above in #4), and then the realized effect sizes for the overall ANOVA model, both from the official estat esize command and from its transformation to Cohen's f². I think now that one or the other of these might be what your referee is after.
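The loop in the code above converts η² to Cohen's f² via f² = η² / (1 − η²). As a quick, Stata-independent sanity check of that formula, here is a small Python sketch with arbitrary illustrative η² values (the 0.01 / 0.06 / 0.14 values are Cohen's conventional small / medium / large benchmarks for η², not anything from the poster's data):

```python
# The eta-squared to Cohen's f-squared conversion used in the Stata code:
# f2 = eta2 / (1 - eta2). Values below are illustrative only.

def eta2_to_f2(eta2: float) -> float:
    return eta2 / (1.0 - eta2)

for eta2 in (0.01, 0.06, 0.14):
    print(round(eta2_to_f2(eta2), 4))
# prints 0.0101, 0.0638, 0.1628
```

These come out near Cohen's f benchmarks of 0.1, 0.25, and 0.4 once you take the square root, which is one way to check the transformation is doing what it should.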



                • #9
                  Joseph Coveney Thank you so much. Unfortunately, the second part of the command is giving me an "invalid syntax" error message. Could you kindly advise me on that? My sincere apologies for bothering you. The only change I made to the set of commands was to use the regress command instead of anova.
                  Last edited by Anwesha Bandyopadhyay; 31 Oct 2024, 11:01.



                  • #10
                    Originally posted by Anwesha Bandyopadhyay
                    . . . the second part of the command is giving me an error message of invalid syntax. Could you kindly advise me on that? . . . The only change I made in the set of commands was to use regress command instead of anova.
                    I attach a do-file and corresponding log file where I substitute the corresponding regress for anova and show that everything, including the second part, works without any error.

                    You don't show anything (no data, no code, no output showing the error message), and so I cannot advise further other than to suggest copying and pasting the code more carefully: the second part does have nested local macro / temporary names that can get tricky if you don't pay close attention to the markings.



                    • #11
                      Joseph Coveney Thanks a lot. Next time, I will show the error message and output. My apologies for that.
