
  • Sample Size Calculation

    I need help with a sample size calculation:

    I am conducting a single-arm clinical study in which we will treat patients with diabetic retinopathy and measure their severity scores pre- and post-treatment. We will analyze whether each patient had a step down in disease severity (therefore, a dichotomous outcome: yes/no).

    I know from past studies of a similar treatment that I should expect 41% of patients to have a decrease in their disease severity scores if the treatment is effective. I am aiming for a type I error of 0.05, 90% power, and a 10% drop-out rate.

    I have read a lot about paired proportions and marginal proportions, but I am not sure whether what I have presented here from the past literature is enough to compute this.

    Appreciate any help.

  • #2
    I don't understand the problem.



    • #3
      I'm trying to figure out the best sample size formula to use in this particular scenario to estimate the number of patients needed to demonstrate a clinical effect if there is one.



      • #4
        I’ll start by saying I don’t really understand the context of what you’re doing so I’ll suggest some general points to consider.

        Clinical efficacy is a question of comparison to a control, and you don't have that in a single-arm study.

        That said, you mention measuring severity scores, which might be analyzable as a continuous or ordinal measure. I can't answer that for you, but it's worth considering, since either would have greater power than any dichotomized version of change.

        There's another issue to consider here: what do you mean by power? More concretely, what is the specific hypothesis you wish to test? Commonly one seeks power to test a difference from some null value of no effect. Here that would be 0% changed, which would be completely uninteresting, in my opinion, since some people are likely to change by chance alone. So you could decide that you need some assurance that the percentage changed is at a minimum some value x%, or you might decide what precision you need your estimate to have (that is, how wide your confidence interval may be). Either of these would be directions I would consider, but again, I don't really know what's appropriate for you.
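        To make the precision route concrete, here is a minimal Python sketch using the usual Wald-interval sizing formula n = z² p(1 − p) / w². The 41% anticipated proportion and the ±10-percentage-point half-width below are placeholder assumptions for illustration, not recommendations for this study:

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_for_ci_halfwidth(p, halfwidth, level=0.95):
    """Sample size so a Wald CI for a proportion p has a given half-width.
    Illustrative only; p and halfwidth are assumptions, not study advice."""
    z = NormalDist().inv_cdf(1 - (1 - level) / 2)
    return ceil(z**2 * p * (1 - p) / halfwidth**2)

# e.g. anticipating ~41% responders and wanting a 95% CI no wider
# than +/- 10 percentage points:
n = n_for_ci_halfwidth(0.41, 0.10)
```

        A narrower target half-width shrinks quadratically in cost: halving the half-width roughly quadruples the required n.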



        • #5
          Originally posted by has mas:
          I know from past studies with a similar treatment that I should expect 41% of patients to have a decrease in their disease severity scores if the treatment is effective. I am shooting for allowances of: type I error of 0.05, 90% power, and 10% drop-out rate.
          You don't say what the expected rate of severity-score decrease is when the treatment is ineffective, but let's say for the sake of argument that it's one-quarter of the patients. Then you could do something like the following.
          Code:
          power oneproportion 0.25 0.41, nfractional power(0.9) onesided continuity
          You could also use simulation, which would give you access to alternative methods of constructing confidence bounds. In the example below, I've used the more conservative of those available.
          Code:
          version 18.0
          
          clear *
          
          // seedem
          set seed 496646635
          
          // Direct
          power oneproportion 0.25 0.41, nfractional power(0.9) onesided continuity
          
          // Via simulation (conservative Clopper-Pearson CI)
          program define generatEm, rclass
              version 18.0
              syntax , [n(integer 77) p0(real 0.25)]
              
              tempname successes
              scalar define `successes' = rbinomial(`n', 0.41)
              
              quietly cii proportion `n' `successes', level(90)
              return scalar positive = `p0' <= r(lb)
          end
          
          program define simulatEm
              version 18.0
              syntax , [Ens(string) Reps(integer 3000) Opts(passthru)]
          
              foreach n of numlist `ens' {
          
                  quietly simulate positive = r(positive), reps(`reps'): ///
                      generatEm , n(`n') `opts'
          
                  assert !missing(positive)
                  summarize positive, meanonly
                  display in smcl as text _newline(1) "N = `n' Power = " ///
                      as result %04.2f r(mean)
              }
          end
          
          simulatEm , e(60(10)90)
          
          exit
          The official Stata power command gives a sample size of 76.5305 for 90% power and a one-sided Type I error rate of 5%. The simulation approach above broadly agrees, as the following output shows:

          N = 60 Power = 0.78

          N = 70 Power = 0.84

          N = 80 Power = 0.92

          N = 90 Power = 0.94
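          As a side note, the Clopper-Pearson lower-bound criterion in the simulation is equivalent to a one-sided exact binomial test, so its power can also be computed exactly, with no simulation noise. A hedged Python sketch of that idea (the function names are mine, not from any package):

```python
from math import comb

def upper_tail(n, p, c):
    """P(X >= c) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c, n + 1))

def exact_power(n, p0=0.25, pa=0.41, alpha=0.05):
    """Power of the one-sided exact binomial test of H0: p <= p0."""
    # smallest rejection threshold c whose size under p0 is at most alpha
    c = next(c for c in range(n + 2) if upper_tail(n, p0, c) <= alpha)
    # power = probability of reaching that threshold under the alternative
    return upper_tail(n, pa, c)

for n in (60, 70, 80, 90):
    print(n, round(exact_power(n), 2))
```

          Because the exact test is conservative, these power values should track the simulated Clopper-Pearson figures above fairly closely.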


          Be sure to check my computations, but I get a sample size of 76 and change, and with a 10% dropout rate the study would call for about 85 patients.
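          For anyone wanting to check outside Stata: the textbook normal-approximation formula with a Fleiss-style continuity correction reproduces the 76.53 figure. The Python sketch below is my own reconstruction (I'm assuming, not asserting, that it matches what `power oneproportion` does internally), with the 10% dropout inflation applied as n / (1 − 0.10):

```python
from math import sqrt
from statistics import NormalDist

def n_oneproportion(p0, pa, alpha=0.05, power=0.90, continuity=True):
    """Normal-approximation sample size for a one-sided one-sample
    proportion test; reconstruction of the standard formula, assumed
    (not verified) to mirror Stata's power oneproportion."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha), z(power)
    d = abs(pa - p0)
    n = (za * sqrt(p0 * (1 - p0)) + zb * sqrt(pa * (1 - pa)))**2 / d**2
    if continuity:
        # Fleiss-style continuity correction
        n = n / 4 * (1 + sqrt(1 + 2 / (n * d)))**2
    return n

n = n_oneproportion(0.25, 0.41)   # ~76.53, matching the Stata result
n_enrolled = n / (1 - 0.10)       # inflate for 10% dropout, ~85 patients
```

          Note the dropout inflation divides by (1 − dropout) rather than multiplying by (1 + dropout), so the completers still number at least n.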



          • #6
            This is exactly what I was looking for. Thank you!!
