
  • risk ratio versus risk difference in meta-analysis

    Hi,
    I conducted a meta-analysis of two RCTs using a random-effects model (DerSimonian-Laird) in Stata 16.
    The summary relative risk was 4.35 (95% CI: 0.26 to 75.10; p = 0.307), so the result was not statistically significant.
    Using the same data, when I repeated the meta-analysis with risk difference (instead of relative risk) as the summary effect-size measure, the result became statistically significant: RD = 0.443 (95% CI: 0.053 to 0.833; p = 0.026).
    In effect, depending on the type of summary estimate, the results from the same data are either statistically significant or non-significant.
    I would appreciate it if someone could explain the reason behind these diverging results. I would also be grateful for advice on which measure should be used to present the results of this meta-analysis. I am enclosing the Stata dataset used for the analysis.
    Regards
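
    For reference, the two analyses can be run along these lines with the community-contributed metan command (a minimal sketch; the variable names are hypothetical stand-ins for the enclosed dataset, and help metan has the exact syntax):

        * Binary outcomes: metan takes events/non-events counts for each arm
        * ssc install metan          // install once, if needed
        metan ev_treat nonev_treat ev_ctrl nonev_ctrl, rr random    // summary risk ratio, DerSimonian-Laird
        metan ev_treat nonev_treat ev_ctrl nonev_ctrl, rd random    // summary risk difference, DerSimonian-Laird

    Stata 16's built-in meta suite should give equivalent results (again a sketch from memory; check help meta esize):

        meta esize ev_treat nonev_treat ev_ctrl nonev_ctrl, esize(lnrratio) random(dlaird)
        meta summarize
        meta esize ev_treat nonev_treat ev_ctrl nonev_ctrl, esize(rdiff) random(dlaird)
        meta summarize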

  • #2
    Just to clarify the above post: when relative risk was used, the results were not statistically significant; when risk difference was used, the results were statistically significant.



    • #3
      The short answer is that data can be analysed in many different ways, and each analysis choice may lead to different numerical results with different associated confidence intervals. In the context of RRs vs RDs, you would expect the results to be consistent, although there is no reason that both should be statistically significant or non-significant at a particular level. And indeed, you have found an RR of 4.35 and an RD of 0.443, which are both large effect sizes in the same direction. I also notice that both of your studies are small, which makes this sort of thing more likely to happen.
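
      To make this concrete, here is a toy single-study illustration (made-up counts, simple Wald intervals) in which the risk-ratio CI crosses 1 while the risk-difference CI excludes 0:

          * Hypothetical 2x2 table: 5/10 events on treatment vs 1/10 on control
          scalar a = 5                                            // events, treatment
          scalar b = 5                                            // non-events, treatment
          scalar c = 1                                            // events, control
          scalar d = 9                                            // non-events, control
          scalar rr = (a/(a+b)) / (c/(c+d))
          scalar se_lnrr = sqrt(1/a - 1/(a+b) + 1/c - 1/(c+d))    // SE on the log scale
          scalar rd = a/(a+b) - c/(c+d)
          scalar se_rd = sqrt(a*b/(a+b)^3 + c*d/(c+d)^3)          // SE on the absolute scale
          display "RR = " rr ", 95% CI " (exp(ln(rr) - 1.96*se_lnrr)) " to " (exp(ln(rr) + 1.96*se_lnrr))
          display "RD = " rd ", 95% CI " (rd - 1.96*se_rd) " to " (rd + 1.96*se_rd)

      This gives RR = 5 with CI roughly 0.70 to 35.5 (not significant) and RD = 0.4 with CI roughly 0.04 to 0.76 (significant). The estimates agree in direction, but the standard errors live on different scales, so the two intervals need not agree about significance, particularly with small samples.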

      The technical term for a specific choice of thing to measure is "estimand". For instance, RR is a relative measure, whereas RD is an absolute measure, and technically they are telling you different things about your data. Ideally, prior to analysis you would choose which estimand to prioritise in your report and interpretation.

      Finally, I would just note that, with only two studies, a random-effects model may not be suitable, as theoretically it is estimating the parameters of a Normal distribution of underlying study effects, which is tricky with only two observations! Personally, I would not choose a random-effects model in this scenario, although I am aware it is often done.
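
      As a rough illustration of why two studies are awkward here, the DerSimonian-Laird between-study variance can be computed by hand; with only one degree of freedom for Cochran's Q, the tau-squared estimate tends to come out either exactly zero or very large (made-up numbers below):

          * DerSimonian-Laird tau2 for two hypothetical studies (log-RR scale)
          * tau2 = max(0, (Q - (k-1)) / (sum of w - sum of w^2 / sum of w))
          scalar y1 = 0.8
          scalar se1 = 0.9
          scalar y2 = 2.2
          scalar se2 = 1.1
          scalar w1 = 1/se1^2
          scalar w2 = 1/se2^2
          scalar ybar = (w1*y1 + w2*y2) / (w1 + w2)        // fixed-effect pooled estimate
          scalar Q = w1*(y1 - ybar)^2 + w2*(y2 - ybar)^2   // Cochran's Q, 1 df with k = 2
          scalar tau2 = max(0, (Q - 1) / ((w1 + w2) - (w1^2 + w2^2)/(w1 + w2)))
          display "Q = " Q "   tau2 = " tau2               // tau2 = 0 for these inputs

      When tau2 is truncated to zero, the random-effects result coincides with the fixed-effect one; a small change to the two estimates can instead produce a very large tau2, which is the instability in question.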

      I hope this is helpful.
      BW,
      David.




      • #4
        Hi David,
        Thank you for the response and the nice explanation. Interestingly, when I tried a fixed-effects model, both RR and RD came up as statistically significant. In the past, whenever we have used a fixed-effects model, peer reviewers have criticized us heavily, especially when there was significant statistical heterogeneity. I will read more about these models to ensure the appropriate one is used.
        Best regards
        Shripad
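
        The fixed- vs random-effects comparison described above might look like this in metan (hypothetical variable names again; metan's fixed option uses Mantel-Haenszel weighting for binary data, if I recall correctly):

            metan ev_treat nonev_treat ev_ctrl nonev_ctrl, rd fixed     // fixed-effect summary risk difference
            metan ev_treat nonev_treat ev_ctrl nonev_ctrl, rd random    // DerSimonian-Laird random effects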
