  • Question regarding confidence intervals when running meta analysis in STATA

    Hi all,

    I have been trying to run a meta-analysis in Stata. I entered all the effect estimates (hazard ratios) and their exact 95% CIs into an Excel spreadsheet, imported the data into Stata, and have been running commands on the data successfully.

    However, I have one problem. For confidence intervals that are not symmetric, I cannot get the exact confidence intervals from the respective papers to match the confidence intervals produced by
    Code:
    meta summarize
    When rounding the confidence intervals from meta summarize to two decimal places, some are off by 0.01 and some by 0.02. I feel very uncomfortable with this. I have tried various ways of meta setting the data, including taking logs of the hazard ratios and 95% CIs (and then transforming back using eform), as well as declaring the (log) confidence intervals instead of the standard error when meta setting.

    Is there a way to force Stata to reproduce the exact confidence intervals reported in the respective papers?

    Any advice would be much appreciated.

    Thanks, Oliver
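The mismatch described above is what you would expect from rounding: if a paper computes a symmetric CI on the log scale and then rounds the back-transformed limits to two decimal places, the exact standard error can no longer be recovered from the published limits. A minimal sketch of this effect, using an invented "true" standard error (these numbers are not from any real paper):

```python
from math import exp, log
from statistics import NormalDist

# Invented illustration: a study's "true" log-HR and standard error
true_loghr = log(0.41)
true_se = 0.51
z = NormalDist().inv_cdf(0.975)  # normal critical value, ~1.96

# What the paper reports: the exact symmetric CI on the log scale,
# back-transformed and rounded to 2 decimal places for publication
reported_lo = round(exp(true_loghr - z * true_se), 2)
reported_hi = round(exp(true_loghr + z * true_se), 2)

# Back-calculating the SE from the rounded limits does not recover
# true_se exactly, so no way of declaring the data can reproduce
# the published interval to the last digit
se_back = (log(reported_hi) - log(reported_lo)) / (2 * z)
print(reported_lo, reported_hi)  # 0.15 1.11
```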

  • #2
    Oliver, hi.

    If you want to plot the hazard ratios with no interest in the summary estimate, you can use:

    Code:
    meta set loghr lower upper, civartolerance(1) studylabel(whatever)
    where loghr denotes the log hazard ratio, and lower and upper the lower and upper bounds of the 95% CI (everything should be on the log scale). The summary estimate should be ignored.

    If you want to use those estimates to obtain a meaningful summary estimate, the limits should be symmetric and the option civartolerance(1) should be avoided at all costs.

    I am unsure why you have asymmetric bounds for the hazard ratio. Data extraction issues? If you cannot find the source for that issue, you can calculate a conservative estimate for the standard error as
    Code:
    gen seloghr = (upper-lower)/(2*invnorm(0.975))
    and use

    Code:
    meta set loghr seloghr
    to ensure the 95% limits are symmetric.
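The arithmetic behind this symmetrised standard error can be sketched outside Stata; the HR of 0.41 with 95% CI 0.15-1.09 used below is the example discussed later in this thread:

```python
from math import exp, log
from statistics import NormalDist

# Example from this thread: HR 0.41, reported 95% CI 0.15 to 1.09
hr, lower, upper = 0.41, 0.15, 1.09

z = NormalDist().inv_cdf(0.975)  # same critical value as invnorm(0.975)

# Conservative SE of the log-HR: full CI width on the log scale,
# divided by twice the normal critical value
seloghr = (log(upper) - log(lower)) / (2 * z)

# Symmetric 95% CI implied by that SE, back-transformed to the HR scale
lo = exp(log(hr) - z * seloghr)
hi = exp(log(hr) + z * seloghr)
print(round(lo, 2), round(hi, 2))  # 0.15 1.11
```

The recomputed interval is slightly wider than the reported one (1.11 versus 1.09 at the upper end), which is the conservative behaviour described above.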



    • #3
      Originally posted by Tiago Pereira
      Hi Tiago,

      Thank you very much for your reply, I appreciate it. Even when using civartolerance(1), the CIs do not end up symmetric when I run meta summarize.

      One of the HRs I have, for example, is 0.41, 95% CI 0.15-1.09. When I use your lower bit of code to meta set (which I had already tried), the resulting 95% CI once I run meta summarize is 0.15-1.11, so the upper limit is off by 0.02. Further, the CI still isn't symmetric around the effect estimate.

      Is this the best we can do? Is it acceptable to present the 'new' CI (0.15-1.11) in a paper or in my thesis, given that the upper limit is off by 0.02?

      Thanks again for your help, Oliver



      • #4
        Would you like to know the summary estimate, or are you primarily interested in visualizing the estimate through plotting?



        • #5
          Originally posted by Tiago Pereira
          Hi Tiago,

          I need the summary estimate. The purpose of the meta-analysis is to summarise the observational studies in my field, but as I said, I am quite uncomfortable with the resulting CIs being off by 0.01/0.02. Someone at my university suggested using the t distribution instead of the normal distribution, with the resulting code being meta summarize, tdistribution. However, I don't know whether this will reproduce the exact CIs from the original research either.

          Again, I would appreciate your thoughts.

          Oliver



          • #6
            I see. Approximate the standard error of the log-HR as suggested above. If the final 95% confidence interval is larger than the original one, your approach will be conservative and will penalize studies that did not report their estimates with enough precision (i.e., they will contribute slightly less because the approximate standard error is slightly larger).

            The -tdistribution- option only applies to the summary estimate. The 95% CIs for the primary studies are still calculated using the normal distribution.



            • #7
              Originally posted by Tiago Pereira
              Thanks again. In the meta-analysis literature, is it acceptable for studies contributing to the summary estimate to have confidence limits that are off by 0.01/0.02? Should I report the 'new' conservative confidence limits in a paper/in my thesis, as opposed to the confidence intervals taken directly from the papers?



              • #8
                You can try to get more precise 95% CIs if you have P-values or log-rank tests. If you have only 0.41, 95% CI 0.15-1.09, then it is fine to approximate the standard error of the log-HR using a normal distribution and report the recalculated 95% CI as 0.15-1.11. It is an approximation error, and it is good enough, provided that your recalculated estimates are slightly more conservative.



                • #9
                  Originally posted by Tiago Pereira
                  Thanks again. Would I need p-values for every single study contributing to the summary estimate for them to be useful? What would the code be to use p-values as well?

                  Oliver



                  • #10
                    If you have a two-sided P-value from a Cox model or another model that assumes a normal distribution for the coefficients, and it is reported with sufficient precision (e.g., P = 0.08), you can approximate the standard error of the log-hazard ratio using the inverse normal distribution as follows:


                    Code:
                    gen seloghr = abs(ln(hr)/invnorm(1-p/2))
                    where hr is the hazard ratio estimate and p denotes the two-sided p-value.
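A quick sketch of this back-calculation outside Stata (the p-value below is invented for illustration; none was reported in the thread):

```python
from math import log
from statistics import NormalDist

# Hypothetical example: HR 0.41 with a two-sided Wald P = 0.07
# (this p-value is invented for illustration, not from any study)
hr, p = 0.41, 0.07

nd = NormalDist()
# SE of the log-HR back-calculated from the p-value,
# mirroring abs(ln(hr)/invnorm(1-p/2)) in Stata
seloghr = abs(log(hr) / nd.inv_cdf(1 - p / 2))

# Sanity check: this SE reproduces the p-value it came from
p_check = 2 * (1 - nd.cdf(abs(log(hr)) / seloghr))
print(round(seloghr, 3))
```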
                    Last edited by Tiago Pereira; 05 Jul 2023, 09:30.
