
  • Different results for meta-analysis with built-in meta and older package 'metan'

    Hello,

    I am having difficulties performing a meta-analysis. I have run a meta-analysis of risk ratios with the built-in meta suite and with the older package 'metan', and the results are different. In both cases I use a random-effects model with the DerSimonian-Laird estimator of the between-study variance. Here are my scripts:

    1) 'metan' package:

    Code:
    meta summarize, random(dlaird) invvariance
    gen lnor = ln(RR)
    gen lnuci = ln(Upperlimit95)
    gen lnlci = ln(Lowerlimit95)
    metan lnor lnlci lnuci, eform effect(RR) random lcols(Publication)

    2) built-in meta-analysis in Stata 16.0

    Code:
    meta set RR SE, random(dlaird) studylabel(Publication) studysize(Nsize)
    meta summarize, random(dlaird)

    (If I use lnRR and lnSE, Stata does not perform the analysis because the values aren't positive.)

    I can see from the results that the weights are different.

    If I run the built-in meta-analysis with e^RR and e^SE, the pooled estimate is nearly the same, but the confidence intervals are very wide, with very different weights for each study.

    So my question is: which result is correct? And how can I perform the built-in meta-analysis with risk ratios in Stata 16.0 in the right way? I can't find any examples in the Stata guides.
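With random(dlaird), both metan and meta summarize implement the same DerSimonian-Laird computation on the log scale, so they should agree when given the same log-scale inputs. A minimal sketch of that computation in Python, using made-up study values rather than the poster's data:

```python
import math

# Hypothetical studies: (RR, lower 95% limit, upper 95% limit).
studies = [(0.99, 0.89, 1.11), (0.89, 0.79, 1.00), (1.08, 0.93, 1.26)]

# Work on the log scale, as both metan and meta expect for ratio measures.
y = [math.log(rr) for rr, lo, hi in studies]                          # log-RR
se = [(math.log(hi) - math.log(lo)) / (2 * 1.959964) for _, lo, hi in studies]
w = [1 / s**2 for s in se]                                            # inverse-variance weights

# DerSimonian-Laird estimate of the between-study variance tau^2.
ybar_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - ybar_fe) ** 2 for wi, yi in zip(w, y))             # Cochran's Q
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c)

# Random-effects weights, pooled log-RR, and back-transformation to RR.
w_re = [1 / (s**2 + tau2) for s in se]
ybar_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))
pooled_rr = math.exp(ybar_re)
ci = (math.exp(ybar_re - 1.959964 * se_re), math.exp(ybar_re + 1.959964 * se_re))
```

Any remaining differences between the two Stata commands then come down to what was passed in, not to the pooling method itself.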
  • #2
    Are you certain that the standard error SE in your data is the standard error of the odds ratio, or is it the standard error of the coefficient estimate? Note for example in Stata that if you run logit and then run logistic on the same data, the results table for logistic replaces the coefficients with odds ratios, but the standard error is the same as in logit.

    Based on this, I think you would want to run meta with lnRR and SE. I don't think a negative effect is what caused your error message, but a negative standard error would be a problem.



    • #3
      Hi Paula,

      I suspect the problem is the following line:
      Code:
      meta set RR SE, random(dlaird) studylabel(Publication) studysize(Nsize)
      It is difficult to be sure without better knowledge of your data (in particular, precisely what data is stored under each variable name and how that data was derived; cf. William's comment above), but assuming RR contains an exponentiated relative risk, this line is incorrect. meta set (and, indeed, any of the meta-analysis commands you are using) requires ratio statistics to be supplied on the log scale: so, for example, the log RR and its standard error. (Note that "SElogRR" would not typically be derived as the logarithm of another quantity!)

      Hence, as William says, we need to explore what is meant by "if I use lnRR and lnSE, Stata does not perform the analysis because the values aren't positive". The best way to do this is to make your dataset available via the dataex command, or at the very least to give us some sense of what data each column contains.

      Another potential source of (unnecessary) confusion is that in Section 1 you are using confidence limits, but in Section 2 you are using a standard error. Ideally, to compare between commands, you should supply them both with the same data -- and as far as I can tell, there is nothing to prevent you from doing this. (Furthermore, the line gen lnor = ln(RR) appears to be confusing odds ratios with risk ratios, although since we are not working with the underlying raw data, this is probably of secondary concern only.)

      Finally: I note that in your Section 1, you use meta summarize prior to running metan. I can't see why you are doing this -- in particular, you don't seem to be using any of the output from meta summarize thereafter. This may simply be a copy-and-paste error or similar -- but if not, could you please explain this line?

      Thanks and best wishes,

      David.



      • #4
        Thank you very much for your answers. I still have trouble understanding why I get different results with metan and the built-in meta-analysis.

        I tried to make an example of my dataset available via the dataex command. I tried to name the file Paula mydata. This command is new to me and I am not sure whether I did it correctly.

        RR does not contain an exponentiated relative risk; it contains just the relative risk. When I use the exponentiated forms e^RR and e^SE, I get different results from the built-in meta-analysis and the metan package.

        I computed the standard error from the confidence intervals in Excel. I thought they (the confidence interval and the standard error computed from it) should work in a similar manner.
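The back-calculation described here amounts to SE(log RR) = (ln U - ln L) / (2 x 1.96), where L and U are the 95% confidence limits. A quick check of that arithmetic with made-up values (not the thread's data):

```python
import math

# Made-up risk ratio with its 95% confidence limits.
rr, lo, hi = 0.80, 0.64, 1.00

# Log-scale standard error recovered from the confidence limits.
se_log = (math.log(hi) - math.log(lo)) / (2 * 1.959964)

# Sanity check: rebuilding the CI from ln(RR) and se_log should round-trip,
# because a Wald CI for a ratio is symmetric on the log scale.
rebuilt_lo = math.exp(math.log(rr) - 1.959964 * se_log)
rebuilt_hi = math.exp(math.log(rr) + 1.959964 * se_log)
```

The round trip only closes exactly when RR is the geometric mean of the limits, which is what a log-scale Wald interval implies.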

        I deal with relative risks all the time. I am sorry for the line gen lnor = ln(RR); I only named the new variable in a confusing way. I am sorry for that.

        I am sorry for the first line in Section 1. That line was a mistake: I did not run meta summarize before metan. The correct Section 1 should be:

        Code:
        gen lnrr = ln(RR)
        gen lnuci = ln(Upperlimit95)
        gen lnlci = ln(Lowerlimit95)
        metan lnrr lnlci lnuci, eform effect(RR) random lcols(Publication)

        I hope you can help me more now after I have used the dataex command.

        Thank you greatly,
        Paula



        • #5
          Hi Paula,

          I'm sorry but I don't see any data in your post. Please see help dataex for a full explanation of the purpose and syntax of this command.

          Thank you for your other clarifications. To be clear: when I say "exponentiated RR", I mean "not log RR, the logarithmic transform of RR, but the actual ratio statistic RR itself". Sorry for the confusion there. There is certainly never any reason to exponentiate the ratio statistic!

          Putting everything you've said together (and in the absence of data), I'm going to assume that your metan sequence of commands is correct -- it certainly looks sensible. That being the case, the equivalent set of commands for "built in meta" ought to be:
          Code:
          meta set lnrr lnlci lnuci, random(dlaird) studylabel(Publication) studysize(Nsize) eslabel(RR)
          meta summarize, eform
          Please run these commands; and if there are still differences in results, please explain. (Ideally, cut-and-paste the output into your reply -- unless the data are strictly confidential or similar)

          Best wishes,

          David.



          • #6
            Hello David,

            I had trouble understanding how to operate dataex. Here is my dataex output (below) for some example data; I hope I got it right this time! Although it is unnecessary now, I think, because the commands you sent already worked.

            Thank you for your clarification of the term "exponentiated RR". It really clarified the issue, because I had been struggling earlier as well with a literal interpretation of the term.

            And an especially large thank you for the commands. It turned out I also needed the "civartol" option, but now the built-in meta-analysis works and I get the same results with "metan" and the built-in meta-analysis! This was an immense help to me, thank you so much!

            Thank you so much and best wishes,
            Paula

            Code:
            * Example generated by -dataex-. To install: ssc install dataex
            clear
            input str14 Publicationname double(p0 RR lnRR Lowerlimit Upperlimit) int Nsize double(SE lnSE)
            "Publication 1"   .146770107663    .99498743710662 2.7046903623293677 .8944271909999159 1.1090536506409416 1135 .054867056211547936 1.0564001634997884
            "Publication 2"   .146770107663  .8888194417315589 2.4322565351195364 .7874007874011811  1.004987562112089 3119  .06224312905563363 1.0642210563370174
            "Publication 3"  .7799511002444 1.0816653826391966   2.94958765506908 .9273618495495703 1.2649110640673518 1636  .07918705599238766  1.082406773808061
            "Publication 4"           .0019   .820140324899743 2.2708184675498138 .5302368655753305 1.2696740942687081   94  .22275298595551682 1.2495118886275352
            "Publication 5"           .0019 1.3096140362721023  3.704743539603127 .8900930704481309 1.9183218349336666   94  .19588775707168848 1.2163903664549969
            "Publication 6"           .0019  .9400536160052957  2.560118677986178 .6302216373570073    1.3994677976862   94   .2035142183609885 1.2257025849991305
            "Publication 7"           .0019  .7501782661665118 2.1173774397300282 .4802373485411261 1.1698109455862256   94  .22712167651792292 1.2549825605585607
            "Publication 8"         .189349 1.0816653826391966   2.94958765506908 .9327379053088815 1.2569805089976536  169  .07610802479239584 1.0790791351609859
            "Publication 9"   .178775510204  .7874007874011811 2.1976767670299036 .6403124237432849  .9591663046625439 1225  .10308884060519548 1.1085898924982422
            "Publication 10" .0593974175035   .594839408368282 1.8127398107398356 .3640555746242828  .9528297814168293 1074  .24544126039932823 1.2781852012973345
            end



            • #7
              Originally posted by William Lisowski View Post
              Are you certain that the standard error SE in your data is the standard error of the odds ratio, or is it the standard error of the coefficient estimate? Note for example in Stata that if you run logit and then run logistic on the same data, the results table for logistic replaces the coefficients with odds ratios, but the standard error is the same as in logit.

              Based on this, I think you would want to run meta with lnRR and SE. I don't think a negative effect is what caused your error message, but a negative standard error would be a problem.
              Hi William,

              Is there a way to replace the standard error of the coefficient estimate with the standard error of the odds ratio?
              I am using the coefplot command to report all the results in a figure instead of a table, but as you said, I noticed the standard error in the parentheses comes from the coefficient estimate.

              Wish you all the best,
              Thanks!



              • #8
                The statement in post #2, quoted in post #7, regarding logistic regression standard errors for odds ratios was incorrect. The standard errors reported with odds ratios are indeed the appropriate standard errors. It is the confidence interval that is calculated differently: not from those standard errors, but by exponentiating the confidence interval for the coefficient estimate, in the same way that the estimated odds ratio is calculated by exponentiating the estimated coefficient.

                The question in post #7 was also asked in the discussion at

                https://www.statalist.org/forums/for...ficance-levels

                and has been addressed there.
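To illustrate the distinction with made-up numbers (a sketch of the arithmetic, not Stata output): for a logit coefficient b with standard error s, the reported odds ratio is exp(b), its delta-method standard error is exp(b)*s, and the reported confidence interval comes from exponentiating the coefficient's CI endpoints, which is not the same as a symmetric interval built from that standard error:

```python
import math

# Made-up logit output: a coefficient and its standard error.
b, s = 0.5, 0.2

odds_ratio = math.exp(b)

# CI as reported with odds ratios: exponentiate the coefficient's CI endpoints.
ci_exp = (math.exp(b - 1.959964 * s), math.exp(b + 1.959964 * s))

# Delta-method standard error of the odds ratio itself.
se_or = odds_ratio * s

# A naive symmetric interval built from that SE does NOT match ci_exp;
# it can even dip below zero for large s, which ci_exp never does.
ci_sym = (odds_ratio - 1.959964 * se_or, odds_ratio + 1.959964 * se_or)
```

The exponentiated interval is asymmetric around the odds ratio, which is why the reported SE and the reported CI cannot be reconciled by the usual "estimate ± 1.96 × SE" formula.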

