
  • #31
Hi,

    This is a great question, thanks!

    Since I don't have your specific underlying dataset, I'm going to instead use the PDL1 data in your attachment, which a quick Google informs me is taken from the Supplementary Information (Figure S1) of Liu X et al. Int. J. Cancer 2020; 147: 116–127. doi:10.1002/ijc.32744

    The data can be entered into Stata as follows:
    Code:
    * Example generated by -dataex-. For more info, type help dataex
    clear
    input str26 Disease str23 StudyName str25 Intervention str11 Control int(IntN ControlN) float(HR_pos HRlci_pos HRuci_pos HR_neg HRlci_neg HRuci_neg)
    "Non-small-cell lung cancer" "CheckMate 057, 2017"     "Nivolumab"                 "Docetaxel"   231 224 .43  .3  .63 1.01 .76 1.33
    "Non-small-cell lung cancer" "CheckMate 017, 2017"     "Nivolumab"                 "Docetaxel"   117 108 .53 .31  .89   .7 .47 1.02
    "Non-small-cell lung cancer" "POPLAR, 2016"            "Atezolizumab"              "Docetaxel"   144 143 .54 .33  .89  .84 .53 1.32
    "Non-small-cell lung cancer" "OAK, 2016"               "Atezolizumab"              "Docetaxel"   607 608 .64 .49  .84  .85 .73    1
    "Melanoma"                   "CheckMate 067, 2018 (a)" "Nivolumab"                 "Ipilimumab"  288 277 .65 .42  .99  .64  .5  .82
    "Melanoma"                   "CheckMate 067, 2018 (b)" "Nivolumab plus ipilimumab" "Ipilimumab"  278 277 .56 .35   .9  .54 .42  .69
    "Melanoma"                   "CheckMate 069, 2016"     "Nivolumab plus ipilimumab" "Ipilimumab"   35  83 .78 .26  2.4  .74 .38 1.47
    "Melanoma"                   "CheckMate 066, 2018"     "Nivolumab"                 "Dacarbazine" 120 243 .16 .07  .38  .56 .37  .85
    "Melanoma"                   "CheckMate 037, 2017"     "Nivolumab"                 "CTx"         201 195 .73 .49 1.09 1.15 .82 1.62
    "Head and neck cancer"       "CheckMate 141, 2018"     "Nivolumab"                 "SOC"         172 103  .5  .3  .83  .81 .55 1.21
    end

We begin by processing the data into a form suitable for meta-analysis; note that the label define and encode commands are simply so that we maintain the same order of studies as in Figure S1 (otherwise they would be re-sorted alphabetically):
    Code:
    label define StudyName_ 1 "CheckMate 057, 2017"
    label define StudyName_ 2 "CheckMate 017, 2017", add
    label define StudyName_ 3 "POPLAR, 2016", add
    label define StudyName_ 4 "OAK, 2016", add
    label define StudyName_ 5 "CheckMate 067, 2018 (a)", add
    label define StudyName_ 6 "CheckMate 067, 2018 (b)", add
    label define StudyName_ 7 "CheckMate 069, 2016", add
    label define StudyName_ 8 "CheckMate 066, 2018", add
    label define StudyName_ 9 "CheckMate 037, 2017", add
    label define StudyName_ 10 "CheckMate 141, 2018", add
    rename StudyName StudyNameStr
    encode StudyNameStr, gen(StudyName) label(StudyName_)
    order StudyName, after(StudyNameStr)
    drop StudyNameStr
    
    rename (HR_pos HRlci_pos HRuci_pos) (HR1 HR1_lci HR1_uci)
    rename (HR_neg HRlci_neg HRuci_neg) (HR2 HR2_lci HR2_uci)
    reshape long HR@ HR@_lci HR@_uci, i(StudyName) j(PDL1)
    
    gen double lnHR = ln(HR)
    gen double lnHR_lci = ln(HR_lci)
    gen double lnHR_uci = ln(HR_uci)
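As an aside, the reason for the ln() transformations is that pooling is done on the log scale, where hazard ratios are approximately normally distributed; the confidence limits are transformed in exactly the same way as the point estimate. A quick numeric illustration (Python here purely to show the arithmetic; the Stata ln() calls above do the same thing):

```python
import math

# Log-transform a hazard ratio and its confidence limits; values are
# the CheckMate 057 PD-L1 positive row from the data entered above.
hr, lci, uci = 0.43, 0.30, 0.63
lnhr, lnlci, lnuci = math.log(hr), math.log(lci), math.log(uci)
print(round(lnhr, 3), round(lnlci, 3), round(lnuci, 3))
```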

    Now, Figure S1 effectively shows two separate meta-analyses, but presents them as interlaced. So to recreate the figure, we will do the same thing: fit both analyses separately, take the results, and re-sort them so that the PD-L1 positive and negative results alternate.
    Code:
    preserve
        tempfile neg
        metan lnHR lnHR_lci lnHR_uci if PDL1==2, random study(StudyName) by(Disease) lcols(Intervention Control IntN ControlN) nosubgroup nograph clear
        save `neg'
    restore
    metan lnHR lnHR_lci lnHR_uci if PDL1==1, random study(StudyName) by(Disease) lcols(Intervention Control IntN ControlN) nosubgroup nograph clear
    append using `neg', gen(PDL1)
    
    isid _BY _USE _STUDY PDL1, sort miss
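Conceptually, the isid ..., sort step does the interleaving: after the negative results are appended below the positive ones, sorting on subgroup, study and PDL1 makes the two rows for each study adjacent. A minimal sketch of that idea (Python, with made-up keys; not the metan internals):

```python
# Two sets of results stacked by append: positives first, then negatives.
rows = [("CheckMate 017, 2017", 1), ("CheckMate 057, 2017", 1),   # PD-L1 positive
        ("CheckMate 017, 2017", 2), ("CheckMate 057, 2017", 2)]   # appended negatives

# Sorting by (study, PDL1) interleaves them: positive row, then negative row.
rows.sort(key=lambda r: (r[0], r[1]))
print([pdl1 for _, pdl1 in rows])
```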

    Finally, we need to do a bit of tidying up before we create the final plot:
    Code:
    replace PDL1 = PDL1 + 1
    label define PDL1_ 1 "PD-L1 Positive" 2 "PD-L1 Negative"
    label values PDL1 PDL1_
    
    drop if inlist(_USE, 0, 6) & PDL1==1
    replace PDL1 = . if inlist(_USE, 0, 6)
    
    replace _LABELS = "Pooled estimate in PD-L1 positive at 5% cut-off" if _USE==5 & PDL1==1
    replace _LABELS = "Pooled estimate in PD-L1 negative at 5% cut-off" if _USE==5 & PDL1==2
    
    replace _LABELS = "" if _USE!=5 & PDL1==2
    label variable _LABELS "Study name"
    format %-1s _LABELS
    
    gen IntString = Intervention + " (n=" + strofreal(IntN, "%3.0f") + ")" if !missing(IntN) & PDL1==1
    label variable IntString `""Intervention " "(No. of patients analysed)""'
    format %-1s IntString
    
    gen ControlString = Control + " (n=" + strofreal(ControlN, "%3.0f") + ")" if !missing(ControlN) & PDL1==1
    label variable ControlString `""Control" "(No. of patients analysed)""'
    format %-1s ControlString

    The following line of code will now almost recreate Figure S1:
    Code:
    forestplot, hr nowt plotid(PDL1) lcols(IntString ControlString) xlabel(.5 "0.5" 1 2) range(.0625 2.5) cirange(.08 2.5) ///
        astext(70) textsize(90) boxsca(80) favours(Favours intervention # Favours control, fp(3)) ///
        box1opts(mcolor(edkblue)) box2opts(mcolor(orange)) ciopts(lcolor(black) lw(thin)) ///
        diam1opts(lc(edkblue)) diam2opts(lc(orange)) oline1opts(lp(dash) lc(edkblue)) oline2opts(lp(dash) lc(orange)) graphregion(color(white))
    The only thing missing is the second vertical line (the orange line corresponding to PD-L1 negative). Unfortunately, there is currently a bug in the forestplot command which prevents this from displaying. We would like to add the option dataid(PDL1), but as you would see if you tried it, it gives an error message. I have corrected the error in a beta version, and can confirm that the line does appear once the bug is fixed (see attachment). I shall release a new version of the metan package as soon as possible -- sorry about that!


    I hope that this gives you some useful pointers.

    Best wishes,

    David.



(Attachment: PD-L1.png)
    Last edited by David Fisher; 16 Nov 2021, 10:03.



    • #32
      Hi again,

Just to say that it occurred to me that the code I used above (for the early data manipulation) could actually be simplified quite a lot! So, here is some alternative code. The final dataset, and the use of the metan and forestplot commands, are identical in both cases.

      Data entry (same as above):
      Code:
      * Example generated by -dataex-. For more info, type help dataex
      clear
      input str26 Disease str23 StudyName str25 Intervention str11 Control int(IntN ControlN) float(HR_pos HRlci_pos HRuci_pos HR_neg HRlci_neg HRuci_neg)
      "Non-small-cell lung cancer" "CheckMate 057, 2017"     "Nivolumab"                 "Docetaxel"   231 224 .43  .3  .63 1.01 .76 1.33
      "Non-small-cell lung cancer" "CheckMate 017, 2017"     "Nivolumab"                 "Docetaxel"   117 108 .53 .31  .89   .7 .47 1.02
      "Non-small-cell lung cancer" "POPLAR, 2016"            "Atezolizumab"              "Docetaxel"   144 143 .54 .33  .89  .84 .53 1.32
      "Non-small-cell lung cancer" "OAK, 2016"               "Atezolizumab"              "Docetaxel"   607 608 .64 .49  .84  .85 .73    1
      "Melanoma"                   "CheckMate 067, 2018 (a)" "Nivolumab"                 "Ipilimumab"  288 277 .65 .42  .99  .64  .5  .82
      "Melanoma"                   "CheckMate 067, 2018 (b)" "Nivolumab plus ipilimumab" "Ipilimumab"  278 277 .56 .35   .9  .54 .42  .69
      "Melanoma"                   "CheckMate 069, 2016"     "Nivolumab plus ipilimumab" "Ipilimumab"   35  83 .78 .26  2.4  .74 .38 1.47
      "Melanoma"                   "CheckMate 066, 2018"     "Nivolumab"                 "Dacarbazine" 120 243 .16 .07  .38  .56 .37  .85
      "Melanoma"                   "CheckMate 037, 2017"     "Nivolumab"                 "CTx"         201 195 .73 .49 1.09 1.15 .82 1.62
      "Head and neck cancer"       "CheckMate 141, 2018"     "Nivolumab"                 "SOC"         172 103  .5  .3  .83  .81 .55 1.21
      end

      Setup prior to meta-analysis:
      Code:
      gen double lnHR_pos = ln(HR_pos)
      gen double lnHRlci_pos = ln(HRlci_pos)
      gen double lnHRuci_pos = ln(HRuci_pos)
      
      gen double lnHR_neg = ln(HR_neg)
      gen double lnHRlci_neg = ln(HRlci_neg)
      gen double lnHRuci_neg = ln(HRuci_neg)
      
      gen IntString = Intervention + " (n=" + strofreal(IntN, "%3.0f") + ")"
      label variable IntString `""Intervention " "(No. of patients analysed)""'
      format %-1s IntString
      
      gen ControlString = Control + " (n=" + strofreal(ControlN, "%3.0f") + ")"
      label variable ControlString `""Control" "(No. of patients analysed)""'
      format %-1s ControlString

      As before, perform two meta-analyses, for PD-L1 positive and negative separately; then append and re-sort:
      Code:
      preserve
          tempfile neg
          metan lnHR_neg lnHRlci_neg lnHRuci_neg, random study(StudyName) by(Disease) nosubgroup nograph clear
          save `neg'
      restore
      metan lnHR_pos lnHRlci_pos lnHRuci_pos, random study(StudyName) by(Disease) lcols(IntString ControlString) nosubgroup nograph clear
      append using `neg', gen(PDL1)
      
      isid _BY _USE _STUDY PDL1, sort miss

      A little more data manipulation prior to running forestplot:
      Code:
      replace PDL1 = PDL1 + 1
      label define PDL1_ 1 "PD-L1 Positive" 2 "PD-L1 Negative"
      label values PDL1 PDL1_
      
      drop if inlist(_USE, 0, 6) & PDL1==1
      replace PDL1 = . if inlist(_USE, 0, 6)
      
      replace _LABELS = "Pooled estimate in PD-L1 positive at 5% cut-off" if _USE==5 & PDL1==1
      replace _LABELS = "Pooled estimate in PD-L1 negative at 5% cut-off" if _USE==5 & PDL1==2
      
      replace _LABELS = "" if _USE!=5 & PDL1==2
      label variable _LABELS "Study name"
      format %-1s _LABELS

      And finally, creation of the plot (same as before):
      Code:
      forestplot, hr nowt plotid(PDL1) lcols(IntString ControlString) xlabel(.5 "0.5" 1 2) range(.0625 2.5) cirange(.08 2.5) ///
          astext(70) textsize(90) boxsca(80) favours(Favours intervention # Favours control, fp(3)) ///
          box1opts(mcolor(edkblue)) box2opts(mcolor(orange)) ciopts(lcolor(black) lw(thin)) ///
          diam1opts(lc(edkblue)) diam2opts(lc(orange)) oline1opts(lp(dash) lc(edkblue)) oline2opts(lp(dash) lc(orange)) graphregion(color(white))

      Thanks,

      David.



      • #33
        Hi David, that works great, thanks!

        Regards

        Feng Li



        • #34
          Hi David,

With the dataset below:

          Code:
          * Example generated by -dataex-. To install: ssc install dataex
          clear
          input str15 study int(n1 c1 nc1 n2 c2 nc2) float rank byte(qi qiplus1)
          "Sandberg2000"      40  19   21   41  13   28        1 3 4
          "Clapp1989"         56   0   56   59   5   54 .6666667 2 3
          "Bussel1990"        61  20   41   65  23   42 .6666667 2 3
          "Fanaroff1994"    1204 186 1018 1212 209 1003 .6666667 2 3
          "Haque1986"        100   4   96   50   5   45 .3333333 1 2
          "Weisman1994a"     372  40  332  381  39  342 .3333333 1 2
          "Chirico1987"       43   2   41   43   8   35 .3333333 1 2
          "Conway1990"        34   8   26   32  14   18 .3333333 1 2
          "Ratrisawadi1991"   68  10   58   34  13   21        0 0 1
          "Tanzer1997"        40   3   37   40   8   32        0 0 1
          end
The following code, using ranks (score/maxscore), gives the correct result:

          Code:
          metan c1 nc1 c2 nc2 , or model(qe,qwt(rank))  study( study )  forestplot(astext(85) textsize(100) boxscale(35) spacing(1) leftjustify range(0.3 2) dp(2))  extraline(yes) hetinfo(q pvalue tausq)

However, this code using the raw scores gives an error:

          Code:
          metan c1 nc1 c2 nc2 , or model(qe,qwt(qi) \ re )  study( study )  forestplot(astext(85) textsize(100) boxscale(35) spacing(1) leftjustify range(0.3 2) dp(2))  extraline(yes) hetinfo(q pvalue tausq)
I assumed that metan automatically converted all qwt values to ranks by dividing by the maximum in the list; even if this is done with a rank input by the user rather than a raw score, there would be no issues, as ranks always start from 1. If this is not the case, could this behaviour please be updated in the next metan release, as it may otherwise cause errors for some users? Or am I missing something?

Regards
Suhail Doi



          • #35
            Hi Suhail Doi

            Having taken a look at your example, I think this is to do with rounding error. With reference to Appendix A of your 2015 article (dx.doi.org/10.1016/j.cct.2015.05.010), we see that tau_hat_j is defined as a function (let's call it "f") of the inverse-variance weights and quality weights, minus the original tau_j. When applied to the last two observations in your example dataset, that function "f" works out to be equal to tau_j, so that the difference tau_hat_j is equal to zero. But in the presence of rounding error, Stata is calculating this to be a very very small negative quantity -- hence the error message "negative weights encountered".

            It can easily be shown (and as you may already be aware) that this happens exactly when the "quality weight" is zero -- as it is for the last two observations of your example dataset. This simplifies the expression for the adjusted/corrected quality weight, leading to other terms cancelling out in the expression for tau_hat_j.

            So, I should be able to fix this, either by detecting when the quality weight is zero, and/or by being cleverer with the way I arrange the mathematical expressions so that Stata evaluates tau_hat_j as exactly zero rather than a (very) small negative number. In the meantime: is it possible to run the same analysis but with a different set of quality weights which do not take on zero values?
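The underlying phenomenon is easy to demonstrate. A generic illustration (the numbers here are my own, not metan's actual calculation) of how a difference that is algebraically zero can evaluate to a tiny negative value in double-precision arithmetic:

```python
# 0.1 + 0.2 is stored as 0.30000000000000004, not exactly 0.3, so a
# difference that is algebraically zero comes out very slightly negative.
a = 0.1 + 0.2
tau_hat = 0.3 - a    # algebraically zero
print(tau_hat)       # a very small negative number (about -5.6e-17)
```

This is precisely the kind of quantity that then trips a "negative weights encountered" check.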

            Thanks,

            David.


            P.S. I am a little surprised that your code ran using the ranks (score/maxscore). On my machine, both your commands failed, and for the same reason. I suppose it's possible that Stata successfully evaluated tau_hat_j as zero on your machine using ranks but not using raw scores. If anything, I might have expected the opposite to be true, because the raw scores are whole numbers and hence less at risk of rounding error. Could you check this again, and confirm your Stata and metan version numbers, and the error messages and codes that you are seeing? Thanks!
            Last edited by David Fisher; 09 Dec 2021, 03:59.



            • #36
              Hi David,

Thanks for the update. I just ran the analysis using the "qiplus1" column in the dataset above, and there is no error. I checked both ways, using the rank (qiplus1 divided by 4) as well as the raw qiplus1 count of safeguards, and both give the same result with no errors, so your observation seems correct. I also checked other datasets, and this only happens when there is a zero qwt for one or more studies.

Just to be clear: are all user-entered qwt values divided by the maximum value prior to processing in metan, or do you check the maximum value and only rescale if it is not 1? Either way would work, but perhaps just rescaling in all cases would be better.


Regards
Suhail Doi



              • #37
                Hi Suhail,

                Great, thanks -- it sounds like we've solved this one and I just need to roll out a fix.

                In answer to your question, user-specified quality weights are always divided by the maximum value, to rescale to 0-1, prior to processing. It's actually only just occurred to me, but in Appendix A of your 2015 article it states that a correction for negative weights is necessary if there exist any Q_j < 1. But in my metan implementation, the Q_j are always re-scaled, as I say ... and therefore the "correction" to Q_dash_j is always done. Is that how you anticipated it?
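The rescaling itself is trivial; a sketch (not metan's actual code) using the raw qi scores from the example dataset earlier in the thread:

```python
# User-specified quality weights are divided by their maximum, so the
# Q_j always lie on the 0-1 scale before any correction is applied.
qi = [3, 2, 2, 2, 1, 1, 1, 1, 0, 0]   # raw qi column from the dataex above
Q = [q / max(qi) for q in qi]
print(Q[:3])
```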

                Thanks,
                David.



                • #38
                  Hi David,

Yes, I have just checked all the calculations: a negative weight is not possible once Q_dash_j is applied, so this must be a rounding issue. What you have implemented is also correct: Q_dash_j should always be applied after re-scaling (dividing by the maximum) the qwt entered by the user, regardless of whether all the qwt are 1 or not. If they are all 1, then Q_dash_j also defaults to 1, so there will be no issue at all.

PS: An interesting observation -- and I know why it failed on your computer. Try creating the rank yourself by running:
                  Code:
                  gen ranknew = qi/3
                  Now run:
                  Code:
                  metan c1 nc1 c2 nc2 , or model(qe,qwt(ranknew))  study( study )  forestplot(astext(85) textsize(100) boxscale(35) spacing(1) leftjustify range(0.3 2) dp(2))  extraline(yes) hetinfo(q pvalue tausq)
and voilà -- no error message. This confirms the rounding issue.
                  Regards
                  Suhail Doi



                  • #39
                    Hi David,

After giving this some more thought, I am not sure this can be due to rounding, because the sum of weights remains constant before and after application of the qwt; perhaps the next thing to check is for a bug at the step where the user input is converted to a rank?

Regards
Suhail Doi



                    • #40
                      Hi Suhail,

                      Could you clarify what you mean by "sum of weights remains constant before and after application of qwt" ? Do you mean, in the metan output? If so, could you give a working example? Or do you mean theoretically -- in which case could you point to the relevant section of your article?


                      On a related note, I did a bit of algebra with the aim of correcting the rounding-error issue, and it works out (I think) that:

                      tau_hat_j = Q_j * { sum[ (1-Q_j)/v_j ] / sum[Q_j] }

                      so that:

                      w_doubledash_j = (Q_j/v_j) + tau_hat_j = Q_j { w_j + sum[ (1-Q_j)/v_j ] / sum[Q_j] }

                      If correct, this will greatly simplify calculations and hopefully solve the rounding-error issue.


                      Thanks,

                      David.



                      • #41
                        Hi David,

                        What I meant by sum of weights remains constant is that:

                        sum[1/v_j] = sum[w_hat_doubledash_j]
                        (Note that the hatted weights are not normalized weights)

                        Regarding the second point, thanks for the simplification. The first one is correct:

                        tau_hat_j = Q_j * { sum[ (1-Q_j)/v_j ] / sum[Q_j] }

                        The second one that follows should be updated to:

                        w_hat_doubledash_j = (Q_j/v_j) + tau_hat_j = Q_j { (1/v_j) + sum[ (1-Q_j)/v_j ] / sum[Q_j] }

                        (In our paper w_j actually referred to the normalized RE weight and not 1/v_j)


Regards
Suhail Doi

Last edited by Suhail Doi; 10 Dec 2021, 15:25.



                        • #42
                          Hi Suhail,

I agree with your correction to my expression for w_hat_doubledash_j.

                          We then see that: sum(w_hat_doubledash_j) = sum[ Q_j/v_j ] + sum[ tau_hat_j ] = sum[ Q_j/v_j ] + sum[ (1-Q_j)/v_j ] = sum[1/v_j] as you say ("sum of weights remains constant").

                          However, I still see no reason to doubt my original diagnosis. My code as currently written does not formally check that the sum of weights has remained constant. We both know that it is algebraically true; but due to computer precision/rounding (see e.g. Bill Gould's presentation, in particular Slide 37: https://www.stata.com/meeting/boston...on14_gould.pdf) it may not be true according to Stata's computations (at least, not in its currently-coded form, which involves subtraction of two near-identical quantities).
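For what it's worth, the identity itself checks out numerically when computed directly from the simplified expressions. A quick check with illustrative Q_j and v_j values of my own choosing (not from any real dataset):

```python
# Using  tau_hat_j          = Q_j * sum[(1-Q_j)/v_j] / sum[Q_j]
# and    w_hat_doubledash_j = Q_j/v_j + tau_hat_j,
# the sum of the adjusted weights equals sum[1/v_j].
Q = [1.0, 0.75, 0.5, 0.25]       # rescaled quality weights (max = 1)
v = [0.04, 0.09, 0.06, 0.12]     # illustrative within-study variances

S = sum((1 - q) / vj for q, vj in zip(Q, v))          # sum[(1-Q_j)/v_j]
w = [q / vj + q * S / sum(Q) for q, vj in zip(Q, v)]  # w_hat_doubledash_j

print(sum(w), sum(1 / vj for vj in v))
```

The danger is only in *how* the intermediate quantities are arranged: a formulation that subtracts two near-identical quantities can land a hair below zero.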

                          The current code can be seen by opening "metan.ado" in a text editor such as Notepad++ and jumping to line 9265 ... but I admit that the more I look at it, the clunkier it appears, and I am a little embarrassed by it!!

                          Best wishes,

                          David.


                          P.S. Sorry, I should say: of course if you still have concerns please formulate them as a worked example and I will look into it!



                          • #43
                            Hi David,

Thanks. It looks like all the computations are working as intended, so this must be a rounding problem. Best, then, to add a line that replaces w_hat_doubledash_j with 0 if it is < 0, at the point where it is computed.

This should solve the problem; given that it will be a very, very small negative number, it should create no issues.
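In code terms, the suggested guard might look something like the following (a sketch only; the function name and tolerance are my own, not metan's actual implementation):

```python
def clamp_weight(w, tol=1e-12):
    """Treat a tiny negative weight (rounding error) as exactly zero;
    leave genuinely negative values untouched so they can still be flagged."""
    if -tol < w < 0:
        return 0.0
    return w

print(clamp_weight(-1e-16))   # → 0.0
```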

Regards
Suhail Doi



                            • #44
                              Hi David,

By adjusting qi downwards, I can confirm that the negative value encountered is between -0.0000000000000001 (fifteen zeroes, i.e. -1e-16) and -0.00000000000000001 (sixteen zeroes, i.e. -1e-17).

This therefore confirms your diagnosis, and all that needs to be done is what I suggested in the previous post.

                              Regards
                              Suhail
                              Regards
                              Suhail Doi



                              • #45
                                Hi David,

I notice that with the new update to metan, zero quality entries for the QE model now always result in the error message "negative weights encountered". This is a Stata precision issue, as discussed above, but perhaps it is good in a way, given that we do not really want people to enter zero and lose the study information.

However, this will confuse users, so I suggest the error message be changed to "zero entries for quality information not allowed", as that is the only way Stata can generate a negative weight. We actually do not recommend zero entries in this paper [1], and suggest adding 1 to all counts of safeguards if zero is encountered. Ultimately, whoever uses the QE model needs to have read up, so I am not too concerned about giving more information beyond this new message.

Also, in the help file under the model specifications, the information for qwt reads "variable containing Quality Effect weights"; I recommend this be updated to "variable containing study quality information", as the weights are generated by the model.

Regards
Suhail Doi

Reference
1. Stone JC, Gurunathan U, Aromataris E, Glass K, Tugwell P, Munn Z, Doi SAR. Bias Assessment in Outcomes Research: The Role of Relative Versus Absolute Approaches. Value Health. 2021 Aug;24(8):1145-1149.

