
  • struggling with metan commands

    Hello.
    I am trying to run a random-effects meta-analysis after doing a systematic review. ORs and 95% CIs are taken directly from the included studies. I have tried the metan command and have a few questions I am hoping for help with.




    Question 1: I have 15 studies which report an odds ratio for Drug A users vs non-users. I have tried metan or uci lci, random. I have also tried metan lnor lnuci lnlci, eform effect(OR).
    This gives me a meta-analysis which seems accurate; however, the sizes of the study weights don't really make sense, as one study has 500,000 people and another study has 50 people, but the sizes of the squares look identical. The data I have in my Data Editor are essentially just author name, OR, upper CI, and lower CI for each study. I don't know where to put the individual N for each study; I believe that is important and isn't being considered by the command I have written.

    It generates a forest plot that doesn't include the names of the study authors. Can somebody please also provide me with a sample of what a metan command line should look like, where I can nicely include the author names and label the x-axis easily? (I can't really understand the help metan page.)


    Question 2: I am also trying to assess the overall mean difference in BP between Drug A users and non-users, from data in 15 studies. Each study reports the mean difference between users and non-users. I have the number of people in both groups (users and non-users), the BP in both groups, and the overall mean difference in BP. What data do I need to include in my Data Editor in order to run a meta-analysis?

    Question 3: I am trying to assess the mean change in BP in people before starting Drug A and at study end. Again, I have data from 15 studies. I have N, BP at baseline, BP at study end, and mean change in BP. What data do I need to put in the Data Editor to run the meta-analysis?

    I've been struggling with this for a few weeks; I think it's a pretty straightforward issue, but I'm just having trouble formatting the command line (and I don't know exactly what data needs to already be in the system before I run the analysis). Any help would be greatly appreciated!

    Thanks!!

  • #2
    Hi Alan,

    I realise that some time has passed since you posted this, and I'm sorry that you've not had any response. If my response here is too late for you, hopefully it may be of use to other readers.

    Firstly, could I draw your attention to the fact that metan was updated earlier this week to v4.0. You can update by typing ado update metan, update at the Stata command line. In what follows, I will be using the updated version.

    Question 1: Your use of "lnor lnlci lnuci" is correct, rather than "or lci uci". By default, metan uses inverse-variance weighting, both to calculate the pooled effect, and to generate the squares in the forest plot. The relative weights are calculated from the confidence intervals; it is assumed that the confidence intervals reflect the relative sizes of the studies (that is, larger studies have tighter intervals and vice-versa). However, the random-effects model adds a constant of heterogeneity (commonly known as "tau-squared") to the observed variance of each study. This can have the effect of making the relative weights across studies more similar than they would naturally be. So, possibly, this is the effect which you are seeing. You can pass the study sizes (individual N) to the npts(varname) option, and this will display the N's in a column of the forest plot ... although note that this will not change the weighting or the boxes!
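    To illustrate, a command along these lines should produce a forest plot with author names, per-study Ns, and a labelled x-axis. (The variable names author, lnor, lnlci, lnuci and studyN are placeholders; substitute your own. This is a sketch against metan v4, so do check the options against help metan on your installation.)

    ```stata
    * Random-effects meta-analysis of log odds ratios, displayed as ORs.
    * author, lnor, lnlci, lnuci, studyN are hypothetical variable names.
    metan lnor lnlci lnuci, random eform effect(OR) ///
        study(author) npts(studyN) ///
        forestplot(xtitle("Odds ratio, Drug A users vs non-users"))
    ```

    The study() option supplies the study labels for the left-hand column, npts() adds the N column, and options inside forestplot() are passed through to the plot itself.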

    Question 2: If you have the mean difference and its standard error for each study, then you can simply use metan meandiff semeandiff, npts(studyN) (where you'd need to replace my variable names with your own!) Alternatively, if you know the mean, standard deviation and N, separately for drug users and for non drug users, for each study, then you can let metan calculate the mean difference itself, using metan N_users mean_users sd_users N_non_users mean_non_users sd_non_users (where again you'd need to replace my variable names with your own). There are options for exactly how to make the mean-difference calculation; see help metan_continuous.
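    Putting those two alternatives side by side (again with made-up variable names that you would replace with your own, and adding random for a random-effects model as in your Question 1):

    ```stata
    * Option A: each study supplies its mean difference and standard error.
    metan meandiff semeandiff, random npts(studyN)

    * Option B: per-arm summary data; metan computes the mean difference.
    * Order is N, mean, SD for the first group, then the same for the second.
    metan N_users mean_users sd_users N_nonusers mean_nonusers sd_nonusers, random
    ```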

    Question 3: I think this is basically a repeat of Question 2, but with different measurements. The principles should be the same.


    I hope this is of some use!

    Best wishes,

    David.



    • #3
      David, thanks, as always you are so helpful. We are experiencing the same issue as in Question 1. We have 3 studies for one of our analyses. Study A has 360 patients and is weighted 34%, Study B has 518 patients and is weighted 29%, and Study C has 3900 patients and is only weighted 37%. The I2 is 93%, so one could argue quite reasonably not to compute or report a summary effect measure. But on its face, it just makes no sense to me to weight these very disparate studies so similarly. Is this just a consequence of the small number of studies and high heterogeneity?



      • #4
        Hi Mark,

        Sorry I have been really busy and so haven't been keeping proper tabs on these forums! Hopefully my answer is still of some use.

        Basically, yes, it is a consequence of a small number of studies and high heterogeneity. More specifically, it is a direct consequence of the (standard) random-effects model, in which a fixed value (estimated from the data), representing the heterogeneity variance, is added to the observed random error variance of each study. If the heterogeneity variance is large relative to the error variances, then the study weighting becomes driven by the heterogeneity variance, and hence the weights become more similar to each other. Of course, this also means that the pooled variance will be large (wide confidence interval), representing the uncertainty implied by the large heterogeneity. So that's OK. An additional consequence is that the (pooled) point estimate may be more consistent with the smaller studies than with the larger. This may not matter if the confidence interval is wide; but this, and the weighting issue, is indeed a criticism that is sometimes made of the standard random-effects model.
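        To make this concrete, here is a toy calculation with three made-up within-study variances and an assumed heterogeneity variance (none of these numbers come from your data). A study weight is proportional to 1/(v + tau2), so adding a common tau2 to every study's variance flattens the relative weights, most visibly for the largest (lowest-variance) study:

        ```stata
        * Toy illustration: inverse-variance weights with and without tau-squared.
        clear
        input double v
        0.50
        0.25
        0.01
        end
        scalar tau2 = 0.4              // assumed heterogeneity variance
        gen wfix = 1/v                 // fixed-effect (common-effect) weights
        gen wran = 1/(v + tau2)        // random-effects weights
        egen tfix = total(wfix)
        egen tran = total(wran)
        gen pctfix = 100*wfix/tfix     // weights as % of total
        gen pctran = 100*wran/tran
        list v pctfix pctran, noobs
        ```

        Under the fixed-effect weighting the low-variance study dominates; under the random-effects weighting the three percentages are far closer together, which is the pattern you are seeing.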

        Ultimately, with a limited number of studies and a large heterogeneity variance, a meta-analysis will inevitably be somewhat compromised; the onus is on the analyst to interpret the results accordingly.

        Best wishes,
        David.
