
  • Interpreting standardized mortality rates

    I am trying to interpret the impact of controlling for patient characteristics (Level 1) on hospitals' (Level 2) mortality rates (% of patients who died). It's a two-level logistic random intercept model.
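
    For concreteness, in (assumed) notation the model is logit(p_ij) = b0 + x_ij'b + u_j, with u_j ~ N(0, tau^2), where i indexes patients, j indexes hospitals, and the null model drops x_ij'b.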

    I'm interested in hospitals' mortality rates before and after controlling for patient characteristics, so my focus is on the Level 2 units.

    If I compare the hospitals' Empirical Bayes "shrunken" intercepts from a null (intercept only) model to a model that controls for level 1 covariates, some hospitals' intercepts went up (suggesting higher mortality risk), and some went down (lower mortality risk). Makes sense. I can also use these EB intercepts and CIs to see which hospitals are significantly different from the "average" hospital, whose intercept is 0.
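
    In case it helps others follow, here is a minimal numeric sketch of the shrinkage involved (a normal-normal approximation with made-up numbers, not the exact posterior from a logistic mixed model):

    ```python
    import numpy as np

    # Illustrative normal-normal EB shrinkage (made-up numbers). The exact EB
    # estimates from a logistic mixed model come from the posterior, but the
    # pull toward zero works the same way.
    tau2 = 0.15                          # estimated Level-2 (hospital) variance
    raw = np.array([0.60, -0.45, 0.10])  # raw log-odds deviations from the mean
    se2 = np.array([0.40, 0.05, 0.01])   # sampling variances (large = small hospital)

    shrink = tau2 / (tau2 + se2)         # reliability weight in [0, 1]
    eb = shrink * raw                    # EB intercepts, pulled toward 0
    print(eb)                            # the noisiest hospital shrinks the most
    ```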

    But I can also use predicted values from the model to calculate the more common Standardized Mortality Rate (SMR) for each hospital. This is akin to (observed / expected) * the national observed mortality rate, although instead of observed counts I'm using predictions: (predicted deaths based on the hospital-specific intercept / predicted deaths based on the common intercept) * the national observed rate. This puts each hospital's "risk-adjusted" performance on the same scale as the observed mortality rate (i.e., a %), which is a useful heuristic. I can also calculate CIs around the SMRs to see which hospitals are significantly different from the national observed rate.
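
    To make the calculation concrete, here is a minimal sketch of how I compute the SMRs (array names and simulated inputs are placeholders for the model's actual predictions):

    ```python
    import numpy as np

    # Placeholder inputs, one row per patient (simulated; in practice these
    # come from the fitted model):
    #   hosp   - hospital ID
    #   p_hosp - predicted P(death) using the hospital-specific (EB) intercept
    #   p_comm - predicted P(death) using the common intercept (random effect = 0)
    rng = np.random.default_rng(0)
    n, n_hosp = 100_000, 20
    hosp = rng.integers(0, n_hosp, n)
    p_comm = rng.uniform(0.01, 0.05, n)
    p_hosp = np.clip(p_comm * rng.uniform(0.5, 2.0, n_hosp)[hosp], 0.0, 1.0)
    died = rng.binomial(1, p_hosp)

    national_rate = died.mean()  # national observed mortality rate

    # SMR = (predicted deaths with hospital intercept /
    #        predicted deaths with common intercept) * national observed rate
    smr = np.array([
        p_hosp[hosp == h].sum() / p_comm[hosp == h].sum() * national_rate
        for h in range(n_hosp)
    ])
    ```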

    However, on datasets with large Ns (500,000) and rare events, like mortality, the SMRs behave oddly: almost all of them are higher than the hospitals' observed rates. This would lead one to conclude that almost every hospital had higher mortality after adjusting for patient characteristics, even hospitals whose EB intercepts went down (i.e., which would suggest lower mortality). So although it's appropriate to compare each hospital's SMR to the national mortality rate, comparing the SMRs to the hospitals' own observed rates would suggest that every hospital got worse after risk adjustment, which isn't the case. This may mean not communicating the SMRs, which would be an unfortunate loss given their straightforward interpretation.

    This happens only on datasets with large Ns and rare events. The SMRs behave well for smaller Ns and more common events.

    SMRs are a form of indirect standardization, and I've read that this process sometimes produces paradoxical results, though people don't seem to agree on why. I understand this may be due to Jensen's inequality. Put more simply: the risk of a rare outcome (like mortality) for the average patient is not the same as the average risk of that outcome across patients. Perhaps I'm encountering this phenomenon. See, for example: Julious SA, George S. Are hospital league tables calculated correctly? Public Health 2007; 121: 902–4.
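
    A quick way to see the Jensen's inequality point (with made-up linear predictors): for a rare outcome, the linear predictors sit on the convex part of the inverse-logit curve, so the average of the individual risks exceeds the risk at the average patient.

    ```python
    import numpy as np

    def expit(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(1)
    eta = rng.normal(-4.0, 1.0, 1_000_000)  # linear predictors for a rare outcome

    print(expit(eta.mean()))  # risk for the "average" patient
    print(expit(eta).mean())  # average risk: larger, since expit is convex for eta < 0
    ```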

    Despite the issue, a ranking based on the hospitals' EB intercepts is identical to a ranking based on their SMRs, which confirms that although the SMRs all move up from the observed rates, they do so in a way that preserves the ordering. Also, the hospitals significantly above and below the average are the same whether I use the EB intercepts with CIs or the SMRs with CIs.

    Any insights into this fun little puzzle?

    Kurt

  • #2
    Thanks to some helpful colleagues, I have an explanation for the phenomenon I described.

    The problem, which isn't really a problem, is *not* related to "datasets with large Ns and rare events." Instead, SMRs will rise above the observed rates for most or all hospitals whenever the largest hospitals have higher rates than the others (red flags are a skewed distribution of rates and a high correlation between Level 2 sample sizes and observed rates). As my colleague pointed out, since the national rate is the weighted mean of the hospitals' observed rates, it will be higher than most hospitals' observed rates if the larger hospitals have the higher rates. It's simply an artifact of using the national rate as the multiplier.

    This doesn't mean every hospital got worse after risk adjustment, because SMRs should not be compared to observed rates anyway, only to other SMRs and to the national rate. If some symmetry around a mean is desired (with some hospitals' SMRs going up from their observed rates and some going down), one could instead multiply the observed/expected ratio by the unweighted mean of the hospitals' observed rates, or possibly by the observed rate of the hospital whose random effect is closest to zero. A small numeric illustration is below.
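
    Here's a toy illustration of the weighted-mean artifact (all numbers made up): when the largest hospital has the highest rate, the national rate sits above most hospitals' observed rates, so multiplying observed/expected ratios near 1 by it pushes most SMRs above the observed rates.

    ```python
    import numpy as np

    # Made-up sizes and observed rates; the largest hospital has the highest rate.
    sizes = np.array([50_000, 5_000, 5_000, 5_000])
    rates = np.array([0.040, 0.020, 0.022, 0.025])

    national_rate = np.average(rates, weights=sizes)  # weighted mean, ~0.036
    unweighted = rates.mean()                         # ~0.027

    print(national_rate)  # higher than three of the four observed rates
    print(unweighted)     # a symmetric multiplier: some SMRs up, some down
    ```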
