Health policy wonks and health care providers are in the midst of a raging debate over the Hospital Readmissions Reduction Program (HRRP), a component of the Affordable Care Act intended to reduce hospital readmissions. The research community is struggling with two key questions: whether the penalty has actually reduced readmissions, and whether it has affected the chances of a patient’s death.
First, some background. Historically, almost 20% of Medicare patients who left the hospital were readmitted within 30 days. That has long caught policymakers’ attention, in part because the relevant financial incentives were clearly problematic. Hospitals were paid every time a patient came back to the hospital, thereby actually rewarding hospitals that had lots of patients readmitted and implicitly punishing those that avoided unnecessary hospital stays. So early on in the Obama administration (where I was Director of the Office of Management and Budget), we included in the list of possible reforms a penalty to neutralize or even reverse these financial incentives. This provision survived in the final health reform legislation.
The Hospital Readmissions Reduction Program imposes financial penalties on hospitals with above-average readmission rates for three types of patients: those initially admitted for heart failure, a heart attack, or pneumonia. The rules were finalized in 2011 and 2012, and then implemented in October 2012.
Researchers generally agree that the penalties were potent enough to get hospitals to notice, and they have coincided with a reduction in readmissions. Readmission rates have fallen to about 16% — still too high, but lower than in the years before the new policy. Most previous research suggests the readmissions penalties were responsible for a substantial part of the readmission rate decline. A new paper in Health Affairs, however, suggests that connection is partly if not entirely a mirage. Instead, this research points to a change in the number of allowed diagnoses per claim in 2011 as triggering an increase in coded risk per patient that artificially reduced the risk-adjusted readmission rate.
Whatever the effects on readmission rates, the more heated part of the debate involves whether the incentives are so strong that hospitals avoid readmitting patients who genuinely need to return, to the point that some of them die instead. After all, no one wants to spend more time in the hospital than necessary, but that’s clearly far preferable to dying.
A second set of researchers, writing in the New York Times and JAMA on the topic, darkly warns readers about precisely such a possibility, noting that HRRP “was associated with an increase in deaths within 30 days of discharge” and that the program may be responsible for 10,000 deaths.
Several other studies, however, including a Congressionally mandated report by the Medicare Payment Advisory Commission (MedPAC) and one by Atul Gupta at the University of Pennsylvania, among others, have found either no effect or a reduction in death rates from HRRP. For those who think debates over statistical methods don’t matter, it doesn’t get more important than this: It’s literally a matter of life and death.
So which is it? Does the Hospital Readmissions Reduction Program kill people, have no effect, or save lives? The debate has been pursued vigorously on Twitter.
Considering the Evidence on Readmissions
Three points tilt the weight of the evidence toward either no effect or a beneficial one on mortality.
First, as MedPAC argues, in studying this question death rates should be measured from the point a patient is admitted to the hospital rather than from the point of discharge. Consider a very sick patient deciding whether to spend her final days at home or in the hospital. If she chooses to go home or to a hospice, the death rate following discharge increases, but there is no effect on the mortality rate measured from initial admission. When this measure is used, the authors of the New York Times op-ed find little or no effect. (An alternative approach is to examine discharges excluding those to hospice, but mortality following admission is a more comprehensive way to address the potential biases.)
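To see the selection problem concretely, here is a toy calculation with entirely hypothetical numbers. The same 100 deaths yield identical mortality measured from admission, but very different mortality measured from discharge, depending only on where dying patients choose to spend their final days:

```python
# Toy example, hypothetical numbers: 1,000 admissions, 100 of whom
# die within the measurement window regardless of the policy.
admitted = 1000
deaths_in_window = 100

# Scenario A: all 100 die in the hospital.
discharged_alive_a = admitted - 100          # 900 leave alive
post_discharge_deaths_a = 0

# Scenario B: 40 of the same 100 choose hospice or home for their
# final days, so they are discharged alive and die afterward.
discharged_alive_b = admitted - 60           # 940 leave alive
post_discharge_deaths_b = 40

# Measured from admission, mortality is identical in both scenarios.
mortality_from_admission_a = deaths_in_window / admitted  # 10% either way
mortality_from_admission_b = deaths_in_window / admitted

# Measured from discharge, mortality appears to jump in Scenario B.
post_discharge_rate_a = post_discharge_deaths_a / discharged_alive_a
post_discharge_rate_b = post_discharge_deaths_b / discharged_alive_b
```

Nothing about patient outcomes differs between the two scenarios; only the location of death changes, which is exactly why the discharge-based measure can mislead.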
When this research team assessed mortality for 45 days after being admitted to the hospital, their results showed no significant change in mortality for any of the conditions covered after the policy was implemented in 2012 — and an increase only for heart failure during the period following enactment of the law in 2010 but before the policy took effect (more on this below). Another recent study that used a similar measure also found “no evidence for an increase in in-hospital or post-discharge mortality associated” with the Hospital Readmissions Reduction Program. Much of the debate is therefore resolved in favor of HRRP having no deleterious impact, if we measure mortality from the point of being admitted to the hospital rather than upon leaving it.
Second, even if we examine only post-discharge deaths, adjusting for the underlying trend in mortality rates is tricky. The op-ed authors did so by dividing the data into four periods: two before the law was enacted, one from enactment until the policy went into effect, and the final one thereafter. They then took the difference between the two periods before the law, used that as the “trend,” and assessed whether the changes became bigger or smaller, compared to that trend, in the two later periods. But by collapsing all the data into these four buckets, they also make the analysis very sensitive to where the boundaries are drawn.
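To illustrate how the four-bucket approach works, and why the bucket boundaries matter, here is a sketch with made-up quarterly mortality figures that drift smoothly upward with no true policy effect:

```python
# Hypothetical quarterly mortality rates (%), invented for illustration:
# a smooth upward drift and no actual policy effect anywhere.
quarterly = [
    5.0, 5.1, 5.1, 5.2, 5.2, 5.3, 5.4, 5.4,   # pre-law bucket 1
    5.5, 5.5, 5.6, 5.7, 5.7, 5.8, 5.8, 5.9,   # pre-law bucket 2
    6.0, 6.1, 6.1, 6.2, 6.3, 6.3, 6.4, 6.5,   # "anticipation" bucket
    6.5, 6.6, 6.6, 6.7, 6.7, 6.8, 6.8, 6.9,   # implementation bucket
]

def bucket_means(series, cuts):
    """Collapse a series into bucket averages at the given cut indices."""
    bounds = [0, *cuts, len(series)]
    return [sum(series[a:b]) / (b - a) for a, b in zip(bounds, bounds[1:])]

# The four-bucket method: the pre-law difference serves as "the trend",
# and changes in the two later periods are judged against it.
p1, p2, p3, p4 = bucket_means(quarterly, [8, 16, 24])
trend = p2 - p1
excess_anticipation = (p3 - p2) - trend
excess_implementation = (p4 - p3) - trend

# Move one boundary by two quarters and the implementation-period
# "excess" changes sign, even though the data are identical.
q1, q2, q3, q4 = bucket_means(quarterly, [8, 16, 22])
excess_alt = (q4 - q3) - (q2 - q1)
```

Even with a perfectly smooth series, shifting a single boundary flips the sign of the estimated excess change, which is why sensitivity tests on those dividing lines matter so much.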
A particular challenge involves the period from April 2010 to September 2012. The authors argue that it was during this period, commencing right after the law’s enactment, that hospitals may have anticipated the program’s implementation and started responding. But the dividing lines raise all sorts of questions. For example, the period from 2010 to 2012 was also one in which other important regulations changed, particularly involving audits about which patients should or should not be admitted, creating significant shifts in the populations being admitted to the hospital.
With sicker patients coming into the hospital, it’s also more likely that they’re closer to the end of their lives — and so mortality among those leaving the hospital would rise for reasons having nothing to do with the readmission policy. The op-ed team claims that they are able to adjust for such changes in the health status of the people entering hospitals, but those risk adjustments are crude and imperfect. (And indeed, the imperfection of the risk adjustments may be partially reflected in the fact that even after the adjustments are made, the mortality rate after leaving the hospital was increasing in their study before the legislation was enacted. If the risk adjustment were perfect and the mortality changes after discharge were driven only by the mix of patients being admitted, that shouldn’t happen except by chance, in which case the apparent pre-existing trend would itself be an artifact.)
The starting point for the 2010–2012 period is also possibly both too late and too early. It is too late in the sense that the readmission provision was included in earlier, public versions of the legislation throughout 2009, so enactment of the legislation should have changed expectations in April 2010 only by whatever discount hospital executives applied to the final vote in Congress. It is too early in the sense that preliminary regulations to implement the policy were not even published until August 2011, so the period from April 2010 to August 2011 was one of substantial ambiguity about how the Hospital Readmissions Reduction Program would work.
The authors don’t test whether moving these boundaries changes the conclusions, but it wouldn’t be surprising if it did, given the very limited number of periods into which the authors collapsed the data. Most of the other studies use a more sophisticated approach that doesn’t rely on only four buckets of time.
Third, and perhaps most importantly, the penalty does not apply to all hospitals, only those with high readmission rates. Hospitals with readmission rates much lower than average therefore have little fear that the penalty will apply to them. The op-ed team did not assess differences between hospitals subject to the penalty and those not, which is an important source of potential information about the program’s effects — and that’s precisely the approach Professor Gupta uses. His trenchant analysis effectively shows that hospitals at greater risk of having the penalty apply were more likely to see beneficial mortality trends compared to hospitals at lower risk of facing the penalty. Those differences lead him to conclude that the policy reduced death rates.
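The logic of that comparison is essentially a difference-in-differences design. A stylized sketch, with invented numbers (these are not Gupta’s estimates):

```python
# Hypothetical difference-in-differences comparison of hospitals at
# high vs. low risk of facing the readmission penalty.
# All numbers are made up for illustration only.
pre = {"high_risk": 12.0, "low_risk": 10.0}   # mortality %, pre-HRRP
post = {"high_risk": 11.2, "low_risk": 10.1}  # mortality %, post-HRRP

change_high = post["high_risk"] - pre["high_risk"]   # ≈ -0.8
change_low = post["low_risk"] - pre["low_risk"]      # ≈ +0.1
did_estimate = change_high - change_low              # ≈ -0.9

# A negative estimate means the exposed hospitals' mortality improved
# relative to the comparison group, consistent with the penalty
# reducing (not raising) death rates.
```

The low-risk hospitals absorb whatever common trends affect everyone, so the remaining difference is more plausibly attributable to the penalty itself.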
Some of these issues also apply to the recent Health Affairs article about readmission rates themselves. The key finding there is that the change in the number of allowed diagnostic codes in 2011 contaminated the trend in risk-adjusted readmission rates during the 2010–2012 period (what they call the “anticipation period”). The authors found less of an effect from their adjustments for coding changes after 2012, during the implementation period. They note that “only during the anticipation period were changes in risk-adjusted readmission rates very sensitive to risk-adjustment method.”
In private correspondence, the authors also confirmed that the readmission rates were statistically lower post-implementation than pre-enactment, though the trends between the conditions and hospitals affected by the penalties were not that different from those not affected. (Also note that it’s unlikely the policy is so impotent that it doesn’t even drive a reduction in readmissions but at the same time is so potent that it causes deaths from the non-existent reduction in readmissions. So if the Health Affairs article is correct, it raises further questions about the increase in mortality attributed to HRRP.)
In addition, suppose the prior trend in readmissions reflected changes in hospital practices rather than patient mix, and that reducing readmissions is easier when the initial rate is high and becomes harder as the rate continues to fall. In that case, using the prior trend as the benchmark for evaluating the HRRP would understate its impact.
Lessons for Future Policy
So what to make of all this? I have five core conclusions:
- The debate is complicated and the New York Times op-ed was unduly alarmist.
- The literature’s use of the period between 2010 and 2012 to study the impact of this policy is problematic and badly needs sensitivity tests. Future research should explore how sensitive the results are to small changes in the boundaries of that anticipation period.
- The weight of the evidence to my reading still suggests some benefit from the policy in reducing readmissions and no harm in mortality, but that conclusion must be tentatively held for now.
- Across the research papers, the heart failure results seem most likely to be associated with potential adverse effects.
- Finally, any future changes to the readmission penalties (whether expanded to other conditions or curtailed for heart failure) should be rolled out first in some areas but not others. That differential rollout would allow a clearer test of the effects, as was the case with bundled payments. A commitment to a time-differentiated rollout of policy changes such as the readmission penalty would produce better-informed policy and fewer debates on Twitter — which seems like an attractive combination.