Hospital and physician rating systems, such as Medicare’s Hospital Compare, U.S. News and World Report’s “Best Hospitals,” and ProPublica’s Surgeon Scorecard, have been variously praised and pilloried by different stakeholders. Proponents of such rating systems argue that patients and families deserve information about the quality of care provided by their hospitals and physicians, even if that information is imperfect. Opponents argue that the information in these rating systems is too incomplete or inaccurate to be meaningful, and thus should not be made public.
Who is right? On which side should health care leaders err?
Let’s take each set of arguments in turn.
There are many reasons we should all be in favor of rating systems. First and foremost, it must be admitted that not all doctors and all hospitals provide equally high-quality care, and identifying the low and high performers is important. It seems hypocritical that doctors freely advise friends and family members about which doctors or hospitals to avoid, but do not want to “betray” their profession by making any of this information public. If a physician would not want his or her own mother to be cared for at a particular hospital, he or she should not want anyone else’s mother to be cared for at that hospital either.
At the extreme low end of the spectrum, there have been numerous recent high-profile court cases of physician malpractice in which patterns of terrible care persisted for many years, causing injuries and unnecessary deaths, before action was finally taken (see, for example, the case of Farid Fata, an oncologist who gave dangerous chemotherapy to patients who did not have cancer, or Aria Sabit, a surgeon who performed dozens of unnecessary back surgeries).
More commonly, albeit less dramatically, some hospitals fail, year upon year, to meet standards on basic metrics such as giving patients aspirin for heart attacks, antibiotics for pneumonia, or blood clot prevention medications after surgery. Hundreds of health services research papers have been written exploring these variations in performance on quality metrics, yet variations persist. Clearly, there is a role for a systematic, user-friendly way to provide patients with information (ideally based on data, not on physician anecdotes, which are subject to bias) about where to seek care. If nothing else, the provider community ought to be able to agree that everyone deserves the opportunity to avoid truly low-quality providers.
It is also important to identify the providers on the opposite end of the spectrum: truly exceptional hospitals with near-foolproof systems of care, or clinicians with exceptional surgical skills, diagnostic acumen, or bedside manner. Clinicians (and patients) may know which hospitals and doctors have good reputations, but the available information doesn’t help patients identify those that truly excel, either. That said, the majority of physicians and hospitals likely fall somewhere in the middle: neither unacceptable nor exceptional, but competent, caring, and trustworthy. Another important role for rating systems is making these clinicians and hospitals visible, so that patients and families can seek care from the highest-quality providers available.
There are also many reasons we should have concerns about rating systems. Chief among these is that our current rating methodologies are so woefully inadequate that not only do they fail to accurately identify high- and low-quality providers, but they may also falsely accuse good providers of being bad ones. As a result, patients who choose physicians and hospitals based on these information sources are likely not improving their chances of receiving high-quality care. They may even be unintentionally avoiding good providers.
The biggest driver of inadequate rating systems is inadequate risk adjustment. When risk adjustment is inadequate, data may show that physicians and hospitals have inappropriately high mortality rates when in reality they are simply serving a particularly sick patient population. They may even be the most highly skilled providers, willing to care for the sickest of the sick. For patients who need these types of highly trained clinicians, a failure of something as seemingly banal as risk adjustment could have terrible consequences if it falsely steers them away from high-quality providers.
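The arithmetic behind this failure mode is simple. A minimal sketch, using hypothetical numbers (the hospital names, rates, and case counts below are invented for illustration): a referral center that treats much sicker patients can look worse than a community hospital on raw mortality, yet better once each hospital's observed deaths are compared with the deaths its case mix would predict (an observed-to-expected, or O/E, ratio, a common device in provider profiling).

```python
# Hypothetical example: raw mortality vs. risk-adjusted (O/E) mortality.
# Each hospital: observed deaths, total cases, and the mean predicted
# (expected) death rate for its patient mix from a risk model.
hospitals = {
    "Community Hospital": {"deaths": 20, "cases": 1000, "expected_rate": 0.025},
    "Referral Center":    {"deaths": 60, "cases": 1000, "expected_rate": 0.080},
}

for name, h in hospitals.items():
    raw = h["deaths"] / h["cases"]        # unadjusted mortality rate
    oe = raw / h["expected_rate"]         # observed / expected ratio
    # O/E below 1.0 means fewer deaths than the case mix predicts.
    print(f"{name}: raw {raw:.1%}, O/E {oe:.2f}")
```

With these invented figures, the referral center's raw mortality (6.0%) is triple the community hospital's (2.0%), but its O/E ratio (0.75) is actually better (0.80), because its patients were expected to fare far worse. A rating built on the raw rate alone would steer patients away from the stronger hospital.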
Other issues with statistical adjustment can negatively affect rating systems, too. Providers that serve low volumes of patients are often reported by current statistical models as being average, even when their performance is poor year upon year. In this scenario, patients may be falsely reassured that a clinician is average when in fact he or she is more likely sub-par. For both reasons, there is some merit to the argument that current rating systems are so flawed that they should not be published.
So, which is the lesser evil or the greater sin? To report or not to report? To pretend we know little when in fact we know much, or to pretend we know much when in fact we know little?
I believe we know enough, and we have an obligation to our profession and our patients to thoughtfully report our performance to the public. We should publicly report performance data where we can, while noting where limitations preclude us from doing so. We should report measures for which we think risk adjustment is adequate, or measures for which risk adjustment is not necessary. We should convey information quickly when we think a provider is unsafe, while giving providers labeled as such an appeals mechanism. And we should identify high performers, learn from them, and spread their best practices.
Finally, health care leaders should continue to push the envelope on our ability to accurately measure quality. The science of quality measurement has made too few leaps forward in the past decade. While innovation abounds in drugs, devices, and wearable technology, it lags in the basic science of performance assessment. Claims data, broadly used for performance evaluation, lack clinical information crucial for risk adjustment. Risk models often perform little better than chance. Reporting frequently lags a year or more behind performance. We can, and should, do better. Ultimately, our patients have a right to make informed decisions on where and from whom to receive their care.