The national quality agenda to improve American health care is moving too slowly. While there are a few bright spots, improvement across much of the health care system is halting and uneven. Where we have seen real progress, it has usually involved quality measurement and accountability grounded in evidence and science. Yet much of the field remains awash with measures and approaches that are unscientific, poorly constructed, and therefore—not surprisingly—unhelpful. One organization, the National Quality Forum (NQF), stands out as central to getting the national quality agenda back on track. It is time for Congress to reauthorize federal funding for NQF.
NQF was founded in 1999 on the recommendation of President Clinton’s Advisory Commission on Consumer Protection and Quality in the Healthcare Industry “to promote and ensure patient protections and healthcare quality through measurement and public reporting.” As a nonprofit membership organization led by a 10-person executive team, NQF reviews and endorses quality measures for use in private- and public-sector health care. Over the past two decades, NQF has developed a reputation for objective, scientifically based analysis of quality measurement. Its funding comes primarily from public dollars (nearly 70%), with private grants and membership dues making up the other 30%.
Health care providers now need NQF to provide vital direction on the excess of quality measures currently in use. Because government agencies, specialty organizations, and for-profit entities have all taken up the call for quality measurement, a typical U.S. hospital now collects and reports on dozens of different measurement schemes intended to evaluate its quality of care.
Multiple measurers are not necessarily a bad thing; given the complexity of assessing what constitutes a good hospital, one could imagine different organizations emphasizing different areas—some might focus on patient safety, others on effectiveness or patient experience. But emerging evidence shows little correlation between how organizations fare across different measurement schemes. The reason is not simply that measurers focus on different aspects of quality. The real problem is the large variation in scientific rigor with which measurement organizations approach the assessment of quality.
Poor quality measures have consequences. When the field is littered with scientifically unsound measures—and most hospitals end up with high ratings under at least one measurement scheme—it becomes difficult for consumers to choose, for payers to push for accountability, and for provider organizations to be motivated to improve. Examples of scientifically unsound rating schemes abound. In 2013, for instance, Consumer Reports published ratings for surgical care, which ranked hospitals by rates of mortality and complications across 27 common, scheduled surgeries. The study calculated that the best hospitals (those with the lowest mortality and complications) in Massachusetts were Carney Hospital in Boston and Cooley Dickinson Hospital in Northampton, and that Massachusetts General Hospital (MGH) and Brigham and Women’s Hospital—two of the largest and most touted hospitals in the country—ranked among the worst.
How did Consumer Reports (which has built a good reputation over the years for its consumer product ratings) arrive at such a ranking? The editors chose not to credit surgical outcomes to the hospitals where the surgery took place; instead, they attributed the outcomes of transferred cases to the receiving hospitals. This flawed approach penalized hospitals (typically academic medical centers) that receive transferred patients with complications, and rewarded hospitals (such as community hospitals) that transferred those complex patients away. After years of criticism, Consumer Reports eventually changed its approach, but not before damage was done. The original evaluation gave many Massachusetts policymakers the impression that community hospitals were as good as or better than academic medical centers. There is little doubt that if NQF had evaluated this measure, it would never have been endorsed.
Methodologic problems occur not just in building individual measures but also when ratings organizations try to combine them into a summary score. Last year, the Centers for Medicare & Medicaid Services created an Overall Hospital Quality Star Rating system with substantial methodologic shortcomings, including the fact that hospitals reporting on fewer quality measures earn more stars. The way the star ratings are constructed tilts the field toward smaller, specialty centers that care for healthier patients and provide a narrow band of services, meaning that organizations offering care to the most complex patients do not rate as highly. Had the star rating system undergone a thorough vetting by NQF, these problems likely would have been detected.
The cost of scientifically unsound measures is high. Since measurement errors are often obvious only to experts, most consumers and journalists are not able to differentiate between good and bad measures. Furthermore, poor quality measures allow poor-performing organizations to escape scrutiny by pointing to high performance in some publicly available metric. Finally, poor quality measures hamper the ability of policymakers and payers to drive improvements through programs that link financial incentives to outcomes. Yet currently only 50% of federal measures are endorsed by NQF, and even fewer at the state level and in the private sector.
NQF was created with a simple goal: to ensure that our national quality strategy includes measurement built on a rigorous, scientific approach. While reasonable people can disagree on which quality measures are most important—for example, are readmissions as important as mortality?—we need an approach that ensures everyone is at least measuring readmissions and mortality in the same way. Not only does NQF have the capability and the reputation to do this well, but the organization has received substantial investment over the past two decades to build this expertise. If NQF were to disappear tomorrow, we would simply need to re-create it; no other entity is currently capable of taking on this role. This is why Congress needs to reauthorize the $30 million needed annually to support NQF’s mission of ensuring that scientific principles drive the approval of quality measures.
To improve health care, we need quality measures that are valid and reliable. The marketplace has responded by creating a plethora of measures, many of which are deeply flawed and possibly harmful. The public and private budgets bear the financial cost of unnecessary and harmful care, while patients continue to have little ability to know if or when they are getting the quality of care they deserve. Without NQF, policymakers will continue to struggle to ensure that we are rewarding high performers and pushing others to improve.