Care Redesign

Why Cleveland Clinic Shares Its Outcomes Data with the World

Article · February 29, 2016

Cleveland Clinic has a long history of measuring and reporting data on health outcomes, most famously in our Outcomes Books, the yearly reports on how patients treated in our different departments fare. We’ve realized that you cannot improve something if you don’t measure it and share what you find — so in that vein, I’d like to share some of our experiences in building this system at Cleveland Clinic.

We began tracking clinical outcomes for cardiac patients in 1979, and we have been using such data to facilitate accountability and learning since 1989. In 1998, we began publishing and distributing that data to referring physicians. In 2004, CEO Dr. Toby Cosgrove extended the expectation of measuring and publicly reporting outcomes to other clinical areas. This eventually produced what are now called the Cleveland Clinic Outcomes Books: 14 in all, published annually and publicly available online.

As the chair of the Outcomes Books editorial board, I work with my fellow board members to make the principle of transparency a practical reality and a worldwide source of learning. What we achieve at Cleveland Clinic is obviously not perfect, but allowing all comers to see what we do helps everyone, including us, get better. Indeed, the chief purpose of the Outcomes Books is to be a catalyst for quality improvement in patient care and outcomes. Annually reporting our results, whether good or bad, motivates us to improve them.

Our secondary purpose is to inform medical decision making — specifically, to communicate to a clinician what to expect when referring a patient to Cleveland Clinic for a particular condition. For example, our outcomes data for patients who undergo radical prostatectomy for clinically localized prostate cancer show that the rate of survival without biochemical relapse at five years is 94%. Notably, the purpose is simply to inform the clinical decision maker, not to drive referrals to Cleveland Clinic.
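As an illustration of how a figure like that five-year rate can be estimated from follow-up data, here is a minimal Kaplan–Meier sketch in Python. It is not Cleveland Clinic's actual methodology, and the patient durations and relapse flags are hypothetical; it only shows how censored follow-up (patients last seen relapse-free before the five-year mark) enters such an estimate.

```python
from collections import Counter

def km_survival(durations, events, horizon):
    """Kaplan-Meier estimate of the probability of remaining event-free
    through `horizon` (here, months), allowing for censored follow-up.

    durations : months to relapse or to last relapse-free contact
    events    : 1 if relapse was observed at that time, 0 if censored
    """
    counts_at = Counter(durations)                 # everyone who leaves the risk set at time t
    events_at = Counter(t for t, e in zip(durations, events) if e)

    at_risk = len(durations)
    survival = 1.0
    for t in sorted(counts_at):
        if t > horizon:
            break
        d = events_at.get(t, 0)
        if d:
            survival *= 1.0 - d / at_risk          # conditional probability of surviving past t
        at_risk -= counts_at[t]                    # events and censorings leave the risk set
    return survival

# Hypothetical cohort: one relapse at 36 months, 15 patients relapse-free at 60 months.
months  = [36] + [60] * 15
relapse = [1]  + [0] * 15
print(f"Estimated 5-year relapse-free survival: {km_survival(months, relapse, 60):.0%}")
# -> Estimated 5-year relapse-free survival: 94%
```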

In fact, we obligate ourselves to report all useful outcomes that we measure, regardless of how we look relative to our peers and regardless of what we are required to publicly report. Sometimes clinicians are surprised by how good we are — or startled that we do not perform better than we do. When possible, we compare ourselves with recognized benchmarks or simply with ourselves over time. Even if we cannot observe a meaningful trend or benchmark comparison, we report our outcomes anyway.

How do we guard against cherry-picking outcomes? Each book is reviewed by the Outcomes Books editorial board — a group of 18 volunteers, primarily physicians representing the 14 clinical institutes, plus a few statisticians. This group helps to identify missing outcomes that should be reported, as well as outcomes that are unclearly presented or poorly measured.

We have been annually producing our Outcomes Books for more than a decade, but we certainly have room for improvement. Sometimes, for particular treatment–condition combinations, we have only volume or process measures. In those cases, we report what we have, with an eye toward better measurement next time. In consultation with Cleveland Clinic’s Quantitative Health Sciences Department, we always look for better data sources and methods of analysis.

Nonetheless, it is challenging to accurately measure all outcomes that are of interest to clinicians. Ideally, data for many measures should come directly from the patient (for example, the severity of hip-related pain one year after a hip-replacement operation). But if patients do not return to Cleveland Clinic for follow-up care, getting that information is not easy.

We do our utmost to grow the number of reported outcomes by challenging the 14 institutes to measure more outcomes as best they can — and to document their progress in a yearly improvement report. Many of the institutes have taken up that challenge.

For health care institutions that want to emulate our outcomes reporting, here are some suggestions:

  1. Identify the target audience, because the audience shapes the reporting. We have chosen peer physicians, but one could argue the case for patients, employers, or even commercial insurers as target audiences. Whatever audience you choose, make the choice early — it will greatly affect how you present the data, the language you use, and the look and feel of your published products.
  2. Recognize that most measures of substantial interest are the long-term ones, so they are likely to take years to collect properly. Fortunately, we were a very early adopter of an electronic health record system, which has greatly facilitated some of our measurement. However, organizations just starting down this road may have a very limited number of outcomes available for reporting, which could be discouraging, although specialties that report to national registries can begin with those data. Wherever you are now in this process, have hope: The many pages of measures that Cleveland Clinic reports across our 14 books have grown considerably, even though we started relatively small.
  3. Accept that reporting outcomes requires resources, and plan to fund and support your effort. Data collection, preparation, analysis, and reporting all take time and effort from many people. If the top leadership of your organization supports the effort to report outcomes, it is much more likely to be sustainable.

I recognize that this advice may not work for every institution, given the wide variation in size, location, clinical population, resources, and so on. My modest hope is that our work at Cleveland Clinic will help you tailor your own outcomes-reporting program so that it serves you and, ultimately, the patients cared for at your institution. In an ideal world, a fully informed consumer and his or her physician could compare expected health outcomes and costs — the total value of care — across all of the institutions where such care is provided.
