
Making Physician Feedback Reports More Effective

Article · January 3, 2018

A growing number and variety of health care organizations are developing physician feedback reports with the goal of making physicians aware of their individual performance on one or more metrics, relative to their peers or a normative standard, in order to support improvements in health care.

According to one survey, 80% of pediatricians have received at least one externally produced feedback report. We also know that 50% of noncommercial accountable care organizations (ACOs) and 63% of commercial ACOs produce feedback reports at the individual physician level and that the Centers for Medicare & Medicaid Services produces a Quality and Resource Use Report for group practices and solo practitioners. Systems, hospitals, and physician practices also produce internal feedback reports for affiliated physicians; one survey found that 75% of physicians are part of a system or provider organization that produces a feedback report.

Despite this proliferation of feedback, there is no clear sign that it has benefited practice. Evidence indicates that feedback reports can improve performance, but the fact that they can work does not mean they always do. Perhaps redesigned reports would convey more clearly to physicians what is needed to improve their performance.

What Do We Know About How to Build a Physician Feedback Report That Works?

Fortunately, we know something about evidence-based report design features that promote physician engagement, performance awareness, and, ultimately, performance improvement. A report developed by one of the Agency for Healthcare Research and Quality's (AHRQ's) EvidenceNOW grantees, North Carolina's Heart Health Now!, and its partner, Community Care of North Carolina (CCNC), illustrates evidence-based design features and serves as a model for other report developers.

While the model focuses on one clinical area (i.e., heart health in primary care settings), feedback reporting systems can be constructed around any set of measures, including a broad menu of clinical areas (e.g., preventive services, high-volume surgeries). Although we focus on physicians in the present discussion, the principles can be applied to feedback reports for any clinician, including nurse practitioners, physician assistants, and others. Feedback reports are an ideal vehicle to begin conversations with an entire care team.

Physician feedback reports are usually more effective when:

  • The targeted clinical measure can be influenced by changes in physician behavior. It is important for the physician to have control over one or more activities that positively or negatively influence the reported measure. For example, providing feedback about overall hospital performance may not be relevant or useful to individual physicians who work in only one unit of the hospital. The measures tracked in the CCNC report (e.g., aspirin therapy) are those that can be controlled or influenced by the physician (Figure 1).
Figure 1. Clinical Measures Amenable to Physician Action. Screenshot showing some of the practice measures that are tracked in the CCNC report.

Physician feedback reports are usually more effective when:

  • The physician understands that the targeted clinical measure or suite of measures is important. This understanding requires, at a minimum, that there is sufficient evidence to inform and compel the underlying clinical rationale. The associations between the suite of measures tracked in CCNC’s model report (e.g., aspirin therapy) and high-quality care are well supported by evidence. Including links in the report to the clinically relevant evidence undergirding the measures facilitates review by interested physicians and builds report credibility (Figure 2).
Figure 2. Embedded Links to Clinically Relevant Evidence. Screenshot showing links in the report to the underlying clinical evidence associated with the measures.

Physician feedback reports are usually more effective when:

  • The report gives physicians flexibility to tailor output to their needs. Some physicians may want to see how their individual performance compares with a specific reference group of particular interest. Others may be interested in tracking only a subset of featured measures. Still others may want to excerpt data to use themselves, to share with others, or to incorporate into slide presentations. The easier it is for physicians to adapt feedback reports to meet their own needs, the more likely it is that the reports will succeed as tools for improvement. The CCNC report displays the menu from which physicians can select their output of interest, including measures aggregated at the network, organization, practice, or individual provider level (Figure 3, left).
Figure 3. Options for Tailored Output and Capacity to View Patient-Level Data. Screenshot showing the menu that allows physicians to select output of interest, such as measure scores for one or more reference groups or the patient-specific scores that make up the physician's aggregated score, along with patient-level data.

Physician feedback reports are usually more effective when:

  • Actual performance is displayed alongside one or more comparators. To make sense of their performance scores, physicians need some way to answer the question, “compared to what?” A frequently used comparator is the national, regional, or practice average. Another type of comparator is a benchmark, which connotes a normative standard. The CCNC report compares a physician’s performance to averages for the practice and for the organization (Figure 4, top right).
  • Goals are set for the target performance or behavior. Ideal goals are specific, measurable, achievable, relevant, and time-bound. They can be linked to an average or benchmark measure of performance or can be expressed as a level of improvement (e.g., “screening rates to improve by 10%”). In the model report, the performance goal (80% of patients with blood pressure under control) is clearly displayed (Figure 4); a simple worked comparison of a measure rate against peer averages and such a goal is sketched after Figure 4.
  • The format facilitates correct interpretation and highlights important patterns in performance. The use of graphics, formatting, and explanatory text can help to (1) communicate instances in which a physician’s performance is significantly different from the performance goal, and (2) show whether and how performance is changing over time. The model report shows not only how current performance compares with the goal but also how performance has changed over time, with the time periods clearly marked (Figure 4, bottom right).
Figure 4. Inclusion of Peer Comparators, Trend, and Performance Goal. Screenshot showing the organization, practice, and physician data in relation to the performance goal, as well as the physician’s trend data in relation to that goal.
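
To show how little arithmetic sits behind such a display, the sketch below computes a physician’s measure rate and places it beside practice and organization averages and an explicit goal. It is a minimal illustration under assumed data: the record layout, the names, and the 80% goal are taken from the example in the text, not from the CCNC implementation.

```python
# Minimal sketch (illustrative only): compute a physician's measure rate and
# compare it with peer averages and a performance goal. Records and names
# are hypothetical, not the CCNC report's actual data model.

# Each record: (physician_id, practice_id, measure_met) for one attributed patient.
records = [
    ("dr_a", "practice_1", True),
    ("dr_a", "practice_1", False),
    ("dr_a", "practice_1", True),
    ("dr_b", "practice_1", True),
    ("dr_b", "practice_1", True),
    ("dr_c", "practice_2", False),
]

GOAL = 0.80  # e.g., 80% of patients with blood pressure under control


def rate(rows):
    """Share of attributed patients meeting the measure among the given rows."""
    return sum(r[2] for r in rows) / len(rows) if rows else None


physician_rate = rate([r for r in records if r[0] == "dr_a"])
practice_rate = rate([r for r in records if r[1] == "practice_1"])
organization_rate = rate(records)  # every record in the organization

print(f"Physician rate:    {physician_rate:.0%}")
print(f"Practice average:  {practice_rate:.0%}")
print(f"Organization avg.: {organization_rate:.0%}")
print(f"Goal ({GOAL:.0%}) met: {physician_rate >= GOAL}")
```

A production report would draw these rates from the organization’s measure engine and plot them for successive reporting periods, as in Figure 4, but the underlying comparison logic is no more complicated than this.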

Physician feedback reports are usually more effective when:

  • Patient-level data can be accessed easily. The model report displays patient-specific data (e.g., LDL and HDL) (Figure 3).
  • Goal achievement is facilitated. If physicians are shown that their scores are low without guidance on how to improve them, frustration may displace motivation to improve. Reports can counter this tendency, for example, by enabling physicians to generate a list of their patients who did not meet the performance goal, as illustrated in the model report (Figure 3) and sketched in the example after this list. Such a list might identify a subset of patients (e.g., those whose blood pressure is >140/90) who can serve as a focus for improvement.
  • Feedback is anchored in the organization’s quality improvement infrastructure. It’s important for feedback reporting to have a home within the quality improvement infrastructure rather than being an isolated element that has to compete for physicians’ attention. Anchoring feedback within the overall quality improvement infrastructure will support report credibility among physicians, increase the likelihood of sufficient funding with dedicated resources, and mitigate unnecessary duplication of effort in terms of measurement and reporting. CCNC developed this model feedback report as part of its role in providing a quality improvement infrastructure for primary care practices in the state.
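
To make the focus-list idea concrete, the sketch below shows the kind of filtering a report back end might perform to flag patients whose blood pressure exceeds 140/90. The record layout, field names, and values are illustrative assumptions, not the schema of the CCNC report.

```python
# Minimal sketch (illustrative only): build a focus list of patients whose
# blood pressure exceeds 140/90. Field names and records are hypothetical.

patients = [
    {"id": "pt-001", "systolic": 152, "diastolic": 94},
    {"id": "pt-002", "systolic": 128, "diastolic": 82},
    {"id": "pt-003", "systolic": 145, "diastolic": 88},
]

SYSTOLIC_LIMIT = 140
DIASTOLIC_LIMIT = 90

# A patient above either threshold has not met the blood pressure goal and
# joins the physician's improvement focus list.
focus_list = [
    p for p in patients
    if p["systolic"] > SYSTOLIC_LIMIT or p["diastolic"] > DIASTOLIC_LIMIT
]

for p in focus_list:
    print(f'{p["id"]}: {p["systolic"]}/{p["diastolic"]} mmHg')
```

The same filter would run against the patient-level extract already used to score the measure, so the focus list comes essentially for free once patient-level data are accessible.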

Dueling Feedback Reports

With the growth in the number and variety of organizations that develop feedback reports comes a significant challenge to the broader enterprise of feedback reporting and performance improvement. Physicians may receive multiple reports from different developers, such as their health care system or medical group, an affiliated ACO, their professional society, the Medicare program, the state Medicaid agency, a regional health care improvement collaborative, and the different health plans with which they contract.

There is no guarantee that reports produced by different developers will be aligned in focus or measure specification. The phenomenon of dueling feedback reports may diminish the visibility — and importance — of any single report, and, in the case of conflicting scores, may create confusion and undermine the credibility of feedback reporting.

Where Do We Go from Here?

It’s hard to imagine a successful quality improvement effort that is not anchored in valid and reliable performance data that are presented to physicians in a way that engages them. Physicians’ awareness of their own performance builds a critical and necessary foundation for improvement. To increase the effectiveness of feedback reports in driving performance improvement, however, more needs to be done on multiple fronts. More needs to be done by report developers, who should use available evidence in designing their reports and should test prototypes with a representative group of their physician audience. More needs to be done by the research community, which should actively synthesize findings about what works and, just as importantly, what doesn’t work, and should make findings available and understandable to report developers. Finally, more needs to be done by the funders of research to prioritize studies that will collectively advance the science of feedback reporting.

 

The findings and conclusions are those of the authors, who are responsible for the content, and do not necessarily represent the views of AHRQ. No statement in this article should be construed as an official position of AHRQ or the U.S. Department of Health and Human Services.
