Coach, Don’t Just Teach

Article · January 17, 2019

Effective clinician-patient communication is essential for high-quality care and is linked to better patient adherence and greater satisfaction for both patients and clinicians. Direct one-on-one coaching has the potential to improve clinician-patient communication as well as clinician and patient satisfaction compared with other techniques commonly used. We tested its effectiveness in a randomized controlled trial of 62 clinicians at Duke University School of Medicine.

High-quality care depends on effective communication between clinicians and patients. Effective communication comprises several components of patient-centered communication,1 including exchanging information, enabling patient self-management, and managing emotions.2

Robust evidence links effective communication to important patient outcomes, such as better adherence to instructions, greater satisfaction,3 and fewer malpractice suits.4

Further, evidence shows links between communication and clinician satisfaction. Satisfied clinicians are less likely to depart from an already understaffed workforce and make fewer medical errors.5

Despite its importance, clinician communication until recently has not been formally taught6 in medical school or residency, though such training is becoming more widespread.7 Most attending physicians and advanced care practitioners rely on their natural aptitude for communication, which varies widely, along with their experience and perhaps some continuing education.

But there is room to grow. Clinicians can try to improve their communication skills via interventions like continuing medical education lectures and online materials, but these activities tend to focus on delivering information rather than on developing skills and are usually not potent enough to change behaviors.

Strategies with some impact on communication include face-to-face courses8 and interactive computer programs.9 Those with the greatest impact share two critical components of teaching communication: allowing clinicians to practice effective communication techniques and providing them with tailored feedback. Only with observed performance and feedback do clinicians get an accurate sense of their own behavior,10 what they already do well, and what they need to improve. Thus far, however, many programs that have improved clinician communication have not translated those gains into significant improvements in patient outcomes.

A promising way to provide feedback that might also improve patient and clinician satisfaction is via communication coaching: shadowing clinicians and giving feedback. We conducted one study11 among 29 clinicians in two outpatient clinics showing that communication coaching improves skills, as well as patient and clinician satisfaction. The study we describe in the current article extends these findings to determine the impact of communication coaching on patient satisfaction, communication skills among inpatient (hospitalist) and outpatient (oncology) clinicians, and clinician satisfaction. We hypothesized that the coaching intervention would lead to improvement in each of these outcomes, compared with a control arm that received no instruction on improving communication.


For the past 5 years at Duke University School of Medicine, the coach for this study, who is one of its authors (Kathryn Pollak), has worked with nearly 100 clinicians in oncology, internal medicine, family medicine, pediatrics, hospital medicine, endocrinology, palliative care, and surgery. The objective is to provide a more personal method of teaching communication, and one that does not consume much of clinicians’ valuable time.

Trial Design

Using a protocol approved by the Duke University School of Medicine Institutional Review Board, the study team asked two divisions within the Department of Medicine to participate: Medical Oncology and Hospital Medicine. Both divisions are large (51 oncologists and 54 hospitalists) and treat patients with complex medical and psychosocial problems who have significant communication needs, and they allowed us to conduct the study in both an inpatient setting (hospitalists) and an outpatient setting (oncologists).

The Division Chiefs approved the study and, crucially, agreed to participate themselves. They emailed clinicians in their division to inform them of the study details and of their own intent to enroll as participants. The email instructed those who did not want to participate to opt out by emailing their refusal. The coach, Dr. Pollak, then contacted clinicians who did not opt out to obtain written consent. She emphasized that participating in the study would help them improve their skills without taking a lot of time: in total, oncologists would spend about 2 hours each with the coach, and hospitalists about 4 hours each. Dr. Pollak found that some clinicians were apprehensive about having someone watch and critique their communication, as many had not received communication feedback since completing their training. She assured them that the critique was intended to help rather than evaluate and that she would include feedback on what they were doing right as well as areas for improvement. Of the clinicians approached, about 80% in each division agreed to participate. Those who declined felt they did not see enough patients, were leaving the practice, did not have enough time, or were too new to Duke.

Study staff then emailed consenting participants a link to a survey that assessed clinician characteristics. Once clinicians completed the survey, study staff used a computer-generated randomization program to create intervention and control arms (Table 1), and emailed clinicians to inform them of their study arm assignment. For each division, study staff randomized half the clinicians to the intervention arm and the other half to a wait-list control arm in which they were first in the control arm and could choose to receive the intervention at the end of the study. (Of those in the wait-list control arm, six chose to receive the later coaching.) Those in the intervention arm were “paired” with a control arm clinician to ensure the same amount of time in between baseline and follow-up surveys.
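
The arm assignment and pairing described above can be sketched as follows. This is illustrative only: the study used its own computer-generated randomization program, and the function name, seed, and clinician labels here are assumptions.

```python
import random

def randomize_by_division(clinicians, seed=2019):
    """Split each division's clinicians 1:1 into intervention and
    wait-list control arms, then pair each intervention clinician
    with a control clinician so both members of a pair follow the
    same baseline/follow-up survey timeline."""
    rng = random.Random(seed)
    arms = {"intervention": [], "control": []}
    for division, members in clinicians.items():
        shuffled = members[:]          # do not mutate the input roster
        rng.shuffle(shuffled)
        half = len(shuffled) // 2      # 1:1 split within each division
        arms["intervention"] += shuffled[:half]
        arms["control"] += shuffled[half:]
    # Pair intervention and control clinicians for matched timing.
    pairs = list(zip(arms["intervention"], arms["control"]))
    return arms, pairs

arms, pairs = randomize_by_division({
    "oncology": ["onc1", "onc2", "onc3", "onc4"],
    "hospital_medicine": ["hosp1", "hosp2", "hosp3", "hosp4"],
})
```

Randomizing within each division keeps the specialties balanced across arms, which matters here because the two settings differ in both intervention dose and satisfaction measures.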

Table 1. Clinician Characteristics (n=62) of Duke Coaching Communication Skills Study.


A central tenet of adult learning pedagogy is that learners acquire skills most effectively when they focus on a few discrete skills at a time, while also receiving positive feedback for the skills they have mastered. Therefore, the coaching provided both skill improvement advice and positive feedback on the skills clinicians did well. Dr. Pollak delivered the coaching intervention in three steps:

Step 1

A 1-hour one-on-one meeting with each intervention-arm clinician to discuss challenging patient encounters and effective communication techniques. She did not enter the coaching trial with a specific skill she wanted to teach, but had many skills in her “toolkit” to address each clinician’s desired area of improvement. In Step 1, most clinicians reported that their most challenging conversations involved patient emotion. For instance, they reported struggling when patients seem to have unrealistic expectations or when they claim that they have not been fully informed of the course of treatment or side effects. The clinicians also discussed challenges with delivering serious news (e.g., test results or scans indicating progression of disease), which is an inherently emotion-laden conversation.

Step 2

The second step varied depending on whether the participant was an oncologist or a hospitalist.

Hospitalists. After Step 1, Dr. Pollak set up a time to shadow the hospitalist for 2 hours, observing two to three encounters. For each encounter, the clinician asked the patient and any caregivers or family members present to provide oral consent for the coach to be present. The clinician told patients the focus was on the clinician’s behavior, not the patient’s. The coach typed and coded communication behaviors as the hospitalist talked with the patient/caregivers (e.g., noting when the clinician reflected back what the patient said, demonstrating active listening skills). She provided immediate feedback to clinicians in between patients, lasting a minute or two. After the session, the coach emailed clinicians their coded transcripts as well as a summary of the feedback describing what they did well and what could be “tweaked.”

Oncologists. After Step 1, the coach asked clinicians to audio-record two of their more difficult encounters, where coaching would potentially benefit them the most. The coach provided a recorder and showed them how to operate it. For each encounter, clinicians asked the patient for oral consent to record the encounter. Often, the coach needed to remind clinicians via email or text to record their encounters. The coach retrieved the recorder once the clinician had recorded two encounters, sent the files to be transcribed, and then reviewed the transcriptions while listening to the recordings. She then emailed clinicians their transcripts and scheduled a time to meet individually for 30 minutes to review the transcribed encounters. After the session, the coach emailed clinicians a summary of the things they handled well in the transcribed encounters and areas for improvement.

Step 3 (Both Specialties)

When the hospitalist was on service again (ranging from 2 weeks to 5 months), the coach shadowed a second time, reminding clinicians of the feedback given. When the oncology clinician audio-recorded two more encounters, the coach again listened to the audio recordings with the transcripts and provided feedback. The coach once more emailed transcripts and a summary of the feedback.


The emailed feedback for both specialties might look like this:

Super job, [name of clinician]! You are doing so many things well. Just a summary of our work together.

These are the fabulous things you do!

  1. Responding empathically when you see negative emotions.
  2. Making reflective statements to show you are listening.
  3. Asking open-ended questions.
  4. Praising patients and noting their strengths.
  5. Establishing rapport and meeting your patients where they are.
  6. Letting them talk without interrupting.
  7. Asking clarifying questions.
  8. Supporting their autonomy and their right to set their own goals.
  9. Asking permission before giving advice and information.

Things you can tweak:

  1. Name reluctance with an empathic statement followed by a clarifying question.
  2. Add words like “unfortunately” and “I wish things were different” when giving serious news. Also have a segue to serious topics with an empathic statement.

It’s been a joy working with you.

Recognizing and Responding to Emotion

In all steps of the coaching, Dr. Pollak taught two skills that are core to patient-centered communication and also identified by clinicians as areas in which they needed help: recognizing patient emotion and responding to negative emotion. The coach instructed clinicians how to identify negative emotions even when patients were not expressing them directly. For example, anxiety often prompts patients or caregivers to rapidly “pepper” clinicians with questions,12 and many questions that initially appear as medical actually represent negative emotions. “Are the tumors getting bigger?” represents fear or anxiety, and needs a response that addresses that fear, rather than a factual response about how many millimeters the tumors are currently.

Then, the coach gave clinicians suggested scripts for addressing patient emotion, with an emphasis on naming the emotion (e.g., “It might be scary to hear this news” or “I can see this news has made you sad”) and using wish statements (e.g., “I wish things were different” or “I wish I didn’t have to say this”). In the two examples of emotion noted above, the coach taught clinicians to refrain from simply answering the questions and instead to pause, name the emotion, and explore concerns (e.g., “You seem to be worried. What are your biggest concerns?”). Previous research showed that these ways of communicating improved patient trust in a randomized trial of oncologists conducted by the coach and her colleagues.9

The coach also taught clinicians to address all negative emotions immediately to help patients and caregivers feel heard right away. When clinicians wait too long, the patient or caregiver can feel the emotion is unresolved and will continue to express it, often indirectly, in hopes that the clinician will respond empathically.12 This repeated attempt to get a response can frustrate patients, caregivers, and clinicians alike. Moreover, patients or caregivers enveloped in negative emotion may not be able to fully comprehend the clinical information being discussed.


We assessed clinician self-reported age, gender, race, ethnicity, years since medical/physician assistant/nursing school, and prior communication training to describe the sample.

Patient Satisfaction

For clinicians in both arms, we assessed patient satisfaction using the Press Ganey questionnaire both before and after the study. The Press Ganey is a survey used routinely throughout Duke Health System that has been found to be reliable and valid.13 We considered attempting to survey the specific patients clinicians saw during the coaching; however, doing so would require obtaining written consent from patients, which we deemed logistically impractical. Instead, we used the standard patient satisfaction scale assessed among patients who had an encounter with a study clinician during the study time frame. For a sample of encounters, both inpatient and outpatient, Press Ganey sends patients a survey to rate their encounter. We abstracted Press Ganey ratings from patients cared for by all participating clinicians during the 3 months prior to the intervention and the 3 months following completion of the intervention. Not all clinicians were on service in those time frames or had data to abstract. Sample items include how well clinicians explained things, how well they listened, and how courteous they were.

Responses were categorized by percentages of patients who gave their providers a “9” or “10” on a 10-point scale where 10 is the best rating, or a rating of “Always” when patients’ options were “Always,” “Usually,” “Sometimes,” and “Never.” Each clinician receives a summary score for all patients seen during the study time frame; thus, each clinician only has one score. Table 2 and Figure 1 show that Press Ganey scores improved consistently across most of the domains for patients seen by intervention clinicians and worsened for patients seen by control clinicians, leading to an average difference between intervention and control of 11% across all domains. Hospitalists and oncologists are listed separately because the survey questions differ slightly in the two settings and because ceiling effects among oncologists (discussed in more detail below) might mask differences among hospitalists.
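
The top-box scoring rule described above (a “9” or “10” on the 10-point items, or “Always” on the frequency items, pooled into one summary score per clinician) can be sketched as a small helper; the function name and the sample responses are illustrative assumptions, not the actual Press Ganey scoring code.

```python
def top_box_percentage(responses):
    """Percentage of a clinician's pooled patient responses in the
    highest bracket: a 9 or 10 on the 10-point items, or "Always"
    on the Always/Usually/Sometimes/Never items."""
    favorable = sum(1 for r in responses if r in (9, 10, "Always"))
    return round(100 * favorable / len(responses), 1)

# One clinician's pooled responses across surveyed encounters;
# 4 of these 6 responses are top-box.
score = top_box_percentage([10, 9, 8, "Always", "Usually", 10])
```

Because each clinician contributes a single pooled percentage rather than per-patient scores, the arm comparison rests on differences between these summary scores, which is part of why inferential statistics were limited.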

Table 2. Patient Satisfaction Ratings by Arm and Subspecialty, Pre- and Post-Intervention (n=55 Clinicians), Duke Coaching Communication Skills Study.

Among hospitalists, baseline favorable Press Ganey scores for clinicians ranged from 65% to 89%.

Among oncologists, baseline favorable Press Ganey scores ranged from 89% to 98% (reflecting the ceiling effect mentioned above). We did not see changes among patients seen by intervention and control oncology clinicians.

Figure 1. Percent in Highest Bracket in Patient Satisfaction Scores: Pre-Post Arm Differences for Hospitalists, Duke Coaching Communication Skills Study.

We saw the biggest differences among hospitalists’ patients with regard to “treating with courtesy and respect” (16% difference between arms) and overall ratings of the hospital (18%) and communication with the clinician (10%). The only difference found between patients seen by oncologists in the intervention versus control arms was the overall rating of the oncologist (6% difference).

Clinician Communication Skills

Because the coach only shadowed or audio-recorded intervention clinicians, we have objective data on communication skills in the intervention arm only. This assessment was not feasible in the control arm, as we did not audio-record or shadow encounters for control clinicians to avoid contamination. Among all intervention clinicians, we objectively assessed communication skills by coding encounters with regard to response to patient negative emotion, using Suchman’s definition of empathic opportunities and responses.14 We defined empathic opportunities as patients’ expressions of negative emotions (e.g., “Oh no. I was hoping you would not say that” or “I’m really worried the cancer has come back”). We coded clinician responses as empathic “continuers” or “terminators,” based on whether they encouraged further discussion or tended to close it off. Continuers included five specific behaviors organized under the mnemonic “NURSE”: Name, Understand, Respect, Support, and Explore.15–17 Terminators comprised all other responses, that is, those containing none of the NURSE behaviors. We created an “empathy ratio” in which the denominator was all opportunities to respond empathically and the numerator was empathic responses.
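
The empathy ratio defined above can be sketched as follows. This is a simplified illustration of the coding scheme, not the study’s actual coding software; the label names are assumptions.

```python
# The five NURSE continuer behaviors named in the coding scheme.
NURSE = {"name", "understand", "respect", "support", "explore"}

def empathy_ratio(coded_responses):
    """coded_responses holds one label per empathic opportunity
    (i.e., per patient expression of negative emotion). A response
    counts as a continuer if it is a NURSE behavior; anything else
    is a terminator. The ratio is continuers / all opportunities."""
    continuers = sum(1 for code in coded_responses if code in NURSE)
    return continuers / len(coded_responses)

# Five opportunities, three answered with a NURSE behavior:
ratio = empathy_ratio(["name", "terminator", "explore", "support", "terminator"])
```

Comparing this ratio across a clinician’s pre- and post-intervention encounters yields the improvement reported in the next paragraph.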

Clinicians from both specialties assigned to the intervention arm showed objective improvement in their responses to negative emotions. Compared with the rate at which they responded empathically to negative emotion in pre-intervention encounters (49%), clinicians had a higher rate in post-intervention encounters (66%), an absolute improvement of 17 percentage points. This improvement was slightly larger among oncologists (20 percentage points) than among hospitalists (14 percentage points; see Figure 2).

Figure 2. Pre and Post Percentages of Clinician Empathic Responses When Patients Expressed Negative Emotion, Duke Coaching Communication Skills Study.

Clinician Satisfaction

For clinicians in both arms, we assessed clinician satisfaction pre- and post-intervention using the Maslach Burnout Inventory,18 which has three subscales: emotional exhaustion, depersonalization, and personal accomplishment. Sample items read, “I feel fatigued when I get up in the morning,” “I’ve become more callous toward people since I took this job,” and “I feel I’m positively influencing other people’s lives through my work.” We did not find arm differences in clinician satisfaction scores (Table 3). There was little change in clinician satisfaction in either arm, although all small differences favored the intervention arm.

Table 3. Clinician Satisfaction by Arm (Maslach Burnout Inventory) (n=62), Duke Coaching Communication Skills Study.

Clinician Ratings of Intervention

Finally, we assessed intervention clinicians’ perceptions of how the intervention affected them: whether they changed clinical practice as a result of coaching, whether coaching was worth their time, and whether they would recommend coaching to a colleague.

Clinicians in the intervention arm rated coaching very highly (Table 4). Most (90%) reported that they had made changes in their clinical practice as a result of coaching. They reported that the coaching made them more effective, that it would assist with challenging conversations, that the coaching represented effective communication, and that it was worth their time. Most would recommend it to a colleague.

Table 4. Clinician Evaluation of Intervention (n=30), Duke Coaching Communication Skills Study.

Anecdotally, one clinician stated, “I think about what you taught me every time I enter a room. You said that I had an opportunity in recognizing ‘the elephant in the room’ in terms of addressing the anxiety that patients and their loved ones have about their illness and hospitalization. You recommended that I transparently and clearly identify that by using the phrase ‘you seem anxious.’” Others commented that it was helpful to have “rapid, positive feedback and good to know what ‘I did right.’ Also good to get gentle instruction on how to improve, where opportunities arise in the interaction with patients. Ability to recognize emotional questions and anxiety.” Several commented about how hard it was to remember to audio-record their encounters in their busy clinics.

Discussion and Conclusion

We found that the coaching improved patient satisfaction ratings and clinician communication skills. Coaching represents a method of teaching that requires little clinician time and seems to have a positive impact. This is consistent with a recently published study that showed that four coaching sessions had a greater impact on oncologist communication than just one.19

Although the way patient satisfaction is scored did not allow statistical tests, we found as much as an 18% increase in one of the ratings, which represents a substantial shift in these hard-to-move measures.13 More improvements occurred among hospitalist patients than among oncology patients because the baseline scores for oncologists were already very high. Others have reported ceiling effects in patient ratings of oncologists, as people often do not feel they can criticize their oncologists given the gravity of their disease and the important role the oncologist plays.20 Even with ceiling effects, patients of oncologists who received the coaching maintained their high ratings, whereas patients of control oncologists reported less satisfaction. We had a low response rate for the Press Ganey survey, which precluded more extensive analyses. Given the cumbersome nature of the Press Ganey survey21 (59 questions), few patients complete it (25–28%), a rate similar to that at other institutions. Further, others have noted problems with ceiling effects for this measure. However, the Press Ganey is the assessment most health systems use to measure patient satisfaction in inpatient and outpatient settings. Finding improvements in this hard-to-move measure represents a strong signal of the positive impact of coaching.

We objectively assessed communication skills by measuring an important component of those skills: response to negative emotion.14 Clinicians who received the coaching recognized and responded to patient negative emotion more frequently after the coaching than before. This study may have underestimated this intervention effect because of the nature of the intervention delivery: an initial face-to-face meeting before baseline assessment of communication skills. In that first meeting, the coach talked about the importance of addressing emotion and taught skills, thus introducing intervention elements before the baseline assessment. In addition, “pre-intervention encounters” with hospitalists included the coach providing input after each patient. Hospitalists likely were already improving during the pre-intervention encounters, thus potentially inflating the baseline scores. Evaluating one or two encounters before an initial face-to-face meeting, and without any feedback (that is, a genuine pre-intervention evaluation), would likely result in a more dramatic contrast between pre- and post-intervention, but even in the context of a research study, we felt we needed to prioritize building rapport with clinicians and not make them uncomfortable with the intervention by “ambushing” them with an evaluation for which they had not been prepared.

Clinicians who received the coaching gave favorable ratings to this intervention. Clinicians are busy, and most have not received feedback on their communication since residency or fellowship. Understandably, some were somewhat anxious about being observed and coached. To address this concern, the coach reassured the clinicians that she would let them know what they were doing well, in addition to the things they could “tweak.” Anecdotally, clinicians responded exceedingly well when they were praised. This approach might represent somewhat of a culture shift because clinicians often expect to be told they are not meeting expectations and need to do better. Overwhelmingly, clinicians self-reported that the coaching was helpful in making their communication with patients more effective.

We did not find differences in clinician satisfaction over the course of this trial, in contrast to the differences we have noticed in our other work with clinicians, and in the outpatient clinic study we cited previously. One possible reason is that the coaching intervention delivered during the trial was considerably less intensive than in the previous study and in our usual work. We intentionally made it less intensive to fit into the busy clinicians’ work schedules. In the previous study, coaches shadowed clinicians multiple times for the whole afternoon and facilitated monthly all-staff meetings. The clinics in our previous study also were primary care clinics, which differ greatly from oncology clinics and hospital inpatient medicine. System-level factors might have more of an influence on clinician satisfaction than the coaching could address. For example, the coaching does not address, and therefore would not change, dissatisfaction related to patient load, challenges of the electronic health record, providing clinical care while monitoring and instructing trainees, and other demands made of these clinicians. Further, for oncologists, the intervention was more work than for the hospitalists because they had to remember to audio-record their encounters.

Strengths of this study include the randomized, controlled design; a large number of participating clinicians; inclusion of both inpatient and outpatient settings; an objective measure of communication skills via direct observation or audio-recording of encounters; use of a widely used, health system–wide tool for assessing patient satisfaction; and use of an experienced communication coach. This study also has limitations that should be considered. First, only intervention clinicians were observed or audio-recorded during their encounters. Thus, we cannot make comparisons with changes in objective communication skills among control clinicians. Past studies have shown that communication rarely improves without intervention, but we cannot make that claim from our own data. Second, we relied on Press Ganey surveys for patient satisfaction. These surveys notoriously have a low response rate, which limited our ability to conduct inferential statistics despite the large number of clinicians in the trial. Surveying patients directly might have captured a more representative sample. Third, there is currently no standard training known to replicate the effects of our communication coach; such training would be essential for dissemination and implementation of these findings.

In conclusion, this relatively low-intensity coaching intervention improved patient satisfaction and clinician communication. Clinicians found the coaching to be acceptable and helpful. Moving this work toward implementation requires a fully-powered trial that directly assesses patient satisfaction and other patient-centered outcomes and objectively assesses communication skills in a control group.


Acknowledgements: We also would like to thank these clinicians for their participation (listed in alphabetical order): James Abbruzzese, MD, Andrew Armstrong, MD, Joseph Brogan, MD, George Cheely Jr., MD, Saumil Chudgar, MD, Dana Clifton, MD, Margaret Deutsch, MD, Colby Feeney, MD, Stéphanie Gaillard, MD, David Gallagher, MD, Daniel George, MD, Aubrey Jolly Graham, MD, Brian Griffith, MD, Elizabeth Hankollari, MD, Michael Harrison, MD, Thomas Holland, MD, Aparna Kamath, MD, Gretchen Kimmick, MD, Joanna Kipnes, MD, Margot O’Neill, NP, David Mack, MD, David Ming, MD, Michael Morse, MD, Katherine Neal, MD, Cara O’Brien, MD, Christina Page, MSN, RN, Snehal Patel, MD, Rebecca Phillips, MSN, Richard Riedel, MD, Adia Ross, MD, April Salama, MD, Noppon Setji, MD, Suchita Shah, MD, Stephen Telloni, MD, Kristina Tourville, RN, Lisa Vann, MD, John Yeats, MD, Kelly Young, DNP, and Yousuf Zafar, MD.


List of Supplemental Digital Content

Supplemental Digital Content 1. REDCap survey provided to physicians. PDF
Supplemental Digital Content 2. Press Ganey survey provided to patients. PDF


1. Reeve BB, Thissen DM, Bann CM, et al. Psychometric evaluation and design of patient-centered communication measures for cancer care settings. Patient Educ Couns 2017;100:1322-8.

2. Epstein RM, Street RL Jr. Patient-Centered Communication in Cancer Care: Promoting Healing and Reducing Suffering. Bethesda, MD: NIH; 2007.

3. Bertakis KD, Roter D, Putnam SM. The relationship of physician medical interview style to patient satisfaction. J Fam Pract 1991;32:175-81.

4. Levinson W, Roter DL, Mullooly JP, Dull VT, Frankel RM. Physician-patient communication. The relationship with malpractice claims among primary care physicians and surgeons. JAMA 1997;277:553-9.

5. West CP, Huschka MM, Novotny PJ, et al. Association of perceived medical errors with resident distress and empathy: a prospective longitudinal study. JAMA 2006;296:1071-8.

6. Ha JF, Longnecker N. Doctor-patient communication: a review. Ochsner J 2010;10:38-43.

7. Back to Bedside Projects Energize Residents, Improve Patient Care. Association of American Medical Colleges; 2018. Available at: https://news.aamc.org/patient-care/article/back-to-bedside-improves-patient-care/.

8. Back AL, Arnold RM, Baile WF, et al. Efficacy of communication skills training for giving bad news and discussing transitions to palliative care. Arch Intern Med 2007;167:453-60.

9. Tulsky JA, Arnold RM, Alexander SC, et al. Enhancing communication between oncologists and patients with a computer-based training program: a randomized trial. Ann Intern Med 2011;155:593-601.

10. Davis DA, Mazmanian PE, Fordis M, Van Harrison R, Thorpe KE, Perrier L. Accuracy of physician self-assessment compared with observed measures of competence: a systematic review. JAMA 2006;296:1094-102.

11. Pollak KI, Nagy P, Bigger J, et al. Effect of teaching motivational interviewing via communication coaching on clinician and patient satisfaction in primary care and pediatric obesity-focused offices. Patient Educ Couns 2016;99:300-3.

12. Kennifer SL, Alexander SC, Pollak KI, et al. Negative emotions in cancer care: do oncologists’ responses depend on severity and type of emotion? Patient Educ Couns 2009;76:51-6.

13. Presson AP, Zhang C, Abtahi AM, Kean J, Hung M, Tyser AR. Psychometric properties of the Press Ganey® Outpatient Medical Practice Survey. Health Qual Life Outcomes 2017;15:32.

14. Suchman AL, Markakis K, Beckman HB, Frankel R. A model of empathic communication in the medical interview. JAMA 1997;277:678-82.

15. Fischer G, Tulsky J, Arnold R. Communicating a poor prognosis. In: Portenoy R, Bruera E, eds. Topics in Palliative Care. New York: Oxford University Press; 2000.

16. Smith R, Hoppe R. The patient’s story: integrating the patient- and physician-centered approaches to interviewing. Ann Intern Med 1991;115:470-7.

17. Tulsky J. Doctor-patient communication issues. In: Cassel C, Leipzig R, Cohen H, et al., eds. Geriatric Medicine. 4th ed. New York, NY: Springer; 2004:287-97.

18. Maslach C, Jackson SE, Leiter MP. Maslach Burnout Inventory. 3rd ed. Consulting Psychologists Press; 1997:191-218.

19. Niglio de Figueiredo M, Krippeit L, Ihorst G, et al. ComOn-Coaching: the effect of a varied number of coaching sessions on transfer into clinical practice following communication skills training in oncology: results of a randomized controlled trial. PLoS One 2018;13:e0205315.

20. Fallowfield L. The ideal consultation. Br J Hosp Med 1992;47:364-7.

21. Tyser AR, Abtahi AM, McFadden M, Presson AP. Evidence of non-response bias in the Press Ganey patient satisfaction survey. BMC Health Serv Res 2016;16:350.


