Effective clinician-patient communication is essential for high-quality care and is linked to better patient adherence and greater satisfaction for both patients and clinicians. Direct one-on-one coaching has the potential to improve clinician-patient communication as well as clinician and patient satisfaction compared with other techniques commonly used. We tested its effectiveness in a randomized controlled trial of 62 clinicians at Duke University School of Medicine.
High-quality care depends on effective communication between clinicians and patients. Effective communication comprises several components of patient-centered communication,1 including exchanging information, enabling patient self-management, and managing emotions.2
Further, evidence links communication with clinician satisfaction. Satisfied clinicians are less likely to leave an already understaffed workforce, and they make fewer medical errors.5
Despite its importance, clinician communication until recently was not formally taught6 in medical school or residency, though such training is becoming more widespread.7 Most attending physicians and advanced practice providers rely on their natural aptitude for communication, which varies widely, along with their experience and perhaps some continuing education.
But there is room to grow. Clinicians can try to improve their communication skills via interventions like continuing medical education lectures and online materials, but these activities tend to focus on delivering information rather than on developing skills and are usually not potent enough to change behaviors.
Strategies with some impact on communication include face-to-face courses8 and interactive computer programs.9 Those with the greatest impact share two critical components of teaching communication: allowing clinicians to practice effective communication techniques and providing them with tailored feedback. Only with observed performance and feedback do clinicians get an accurate sense of their own behavior,10 what they already do well, and what they need to improve. Thus far, however, many programs that have improved clinician communication have not translated those gains into significant improvements in patient outcomes.
A promising way to provide feedback that might also improve patient and clinician satisfaction is via communication coaching: shadowing clinicians and giving feedback. We conducted one study11 among 29 clinicians in two outpatient clinics showing that communication coaching improves skills, as well as patient and clinician satisfaction. The study we describe in the current article extends these findings to determine the impact of communication coaching on patient satisfaction, communication skills among inpatient (hospitalist) and outpatient (oncology) clinicians, and clinician satisfaction. We hypothesized that the coaching intervention would lead to improvement in each of these outcomes, compared with a control arm that received no instruction on improving communication.
For the past 5 years at Duke University School of Medicine, the coach for this study, who is one of its authors (Kathryn Pollak), has worked with nearly 100 clinicians in oncology, internal medicine, family medicine, pediatrics, hospital medicine, endocrinology, palliative care, and surgery. The objective is to provide a more personal method of teaching communication, and one that does not consume much of clinicians’ valuable time.
Using a protocol approved by the Duke University School of Medicine Institutional Review Board, the study team asked two divisions within the Department of Medicine to participate: Medical Oncology and Hospital Medicine. Both divisions are large (51 oncologists and 54 hospitalists) and treat patients with complex medical and psychosocial problems who have significant communication needs, and they allowed us to conduct the study in both an inpatient setting (hospitalists) and an outpatient setting (oncologists).
The Division Chiefs approved the study and, crucially, agreed to participate themselves. They emailed clinicians in their division to inform them of the study details and of their own intent to enroll as participants. The email instructed those who did not want to participate to opt out by emailing their refusal. The coach, Dr. Pollak, then contacted clinicians who did not opt out to obtain written consent. She emphasized that participating in the study would help them improve their skills without taking a lot of time: in total, oncologists would spend about 2 hours each with the coach, and hospitalists about 4 hours each. Dr. Pollak found that some were apprehensive about having someone watch and critique their communication, as they had not received feedback on their communication since completing their training. She assured them that the critique was intended to help rather than to evaluate and that she would include feedback on what they were doing right as well as areas for improvement. Of the clinicians approached, about 80% in each division agreed to participate. Those who declined said they did not see enough patients, were leaving the practice, did not have enough time, or were too new to Duke.
Study staff then emailed consenting participants a link to a survey that assessed clinician characteristics. Once clinicians completed the survey, study staff used a computer-generated randomization program to create intervention and control arms (Table 1), and emailed clinicians to inform them of their study arm assignment. For each division, study staff randomized half the clinicians to the intervention arm and the other half to a wait-list control arm, whose members served as controls during the study and could choose to receive the intervention at its end. (Six of the wait-list clinicians chose to receive the later coaching.) Each intervention-arm clinician was “paired” with a control-arm clinician to ensure the same amount of time between baseline and follow-up surveys.
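The assignment and pairing described above can be sketched in a few lines. This is an illustrative sketch only; the function name, clinician labels, and seed are hypothetical, not the study’s actual randomization program:

```python
import random

def randomize_waitlist(clinicians, seed=0):
    """Illustrative sketch: split one division's clinicians evenly between
    an intervention arm and a wait-list control arm, then pair each
    intervention clinician with a control clinician so both complete
    baseline and follow-up surveys over the same interval."""
    rng = random.Random(seed)      # fixed seed makes the assignment reproducible
    shuffled = list(clinicians)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    intervention, control = shuffled[:half], shuffled[half:]
    # pair intervention and control clinicians to align survey timing
    pairs = list(zip(intervention, control))
    return intervention, control, pairs
```

Pairing by position after a shuffle keeps the two arms the same size within each division and gives every intervention clinician a timing counterpart in the control arm.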
A central tenet of adult learning pedagogy is that learners acquire skills most effectively when they focus on a few discrete skills at a time, while also receiving positive feedback for the skills they have mastered. Therefore, the coaching provided both skill improvement advice and positive feedback on the skills clinicians did well. Dr. Pollak delivered the coaching intervention in three steps:
Step 1
A 1-hour one-on-one meeting with each intervention-arm clinician to discuss challenging patient encounters and effective communication techniques. She did not enter the coaching trial with a specific skill she wanted to teach, but had many skills in her “toolkit” to address each clinician’s desired area of improvement. In Step 1, most clinicians reported that their most challenging conversations involved patient emotion. For instance, they reported struggling when patients seemed to have unrealistic expectations or claimed that they had not been fully informed of the course of treatment or side effects. The clinicians also discussed challenges with delivering serious news (e.g., test results or scans indicating progression of disease), which is an inherently emotion-laden conversation.
Step 2
The second step varied depending on whether the participant was an oncologist or a hospitalist.
Hospitalists. After Step 1, Dr. Pollak set up a time to shadow the hospitalist for 2 hours, observing two to three encounters. For each encounter, the clinician asked the patient and any caregivers or family members present to provide oral consent for the coach to be present. The clinician told patients the focus was on the clinician’s behavior, not the patient’s. The coach typed and coded communication behaviors as the hospitalist talked with the patient and caregivers (e.g., noting when the clinician reflected back what the patient said, demonstrating active listening skills). Between patients, she gave clinicians a minute or two of immediate feedback. After the session, the coach emailed clinicians their coded transcripts as well as a summary of the feedback describing what they did well and what could be “tweaked.”
Oncologists. After Step 1, the coach asked clinicians to audio-record two of their more difficult encounters, where coaching would potentially benefit them the most. The coach provided a recorder and showed them how to operate it. For each encounter, clinicians asked the patient for oral consent to record the encounter. Often, the coach needed to remind clinicians via email or text to record their encounters. The coach retrieved the recorder once the clinician had recorded two encounters, sent the files to be transcribed, and then reviewed the transcriptions while listening to the recordings. She then emailed clinicians their transcripts and scheduled a time to meet individually for 30 minutes to review the transcribed encounters. After the session, the coach emailed clinicians a summary of the things they handled well in the transcribed encounters and areas for improvement.
Step 3 (Both Specialties)
When the hospitalist was on service again (ranging from 2 weeks to 5 months), the coach shadowed a second time, reminding clinicians of the feedback given. When the oncology clinician audio-recorded two more encounters, the coach again listened to the audio recordings with the transcripts and provided feedback. The coach once more emailed transcripts and a summary of the feedback.
The emailed feedback for both specialties might look like this:
Super job, [name of clinician]! You are doing so many things well. Just a summary of our work together.
These are the fabulous things you do!
- Responding empathically when you see negative emotions.
- Making reflective statements to show you are listening.
- Asking open-ended questions.
- Praising patients and noting their strengths.
- Establishing rapport and meeting your patients where they are.
- Letting them talk without interrupting.
- Asking clarifying questions.
- Supporting their autonomy and their right to set their own goals.
- Asking permission before giving advice and information.
Things you can tweak:
- Name reluctance with an empathic statement followed by a clarifying question.
- Add words like “unfortunately” and “I wish things were different” when giving serious news. Also have a segue to serious topics with an empathic statement.
It’s been a joy working with you.
Recognizing and Responding to Emotion
In all steps of the coaching, Dr. Pollak taught two skills that are core to patient-centered communication and that clinicians identified as areas in which they needed help: recognizing patient emotion and responding to negative emotion. The coach instructed clinicians how to identify negative emotions even when patients were not expressing them directly. For example, anxiety often prompts patients or caregivers to rapidly “pepper” clinicians with questions,12 and many questions that initially appear medical actually represent negative emotions. “Are the tumors getting bigger?” expresses fear or anxiety, and needs a response that addresses that fear, rather than a factual answer about the tumors’ current size in millimeters.
Then, the coach gave clinicians suggested scripts for addressing patient emotion, with an emphasis on naming the emotion (e.g., “It might be scary to hear this news” or “I can see this news has made you sad”) and using wish statements (e.g., “I wish things were different” or “I wish I didn’t have to say this”). In the two examples of emotion noted above, the coach taught clinicians to refrain from simply answering the questions and instead to pause, name the emotion, and explore concerns (e.g., “You seem to be worried. What are your biggest concerns?”). Previous research showed that these ways of communicating improved patient trust in a randomized trial of oncologists conducted by the coach and her colleagues.9
The coach also taught clinicians to address all negative emotions immediately to help patients and caregivers feel heard right away. When clinicians wait too long, the patient or caregiver can feel the emotion is unresolved and will continue to express it, often indirectly, in hopes that the clinician will respond empathically.12 This repeated attempt to get a response can frustrate patients, caregivers, and clinicians alike. Moreover, patients or caregivers enveloped in negative emotion may not be able to fully comprehend the clinical information being discussed.
We assessed clinician self-reported age, gender, race, ethnicity, years since medical/physician assistant/nursing school, and prior communication training to describe the sample.
For clinicians in both arms, we assessed patient satisfaction using the Press Ganey questionnaire both before the study and after the study. The Press Ganey is a survey used routinely throughout Duke Health System that has been found to be reliable and valid.13 We considered attempting to survey the specific patients clinicians saw during the coaching; however, doing so would require obtaining written consent from patients, which we deemed logistically impractical. Instead, we used the standard patient satisfaction scale assessed among patients who had an encounter with a study clinician during the study time frame. For a sample of encounters, both inpatient and outpatient, Press Ganey sends patients a survey to rate their encounter. We abstracted Press Ganey ratings from patients cared for by all participating clinicians during the 3 months prior to the intervention and the 3 months following completion of the intervention. Not all clinicians were on service in those time frames or had data to abstract. Sample items include how well clinicians explained things, how well they listened, and how courteous they were.
Responses were categorized by the percentage of patients who gave their providers a “9” or “10” on a 10-point scale (where 10 is the best rating), or a rating of “Always” when the options were “Always,” “Usually,” “Sometimes,” and “Never.” Each clinician receives a summary score for all patients seen during the study time frame; thus, each clinician has only one score. Table 2 and Figure 1 show that Press Ganey scores improved consistently across most of the domains for patients seen by intervention clinicians and worsened for patients seen by control clinicians, leading to an average difference between intervention and control of 11% across all domains. Hospitalists and oncologists are listed separately because the survey questions differ slightly in the two settings and because ceiling effects among oncologists (discussed in more detail below) might mask differences among hospitalists.
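The “top-box” scoring rule just described reduces each set of patient responses to a single percentage. A minimal sketch of that calculation, with a hypothetical function name and sample ratings:

```python
def top_box_percent(ratings):
    """Percent of responses that are 'top box': a 9 or 10 on the 10-point
    scale, or 'Always' on the Always/Usually/Sometimes/Never scale.
    Any other response does not count toward the score."""
    top = sum(1 for r in ratings if r in (9, 10, "Always"))
    return 100 * top / len(ratings)
```

For example, `top_box_percent([10, 9, 7, "Always", "Usually"])` yields 60.0, since three of the five responses are top box. A “Usually” counts the same as a “Never” under this rule, which is part of what makes these scores hard to move.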
Among hospitalists, baseline favorable Press Ganey scores for clinicians ranged from 65% to 89%.
Among oncologists, baseline favorable Press Ganey scores ranged from 89% to 98% (reflecting the ceiling effect mentioned above). We did not see changes among patients seen by intervention and control oncology clinicians.
We saw the biggest differences among hospitalists’ patients with regard to “treating with courtesy and respect” (16% difference between arms) and overall ratings of the hospital (18%) and communication with the clinician (10%). The only difference found between patients seen by oncologists in the intervention versus control arms was the overall rating of the oncologist (6% difference).
Clinician Communication Skills
Because the coach shadowed or audio-recorded only intervention clinicians, we have objective data on communication skills only for the intervention arm; to avoid contamination, we did not audio-record or shadow encounters for control clinicians. Among all intervention clinicians, we objectively assessed communication skills by coding encounters for responses to patient negative emotion, using Suchman’s definition of empathic opportunities and responses.14 We defined empathic opportunities as patients’ expressions of negative emotions (e.g., “Oh no. I was hoping you would not say that” or “I’m really worried the cancer has come back”). We coded clinician responses as empathic “continuers” or “terminators,” based on whether they encouraged further discussion or tended to close it off. Continuers included five specific behaviors organized under the mnemonic “NURSE”: Name, Understand, Respect, Support, and Explore.15–17 Terminators comprised all other responses, those containing none of the NURSE behaviors. We created an “empathy ratio” in which the denominator was all opportunities to respond empathically and the numerator was empathic responses.
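The empathy ratio defined above is simply empathic responses divided by opportunities. A minimal sketch, assuming each opportunity has already been coded as a “continuer” or “terminator” (the function name is hypothetical):

```python
def empathy_ratio(coded_responses):
    """coded_responses: one code per empathic opportunity (a patient
    expression of negative emotion) -- 'continuer' for an empathic,
    NURSE-style response, 'terminator' for anything else."""
    if not coded_responses:
        return None  # no opportunities observed; ratio undefined
    continuers = sum(1 for code in coded_responses if code == "continuer")
    return continuers / len(coded_responses)
```

For instance, a clinician who responded empathically to three of four expressed negative emotions would have `empathy_ratio(["continuer", "terminator", "continuer", "continuer"])`, or 0.75.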
Clinicians from both specialties assigned to the intervention arm showed objective improvement in their responses to negative emotions. They responded empathically to 49% of negative emotions in pre-intervention encounters and to 66% in post-intervention encounters, a 17-percentage-point increase. The improvement was slightly larger among oncologists (20 percentage points) than among hospitalists (14 percentage points; see Figure 2).
For clinicians in both arms, we assessed clinician satisfaction pre- and post-intervention using the Maslach Burnout Inventory,18 which has three subscales: emotional exhaustion, depersonalization, and personal accomplishment. Sample items read, “I feel fatigued when I get up in the morning,” “I’ve become more callous toward people since I took this job,” and “I feel I’m positively influencing other people’s lives through my work.” We did not find arm differences in clinician satisfaction scores (Table 3). There was little change in clinician satisfaction in either arm although all small differences favored the intervention arm.
Clinician Ratings of Intervention
Finally, we assessed intervention clinicians’ perceptions of how the intervention affected them: whether they had changed clinical practice as a result of coaching, whether coaching was worth their time, and whether they would recommend coaching to a colleague.
Clinicians in the intervention arm rated coaching very highly (Table 4). Most (90%) reported that they had made changes in their clinical practice as a result of coaching. They reported that the coaching made them more effective, that it would assist with challenging conversations, that the coaching represented effective communication, and that it was worth their time. Most would recommend it to a colleague.
Anecdotally, one clinician stated, “I think about what you taught me every time I enter a room. You said that I had an opportunity in recognizing ‘the elephant in the room’ in terms of addressing the anxiety that patients and their loved ones have about their illness and hospitalization. You recommended that I transparently and clearly identify that by using the phrase ‘you seem anxious.’” Others commented that it was helpful to have “rapid, positive feedback and good to know what ‘I did right.’ Also good to get gentle instruction on how to improve, where opportunities arise in the interaction with patients. Ability to recognize emotional questions and anxiety.” Several commented about how hard it was to remember to audio-record their encounters in their busy clinics.
Discussion and Conclusion
We found that the coaching improved patient satisfaction ratings and clinician communication skills. Coaching represents a method of teaching that requires little clinician time and seems to have a positive impact. This is consistent with a recently published study that showed that four coaching sessions had a greater impact on oncologist communication than just one.19
Although the way patient satisfaction is scored did not allow statistical tests, we found an increase of as much as 18% in one of the ratings, which represents a substantial shift in these hard-to-move measures.13 More improvements occurred among hospitalist patients than among oncology patients because the baseline scores for oncologists were already very high. Others have reported ceiling effects in patient ratings of oncologists, as people often do not feel they can criticize their oncologists given the gravity of their disease and the important role the oncologist plays.20 Even with ceiling effects, patients of oncologists who received the coaching maintained their high ratings, whereas patients of control oncologists reported less satisfaction. We had a low response rate for the Press Ganey survey, which precluded more extensive analyses. Given the cumbersome nature of the survey21 (59 questions), few patients complete it (25–28%), a rate similar to that at other institutions. Further, others have noted problems with ceiling effects for this measure. However, the Press Ganey is the assessment most health systems use for patient satisfaction in inpatient and outpatient settings. Finding improvements in this hard-to-move measure represents a strong signal of the positive impact of coaching.
We objectively assessed communication skills by measuring an important component of those skills: response to negative emotion.14 Clinicians who received the coaching recognized and responded to patient negative emotion more frequently after the coaching than before. This study may have underestimated the intervention effect because of the nature of the intervention delivery: an initial face-to-face meeting occurred before the baseline assessment of communication skills. In that first meeting, the coach discussed the importance of addressing emotion and taught skills, thus introducing intervention elements before the baseline assessment. In addition, “pre-intervention encounters” with hospitalists included the coach providing input after each patient; hospitalists likely were already improving during these encounters, potentially inflating the baseline scores. Evaluating one or two encounters before any face-to-face meeting, and without any feedback (that is, a genuine pre-intervention evaluation), would likely have shown a more dramatic pre–post contrast. Even in the context of a research study, however, we felt we needed to prioritize building rapport with clinicians and to avoid making them uncomfortable by “ambushing” them with an evaluation for which they had not been prepared.
Clinicians who received the coaching gave favorable ratings to this intervention. Clinicians are busy, and most have not received feedback on their communication since residency or fellowship. Understandably, some were somewhat anxious about being observed and coached. To address this concern, the coach reassured the clinicians that she would let them know what they were doing well, in addition to the things they could “tweak.” Anecdotally, clinicians responded exceedingly well when they were praised. This approach might represent somewhat of a culture shift because clinicians often expect to be told they are not meeting expectations and need to do better. Overwhelmingly, clinicians self-reported that the coaching was helpful in making their communication with patients more effective.
We did not find differences in clinician satisfaction over the course of this trial, in contrast to the differences we have noticed in our other work with clinicians, and in the outpatient clinic study we cited previously. One possible reason is that the coaching intervention delivered during the trial was considerably less intensive than in the previous study and in our usual work. We intentionally made it less intensive to fit into the busy clinicians’ work schedules. In the previous study, coaches shadowed clinicians multiple times for the whole afternoon and facilitated monthly all-staff meetings. The clinics in our previous study also were primary care clinics, which differ greatly from oncology clinics and hospital inpatient medicine. System-level factors might have more of an influence on clinician satisfaction than the coaching could address. For example, the coaching does not address, and therefore would not change, dissatisfaction related to patient load, challenges of the electronic health record, providing clinical care while monitoring and instructing trainees, and other demands made of these clinicians. Further, for oncologists, the intervention was more work than for the hospitalists because they had to remember to audio-record their encounters.
Strengths of this study include the randomized, controlled design; a large number of participating clinicians; inclusion of both inpatient and outpatient settings; inclusion of an objective measure of communication skills via direct observation or audio-recording of encounters; use of a widely used, health system–wide tool for assessing patient satisfaction; and use of an experienced communication coach. This study also has limitations that should be considered. First, only intervention clinicians audio-recorded or were observed during their encounters, so we cannot make comparisons with changes in objective communication skills among control clinicians. Past studies have shown that communication rarely improves without intervention; however, we cannot confirm that here. Second, we relied on Press Ganey surveys for patient satisfaction. These surveys notoriously have low response rates, which limited our ability to conduct inferential statistics despite the large number of clinicians in the trial. Surveying patients directly might have captured a more representative sample. Third, there is currently no standardized training known to replicate our coach’s approach; such training would be essential for disseminating and implementing these findings.
In conclusion, this relatively low-intensity coaching intervention improved patient satisfaction and clinician communication. Clinicians found the coaching to be acceptable and helpful. Moving this work toward implementation requires a fully-powered trial that directly assesses patient satisfaction and other patient-centered outcomes and objectively assesses communication skills in a control group.
Acknowledgements: We would like to thank these clinicians for their participation (listed in alphabetical order): James Abbruzzese, MD, Andrew Armstrong, MD, Joseph Brogan, MD, George Cheely Jr., MD, Saumil Chudgar, MD, Dana Clifton, MD, Margaret Deutsch, MD, Colby Feeney, MD, Stéphanie Gaillard, MD, David Gallagher, MD, Daniel George, MD, Aubrey Jolly Graham, MD, Brian Griffith, MD, Elizabeth Hankollari, MD, Michael Harrison, MD, Thomas Holland, MD, Aparna Kamath, MD, Gretchen Kimmick, MD, Joanna Kipnes, MD, Margot O’Neill, NP, David Mack, MD, David Ming, MD, Michael Morse, MD, Katherine Neal, MD, Cara O’Brien, MD, Christina Page, MSN, RN, Snehal Patel, MD, Rebecca Phillips, MSN, Richard Riedel, MD, Adia Ross, MD, April Salama, MD, Noppon Setji, MD, Suchita Shah, MD, Stephen Telloni, MD, Kristina Tourville, RN, Lisa Vann, MD, John Yeats, MD, Kelly Young, DNP, and Yousuf Zafar, MD.
List of Supplemental Digital Content