Perspective by Dr. Ziad Obermeyer and Dr. Thomas H. Lee on how collaboration between doctors and computers will help improve medical care.
In the good old days, clinicians thought in groups; “rounding,” whether on the wards or in the radiology reading room, was a chance for colleagues to work together on problems too difficult for any single mind to solve.
Today, thinking looks very different: we do it alone, bathed in the blue light of computer screens.
Our knee-jerk reaction is to blame the computer, but the roots of this shift run far deeper. Medical thinking has become vastly more complex, mirroring changes in our patients, our health care system, and medical science. The complexity of medicine now exceeds the capacity of the human mind.
Computers, far from being the problem, are the solution. But using them to manage the complexity of 21st-century medicine will require fundamental changes in the way we think about thinking and in the structure of medical education and research.
It’s ironic that just when clinicians feel that there’s no time in their daily routines for thinking, the need for deep thinking is more urgent than ever. Medical knowledge is expanding rapidly, with a widening array of therapies and diagnostics fueled by advances in immunology, genetics, and systems biology. Patients are older, with more coexisting illnesses and more medications. They see more specialists and undergo more diagnostic testing, which leads to exponential accumulation of electronic health record (EHR) data. Every patient is now a “big data” challenge, with vast amounts of information on past trajectories and current states.
All this information strains our collective ability to think. Medical decision making has become maddeningly complex. Patients and clinicians want simple answers, but we know little about whom to refer for BRCA testing or whom to treat with PCSK9 inhibitors. Common processes that were once straightforward — ruling out pulmonary embolism or managing new atrial fibrillation — now require numerous decisions.
So, it’s not surprising that we get many of these decisions wrong. Most tests come back negative, yet misdiagnosis remains common.1 Patients seeking emergency care are often admitted to the hospital unnecessarily, yet many also die suddenly soon after being sent home.2 Overall, we provide far less benefit to our patients than we hope. These failures contribute to deep dissatisfaction and burnout among doctors and threaten the health care system’s financial sustainability.
If a root cause of our challenges is complexity, the solutions are unlikely to be simple. Asking doctors to work harder or get smarter won’t help. Calls to reduce “unnecessary” care fall flat: we all know how difficult it’s become to identify what care is necessary. Changing incentives is an appealing lever for policymakers, but that alone will not make decisions any easier: we can reward physicians for delivering less care, but the end result may simply be less care, not better care.
The first step toward a solution is acknowledging the profound mismatch between the human mind’s abilities and medicine’s complexity. Long ago, we realized that our inborn sensorium was inadequate for scrutinizing the body’s inner workings — hence, we developed microscopes, stethoscopes, electrocardiograms, and radiographs. Will our inborn cognition alone solve the mysteries of health and disease in a new century? The state of our health care system offers little reason for optimism.
But there is hope. The same computers that today torment us with never-ending checkboxes and forms will tomorrow be able to process and synthesize medical data in ways we could never do ourselves. Already, there are indications that data science can help us with critical problems.
Consider the challenge of reading electrocardiograms. Doctors look for a handful of features to diagnose ischemia or rhythm disturbances — but can we ever truly “read” the waveforms in a 10-second tracing, let alone the multiple-day recording of a Holter monitor? Algorithms, by contrast, can systematically analyze every heartbeat. There are early signs that such analyses can identify subtle microscopic variations linked to sudden cardiac death.3 If validated, such algorithms could help us identify and treat the tens of thousands of Americans who might otherwise drop dead unexpectedly in any given year. And they could guide basic research on the mechanisms of newly discovered predictors.
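To make concrete what "systematically analyzing every heartbeat" can mean, here is a minimal sketch in Python. The interval series is simulated, and RMSSD (root mean square of successive differences, a standard heart-rate-variability metric) stands in for the far more sophisticated biomarkers described in reference 3; none of this is the cited method itself.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated RR intervals (seconds) for ~100,000 beats -- roughly a day
# of recording, far more than any human reader can inspect beat by beat.
# The data and the metric are illustrative assumptions only.
rr = 0.8 + 0.02 * rng.standard_normal(100_000)

def rmssd(rr):
    """Root mean square of successive RR-interval differences."""
    diffs = np.diff(rr)
    return float(np.sqrt(np.mean(diffs ** 2)))

print(f"RMSSD over {len(rr)} beats: {rmssd(rr) * 1000:.1f} ms")
```

The point is not the metric itself but the scale: an algorithm applies the same scrutiny to the hundred-thousandth beat as to the first.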
Algorithms have also been applied to massive amounts of EHR data, and the results suggest that type 2 diabetes has three subtypes, each with its own biologic signature and disease trajectory.4 Knowing which type of patient we’re dealing with can help us deliver treatments to those who benefit most and may help us understand why some patients have complications and others don’t.
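To illustrate the idea of subtype discovery, here is a minimal clustering sketch. The synthetic "EHR features" and the k-means procedure are illustrative assumptions; the study in reference 4 used a topological analysis of patient similarity, not this algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a few EHR-derived features per patient,
# drawn from three well-separated hypothetical "subtypes".
centers_true = np.array([[0.0, 0.0, 0.0],
                         [5.0, 5.0, 0.0],
                         [0.0, 5.0, 5.0]])
X = np.vstack([c + 0.3 * rng.standard_normal((50, 3)) for c in centers_true])

def kmeans(X, k, n_iter=50):
    # Initialize centers from evenly spaced rows of the data matrix.
    centers = X[:: len(X) // k][:k].copy()
    for _ in range(n_iter):
        # Assign each patient to the nearest center (Euclidean distance).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned patients.
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(X, k=3)
```

On data this clean the three groups separate trivially; real EHR data are noisier and higher-dimensional, which is precisely why clinical judgment is needed to decide whether discovered clusters are biologically meaningful.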
There is little doubt that algorithms will transform the thinking underlying medicine. The only question is whether this transformation will be driven by forces from within or outside the field. If medicine wishes to stay in control of its own future, physicians will not only have to embrace algorithms, they will also have to excel at developing and evaluating them, bringing machine-learning methods into the medical domain.
Machine learning has already spurred innovation in fields ranging from astrophysics to ecology. In these disciplines, the expert advice of computer scientists is sought when cutting-edge algorithms are needed for thorny problems, but experts in the field — astrophysicists or ecologists — set the research agenda and lead the day-to-day business of applying machine learning to relevant data.
In medicine, by contrast, clinical records are considered treasure troves of data for researchers from nonclinical disciplines. Physicians are not needed to enroll patients — so they’re consulted only occasionally, perhaps to suggest an interesting outcome to predict. They are far from the intellectual center of the work and rarely engage meaningfully in thinking about how algorithms are developed or what would happen if they were applied clinically.
But ignoring clinical thinking is dangerous. Imagine a highly accurate algorithm that uses EHR data to predict which emergency department patients are at high risk for stroke. It would learn to diagnose stroke by churning through large sets of routinely collected data. Critically, all these data are the product of human decisions: a patient’s decision to seek care, a doctor’s decision to order a test, a diagnostician’s decision to call the condition a stroke. Thus, rather than predicting the biologic phenomenon of cerebral ischemia, the algorithm would predict the chain of human decisions leading to the coding of stroke.
Algorithms that learn from human decisions will also learn human mistakes, such as overtesting and overdiagnosis, failing to notice people who lack access to care, undertesting those who cannot pay, and mirroring race or gender biases. Ignoring these facts will result in automating and even magnifying problems in our current health system.5 Noticing and undoing these problems requires a deep familiarity with clinical decisions and the data they produce — a reality that highlights the importance of viewing algorithms as thinking partners, rather than replacements, for doctors.
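The mechanism is simple enough to show in a toy simulation (all numbers are hypothetical, not estimates from any real data set): two groups with identical underlying risk end up with very different coded rates once testing rates differ, and any model trained on the coded label inherits that gap.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two patient groups with the SAME true stroke rate, but group B is
# tested half as often (e.g., poorer access to care). A diagnosis is
# recorded only if the patient is tested.
n = 100_000
group = rng.integers(0, 2, n)               # 0 = group A, 1 = group B
true_stroke = rng.random(n) < 0.10          # identical biology in both groups
tested = rng.random(n) < np.where(group == 0, 0.8, 0.4)
coded_stroke = true_stroke & tested         # what the EHR actually records

for g, name in [(0, "A"), (1, "B")]:
    mask = group == g
    print(f"group {name}: true rate {true_stroke[mask].mean():.3f}, "
          f"coded rate {coded_stroke[mask].mean():.3f}")
```

The coded rate for group B is roughly half that of group A despite identical true rates, so an algorithm optimizing on coded outcomes would score group B as lower risk, reproducing the access gap rather than the biology.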
Ultimately, machine learning in medicine will be a team sport, like medicine itself. But the team will need some new players: clinicians trained in statistics and computer science, who can contribute meaningfully to algorithm development and evaluation. Today’s medical education system is ill prepared to meet these needs. Undergraduate premedical requirements are absurdly outdated. Medical education does little to train doctors in the data science, statistics, or behavioral science required to develop, evaluate, and apply algorithms in clinical practice.
The integration of data science and medicine is not as far away as it may seem: cell biology and genetics, once also foreign to medicine, are now at the core of medical research, and medical education has made all doctors into informed consumers of these fields. Similar efforts in data science are urgently needed. If we lay the groundwork today, 21st-century clinicians can have the tools they need to process data, make decisions, and master the complexity of 21st-century patients.
SOURCE INFORMATION
From Brigham and Women’s Hospital and Harvard Medical School, Boston (Z.O., T.H.L.), and Press Ganey, Wakefield (T.H.L.) — both in Massachusetts.
1. Institute of Medicine. Improving diagnosis in health care. Washington, DC: National Academies Press, 2015.
2. Obermeyer Z, Cohn B, Wilson M, Jena AB, Cutler DM. Early death after discharge from emergency departments: analysis of national US insurance claims data. BMJ 2017;356:j239.
3. Syed Z, Stultz CM, Scirica BM, Guttag JV. Computationally generated cardiac biomarkers for risk stratification after acute coronary syndrome. Sci Transl Med 2011;3:102ra95.
4. Li L, Cheng W-Y, Glicksberg BS, et al. Identification of type 2 diabetes subgroups through topological analysis of patient similarity. Sci Transl Med 2015;7:311ra174.
5. Mullainathan S, Obermeyer Z. Does machine learning automate moral hazard and error? Am Econ Rev 2017;107:476-480.
This Perspective article originally appeared in The New England Journal of Medicine.
Federico Cabitza
The Limits of Mind: Extended by Computers, or Just Distanced from Sight?
In their Perspective [1], Obermeyer and Lee claim that computers, “far from being the problem [of the increasing complexity of contemporary medicine], are the solution” and suggest that, just as the inadequacy of “our inborn sensorium” spurred the development of “stethoscopes, electrocardiograms, and radiographs,” the inadequacy of our “inborn cognition” likewise motivates an analogous augmentation by computers.
However, the sensorial augmentation they mention amplifies subtle clinical signs and offers them up for the physician’s interpretation, whereas computers would augment cognition only in terms of textual categories and numerical data, thus often shortcutting intuition, dispelling uncertainty [2] from clinical reasoning, and, worse yet, potentially biasing interpretation [3].
Understating the irreducible gap between the discreteness of data and physicians’ continuous (and partly ineffable) experience of illness underlies the “demise of context” we have highlighted [4], which arises when physicians overrely on computer outputs.
While computers’ potential for pattern recognition in diagnostic imaging is indisputable, the complexity of broader clinical applications has so far proved difficult to master [5].
This suggests prudence before entrusting the “future of medicine” to wider digitization, which can entail unintended bottlenecks.
Federico Cabitza, PhD; Camilla Alderighi, MD; Raffaele Rasoini, MD
[1] Obermeyer Z, Lee TH. Lost in Thought — The Limits of the Human Mind and the Future of Medicine. NEJM 2017; 377:1209-1211
[2] Simpkin AL, Schwartzstein RM. Tolerating uncertainty—the next medical revolution? NEJM 2016; 375(18): 1713-1715.
[3] Goddard K, Roudsari A, Wyatt JC. Automation bias: a systematic review of frequency, effect mediators, and mitigators. JAMIA. 2011;19(1):121-7.
[4] Cabitza F, Rasoini R, Gensini GF. Unintended Consequences of Machine Learning in Medicine. JAMA 2017; 318(6): 517–518
[5] Ross C, Swetlitz I. IBM pitched its Watson supercomputer as a revolution in cancer care. It's nowhere close. Scientific American, September 6, 2017.
October 31, 2017 at 3:58 am