In these times of heated rhetoric about what various health care reforms can and cannot accomplish, both hopeful and doomsday stories abound. Proponents and opponents of reforms often claim that their views are grounded in evidence, but it’s not always clear what they mean by that — particularly given the wide range of often incompatible views. Voters, physicians, and policymakers are left to wade through a jumble of anecdotes, aspirations, associations, and well-designed studies as they try to evaluate policy alternatives. Having a clear framework for characterizing what is, and isn’t, evidence-based health policy (EBHP) is a prerequisite for a rational approach to making policy choices, and it may even help focus the debate on the most promising approaches.
EBHP, we believe, has three essential characteristics (see table). First, policies need to be well-specified; a slogan is not sufficient. For example, “expand Medicaid” isn’t a policy. “Expand existing Medicaid benefits to cover all adults below the poverty line” is closer — but, of course, moving to a specific, implementable program requires vastly more detail. “Target population health” doesn’t qualify as a policy, let alone EBHP, because myriad policies fall under the population health banner, including influenza vaccination, smoking cessation, medication adherence, improving diets, increasing diabetes screening, addressing transportation barriers, and coordinating care. Slogans like “population health,” “single payer,” or “malpractice reform” may be an effective way to signify a political position or rally support (after all, who’s against population health?), but in avoiding specificity, they sidestep the hard work of assessing the relative effectiveness and implementation details of the policies included under their umbrella.
Second, implementing EBHP requires us to distinguish between policies and goals. This distinction is important in part because different people may have different goals for a particular policy. Consider the policy of implementing financial incentives for physicians to coordinate care. The evidence that such incentives would reduce health care spending (one potential goal) is quite weak, whereas the evidence that it might improve health outcomes (a different goal) is stronger.1 Claims that care coordination “doesn’t work” because it doesn’t save money miss the point that it may achieve other goals. Conversely, different policies may vary in their effectiveness at achieving a particular goal. If the goal is to reduce spending, then promoting competition or rate regulation may be more effective than care coordination.

Similarly, consider the policy of raising income limits for Medicaid eligibility. The evidence suggests that this policy is likely to achieve the goal of expanding access to care. On the other hand, evidence from a randomized trial indicates that it’s not likely to achieve the goal of reducing emergency department (ED) use (and even the broader evidence is mixed).2,3 If one favors expanding Medicaid to achieve the normative goal of redistribution from rich to poor and healthy to sick, it is tempting to suggest that expansion would also save money by reducing the use of expensive ED visits. But such claims are at best disingenuous and at worst counterproductive: if the evidence shows that Medicaid doesn’t achieve the stated objective of reducing ED use, that undermines the case for expansion even if the policy might achieve the unstated goal of redistribution. Being clear about goals is the only way to evaluate a policy’s effectiveness and the implied trade-offs between competing goals. These stylized examples are meant to illustrate the key components of the EBHP approach; evidence on each of these policies (and their many variants) is clearly much more nuanced than we can outline here.
Third, EBHP requires evidence of the magnitude of the effects of the policy, and obtaining such evidence is an inherently empirical endeavor. Introspection and theory are terrible ways to evaluate policy. In some instances, we have clear conceptual models that suggest the direction of the effect a policy is likely to have, but these models never tell us how big the effect is likely to be. For example, economic theory says that, all else being equal, when copayments or deductibles are higher, patients use less care (we’re pretty sure that demand slopes down), but this theory doesn’t tell us by how much. Often, even the direction of an effect is unclear without empirical research, because a single policy can set off several mechanisms that push in opposite directions.
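To make the role of magnitude concrete, here is a minimal numerical sketch of the cost-sharing example; the elasticity values are invented for illustration and are drawn neither from the article nor from any particular study:

```python
# Hypothetical sketch: theory predicts that higher cost sharing reduces
# use of care (the sign), but only an empirical estimate of the price
# elasticity of demand tells us by how much (the magnitude).
# Both elasticity values below are invented for illustration.

def utilization_change(price_change_pct: float, elasticity: float) -> float:
    """Approximate percent change in care used for a given percent price change."""
    return elasticity * price_change_pct

copay_increase_pct = 10.0  # a hypothetical 10% increase in patient cost sharing

for elasticity in (-0.1, -0.5):
    change = utilization_change(copay_increase_pct, elasticity)
    print(f"elasticity {elasticity:+.1f}: utilization changes by {change:+.1f}%")

# Theory fixes the sign (negative in both cases); the policy stakes depend
# entirely on which magnitude the evidence supports.
```

The same predicted sign with a fivefold difference in magnitude would imply very different budgetary and health consequences, which is why the empirical estimate, not the theory, does the real work.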
What makes for “rigorous enough evidence”? Professional medical societies have developed gauges of the strength of evidence to support clinical guidelines, and we should demand nothing less for health policy. No study is perfect, and important policy questions are rarely answered definitively by any one study. Nor does pointing to a large literature with similar results prove a point if those studies share a common weakness such as an inability to control for confounders. There is a crucial distinction between finding an association between a policy and an outcome (Do people who receive more preventive care spend less on health care? Often yes) and a causal connection (Does delivering more preventive care reduce health care spending? Overall, we think probably not).
There is also a key difference between “no evidence of effect” and “evidence of no effect.” The former is consistent with wide confidence intervals that include both zero and effects of meaningful size, whereas the latter refers to a precisely estimated zero that rules out effects of meaningful magnitude. These nuances are often lost when “evidence” is deployed in policy debates.
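To illustrate the distinction numerically, consider this minimal sketch of two hypothetical studies; all estimates and standard errors below are invented for illustration:

```python
# Hypothetical sketch: two studies can both find an effect "near zero"
# yet support very different conclusions. All numbers are invented.

def confidence_interval(estimate: float, standard_error: float, z: float = 1.96):
    """Return an approximate 95% confidence interval for a point estimate."""
    return (estimate - z * standard_error, estimate + z * standard_error)

# Study A: noisy estimate -- "no evidence of effect." Its interval includes
# zero but also effects large enough to matter for policy.
lo_a, hi_a = confidence_interval(estimate=0.05, standard_error=0.20)

# Study B: precise estimate -- "evidence of no effect." Its interval rules
# out effects of meaningful magnitude.
lo_b, hi_b = confidence_interval(estimate=0.01, standard_error=0.02)

print(f"Study A 95% CI: ({lo_a:+.2f}, {hi_a:+.2f})")  # (-0.34, +0.44)
print(f"Study B 95% CI: ({lo_b:+.2f}, {hi_b:+.2f})")  # (-0.03, +0.05)
```

Only Study B can honestly be described as showing that the policy has no meaningful effect; Study A simply cannot distinguish a meaningful effect from none at all.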
The effect of a policy, of course, also depends on the design and implementation details and the program particulars (Medicaid varies from state to state, for example, and the effect of expansions to different populations may vary) — and evidence needs to speak to those particulars. It is also important to consider the full range of a policy’s effects — its costs and benefits, and how each of these evolves over time.4 An impartial assessment of budgetary costs, like those provided by the Congressional Budget Office (CBO), is a crucial but incomplete part of the picture, because the CBO’s statutory mandate emphasizes the federal budget rather than lives or well-being.
Making health policy on the basis of evidence will always be a fraught and uncertain endeavor, and each component we outline here comes with challenges. For starters, we acknowledge that fully specifying a policy requires the kind of legislative and regulatory detail that is impractical in a high-level policy debate, but often the “policies” being discussed are so ill-specified that it’s impossible to bring any evidence to bear.
In addition, just as the distinction between policies and goals is often muddied, interpretations of the evidence are often colored by the implicit goals of the analyst.5 A given body of evidence can be used to support very different policy positions (depending on what one’s goals are — for example, how one weighs costs to taxpayers against redistribution of health care resources), but different goals shouldn’t drive different interpretations of the evidence base.
Finally, even a rich body of evidence cannot guarantee that a policy will achieve its goals, and waiting for that level of certainty would paralyze the policy process. In health policy — as in any other realm — it is often necessary to act on the basis of the best evidence on hand, even when that evidence is not strong. Doing so requires weighing the costs of acting when you shouldn’t against those of not acting when you should — again, a matter of policy priorities.
Just because something sounds true doesn’t mean that it is, and magical thinking won’t improve our health care system. EBHP helps separate facts from aspiration. But as important as evidence is to good policy choices, it can’t tell us what our goals should be — that’s a normative question of values and priorities. Better policy requires being both honest about our goals and clear-eyed about the evidence.
SOURCE INFORMATION
From the University of Chicago, Chicago (K.B.); and the National Bureau of Economic Research (K.B., A.C.) and Harvard University (A.C.) — both in Cambridge, MA.
1. McWilliams JM. Cost containment and the tale of care coordination. N Engl J Med 2016;375:2218-2220.
2. Taubman SL, Allen HL, Wright BJ, Baicker K, Finkelstein AN. Medicaid increases emergency-department use: evidence from Oregon’s Health Insurance Experiment. Science 2014;343:263-268.
3. Sommers BD, Simon K. Health insurance and emergency department use — a complex relationship. N Engl J Med 2017;376:1708-1711.
4. Asch DA, Pauly MV, Muller RW. Asymmetric thinking about return on investment. N Engl J Med 2016;374:606-608.
5. Ioannidis JP. Evidence-based medicine has been hijacked: a report to David Sackett. J Clin Epidemiol 2016;73:82-86.
This Perspective article originally appeared in The New England Journal of Medicine.