Michelle Jester and Erin Benson both want to help identify patients whose health could be at risk due to various social, economic, and environmental factors. Their approaches, however, diverge sharply.
Jester is Research Manager at the National Association of Community Health Centers, where she oversees the rollout of a risk tool that asks patients 16 core questions and 4 optional ones. These cover areas such as income, social support, and incarceration history. In contrast, Benson, Director of Market Planning at LexisNexis Health Care, employs a tool that gathers information about these same areas and more without asking any potentially awkward questions at all.
Instead, LexisNexis analyzes credit reports, marriage and divorce records, criminal history, voting frequency, shopping patterns, and other lifestyle information that it has available on 280 million Americans. Based on the relationships among 442 separate attributes, the company then calculates an individual’s health risk score.
That information is procured from sources that require no individual consent.
“Everyone wins,” says Benson in a webinar for health plans, medical groups, and other potential clients, “because . . . health improves.”
Predicting who is about to suffer serious health issues and helping doctors, medical groups, and health plans intervene to prevent them is the promise of a rapidly emerging field of big data analytics focused on social determinants of health (SDOH). Attention to SDOH, defined by the World Health Organization as “the conditions in which people are born, grow, live, work, and age,” has intensified as its importance has been starkly quantified: medical care accounts for just 10% of the factors that help avert premature death, while social and environmental factors account for 20%, genetics for 30%, and individual behavior for 40%.
The Institute of Medicine recommended in 2014 that doctors survey patients about a defined set of SDOH indicators and document the results. That advice came at a time when large payers such as Medicare had begun switching from fee-for-service to value-based payment, which rewards keeping a defined population healthy.
Nonetheless, despite some bright spots, widespread adoption has been slow. Physician efforts to address SDOH remain “limited and, for the most part, ineffective,” concludes a 2016 study in the Annals of Family Medicine. Similarly, a recent review in Health Affairs reports “little consensus” even on which measures can or should be captured.
Enter the number crunchers, ready to apply the same kind of “psychographic” analysis long used in other fields to identify and influence potential customers. Among their promises are to “enhance value-based care within the patient-driven experience” (database marketing company Acxiom Corp.), improve “specific health outcomes” (LexisNexis), and circumvent “the entire timeline of a disease.”
That last promise comes from Cambridge Analytica, which recently declared bankruptcy amid the controversy over how it applied big data analytics to change voter behavior during the 2016 presidential election.
Health technology start-ups have also seen the opportunity. Lumeris, partly funded by a venture capitalist famous for his early backing of Google and Amazon, boasts Denis Cortese, MD, former President and CEO of the Mayo Clinic, on its board. Clarify Health Solutions, founded by a physician, includes on its high-profile board Jack Cochran, MD, former Executive Director of The Permanente Federation.
Firms vary in the raw data they use, in how far they integrate SDOH information with electronic clinical or claims data, and in the limitations they place on how clients may use the information. All stress that they want to use machine learning techniques to make otherwise-hidden ills visible, head off hospitalizations, and provide holistic insights that will suggest effective ways to boost wellness behaviors. Nonetheless, important questions remain.
Who Decides How to Use Big Data?
At the 2018 Health Datapalooza conference, Deven McGraw, JD, formerly a top privacy official at the U.S. Department of Health and Human Services, raised a crucial question about the privacy of health data. “How do we make sure the uses of this are for the good, and who gets to decide that?” she asked. “Is it enough to say that companies should do this internally?”
Right now, that seems to be the case. Those buying big data services prefer vaguely high-minded references to “advanced analytics solutions” or “socio-demographic data.” A medical group, health system, or health plan won’t typically tell patients what information produced the risk score that triggered outreach. As a physician leader noted in a recent interview, “to say we have all this information about you, about your housing or transportation . . . that’s not likely to go over well.”
Jester, who promotes adoption of her group’s Protocol for Responding to and Assessing Patients’ Assets, Risks and Experiences (PRAPARE) questionnaire, acknowledges she was unaware of the data analytics companies’ efforts. Several researchers prominent in the field who were contacted for this article profess a similar lack of knowledge. Indeed, even with the intense attention given to the political work of Cambridge Analytica, its use of similar psychographic techniques in health care appears to have gone completely unnoticed.
Nancy Adler, PhD, Professor of Medical Psychology at the University of California at San Francisco, and Co-Chair of the committee that produced the 2014 IOM report, says in an interview that she, too, was unaware of the SDOH information being collected and analyzed by the data merchants. “There are clearly ethical concerns about privacy and how the information will be used,” she notes, adding, “It could create a backlash if not done with patient consent.”
The more prominent data merchant companies say they’re acutely aware of the need for privacy and security safeguards, and their information, when put to use by clients in a clinical context, may be subject to HIPAA safeguards. They point to satisfied customers who’ve improved outcomes and income.
What is glaringly absent, however, is any examination in the peer-reviewed literature of the validity of this kind of approach, whether used by itself, in combination with clinical and claims data, or in comparison with surveying patients directly with a tool such as PRAPARE. Algorithms, like people, have biases and limitations.
In addition, the data itself may have limits that some companies fail to adjust for, or the data may be used by some firms in ways the market leaders eschew. For example, the founder of one small company that surveys patients as the basis for its predictive analytics also obtains their credit reports as back-up information. “It can tell the doctor if the patient is a liar,” he explains.
“We need some safeguards,” says Arvin Garg, MD, Associate Professor of Pediatrics at Boston University School of Medicine, who’s published widely on SDOH screening. “Who can provide this and what are the needs? That’s probably the first thing.”
Next comes transparency. Even if the information is useful, says Garg, “it would be great to acknowledge to patients the method by which we collect this data.”