Sometimes a name does matter. For example, Chilean sea bass has been one of the most popular seafood items on menus since the 1990s, when it seemed to appear out of the blue. In fact, the species is millions of years old, but no one would touch it under its former name of Patagonian toothfish. Similarly, you’d be excused for thinking that no one considered value in health care until 2006, when Michael Porter and Elizabeth Teisberg first promoted the term “value” in that context in their book Redefining Health Care. (Indeed, the now-ubiquitous phrase “value proposition” was coined in the business world only in 1988.)
But the concept of value, defined as health outcomes relative to costs, has been around for decades, under a different name: “cost-effectiveness.” Traditional medical research compares two treatments in terms of comparative effectiveness – for example, whether the treatments differ in outcomes such as the number of deaths prevented or successful cures. A common outcome in such studies is the quality-adjusted life year (QALY). This accounts for the fact that success isn’t really a binary variable, alive or dead. Being alive and healthy is not the same as being alive and disabled. Using sophisticated methods, patients can rank the relative quality of, for example, a year of life with no limitations, a year of life with moderate pain, and a year of life with limited mobility. The efficacy of a treatment can then be expressed in terms of the number of QALYs. Treatments that result in more quality-adjusted life years – i.e., better outcomes – are more effective. But if a treatment produces a better outcome, yet at a higher cost, how do you judge whether it is worthwhile? That’s where cost-effectiveness comes in. The difference between treatments is not expressed in QALYs alone, but in terms of the cost per QALY. It’s not just outcomes, but outcomes relative to cost. In modern terms, we’d call this value.
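The arithmetic behind cost per QALY is straightforward. Here is a minimal sketch with made-up numbers (the treatments, costs, QALY figures, and the $100,000 threshold used below are illustrative, not drawn from any particular study):

```python
# Hypothetical illustration: comparing two treatments by cost-effectiveness
# rather than effectiveness alone. All numbers are invented for this sketch.

def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Treatment B produces more QALYs than treatment A, but costs more.
cost_a, qaly_a = 20_000, 4.0
cost_b, qaly_b = 50_000, 4.5

ratio = icer(cost_b, qaly_b, cost_a, qaly_a)
print(ratio)  # 60000.0 -> each extra QALY from B costs $60,000

# By effectiveness alone, B simply wins (4.5 > 4.0 QALYs). Whether B is
# *worthwhile* depends on a willingness-to-pay threshold per QALY:
threshold = 100_000
print(ratio <= threshold)  # True -> B is judged cost-effective here
```

The same two treatments can yield opposite recommendations depending on whether you look only at the QALY difference or at the ratio against a threshold.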
The term cost-effectiveness (which I will abbreviate as CE) became somewhat toxic when the results of CE analyses called into question commonly used but seemingly valueless treatments, and produced recommendations to avoid them. To some conspiracy-minded folks this sounded suspiciously like rationing. In addition, in keeping with the adage that one person’s trash is another person’s treasure, vested interests were threatened. The demise of CE can plausibly be traced to 1994, when the Agency for Health Care Policy and Research (AHCPR) – created during the first Bush administration for the purpose of creating evidence-based clinical practice guidelines – released its guidelines on the management of low back pain. Citing the lack of evidence to support the cost-effectiveness of surgical treatment, the guideline recommended non-surgical approaches. Spine surgeons went nuts. They successfully lobbied Congress to slash AHCPR’s funding and to rein in its mandate, changing the name to Agency for Healthcare Research and Quality (AHRQ) – no more policy!
Fast forward to 2010, when a new Congress was drafting the Affordable Care Act. While desiring to promote evidence-based practice to reduce waste (like George H.W. Bush 20 years earlier), they were wary of the hysteria suggesting that death panels were on the horizon. In an ultimately unsuccessful effort to appease these critics, the law created PCORI – the Patient-Centered Outcomes Research Institute – to provide patients and the public “information they can use to make decisions that reflect their desired health outcomes,” but explicitly forbade it from doing cost-effectiveness analyses.
Aside from the wonderful irony of free-market proponents espousing consumerism and “value” while prohibiting value-based analysis, a recent article in Health Affairs demonstrates some of the consequences of this decision. (Disclosure – Dr. Glick, the senior author, was the one who taught me CE when I was getting my master’s in clinical epidemiology in the early 90s.) The purpose of the study was to see if the ban on PCORI-supported CE matters. Are there important differences in recommendations based on an analysis of simple effectiveness vs. cost-effectiveness? The authors reviewed over 2,000 previously published CE analyses. The good news was that in 81% of the cases, whether you used simple effectiveness or took cost into account, you’d reach the same conclusion.* One could conclude, then, that the congressional embargo on CE doesn’t matter that much, since using that method would only change the recommendation in about 1 case out of 5.
However, the authors estimated the economic impact of recommending low-value care based on the 19% of analyses where the treatment that would be recommended based on simple effectiveness turns out not to be cost-effective. The overall cost of such low-value care is $412 billion annually, or 14% of overall health spending. That’s a lot of money.
The champions of value have, therefore, subverted the ability to deliver on value because of their aversion to cost-effectiveness. Which is silly. After all, toothfish tastes the same as sea bass. If we want to find value and reduce waste in health care we need to look for it. Under whatever name.
*(For the detail-minded among you, it depends a little on how much you are willing to spend for a given outcome. Traditionally, CE analysis uses a cut-off of $100,000 per QALY gained. If a treatment costs more than that, it’s not considered worthwhile. Some experts have recommended higher thresholds – recommending a treatment even if it costs as much as $200,000 per QALY – while others have used cut-offs as low as $50,000. In this study, changing the threshold leads to an agreement rate that ranges from as low as 68% to as high as 89%.)
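The threshold sensitivity in the footnote is easy to see with a toy number. A quick sketch (the $75,000-per-QALY figure is invented purely to land between the thresholds the footnote mentions):

```python
# Hypothetical: a single treatment with an incremental cost of $75,000 per
# QALY gained, evaluated against the three thresholds the footnote cites.
# The verdict flips with the threshold, which is why the study's agreement
# rate varies as the threshold moves.
cost_per_qaly = 75_000

for threshold in (50_000, 100_000, 200_000):
    verdict = "cost-effective" if cost_per_qaly <= threshold else "not cost-effective"
    print(f"at ${threshold:,}/QALY: {verdict}")
# at $50,000/QALY: not cost-effective
# at $100,000/QALY: cost-effective
# at $200,000/QALY: cost-effective
```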