Viewpoint
Aug 2013

Defining Quality, Disseminating Evidence, and Enforcing Guidelines for Cancer Treatment

Thomas W. LeBlanc, MD, MA and Amy P. Abernethy, MD, PhD
Virtual Mentor. 2013;15(8):713-717. doi: 10.1001/virtualmentor.2013.15.8.oped1-1308.


Dr. Robert Bristow and colleagues recently reported on the quality of cancer care provided to a large group of patients in California. In reviewing more than 13,000 cases from 1999 through 2006, they found that only about 37 percent of women with ovarian cancer had received the recommended standard treatment for their disease, as laid out in the National Comprehensive Cancer Network (NCCN) guidelines at the time [1]. This staggering finding is unfortunately just the latest piece of evidence in the evolving story about tensions between quality and cost in U.S. health care. Though we spend more of our gross domestic product on health care than any other country in the world, many of our outcomes are no better than those of other industrialized nations [2]. Our health care system has the uncanny ability to simultaneously provide the most expensive, unproven care to many and fail to consistently provide proven, guideline-recommended interventions. Or so it seems.

These recent findings are of course gut-wrenching and seemingly unfathomable. Who would deny appropriate treatment to so many women with a life-threatening disease? The roots of the problem, however, are far from clear; its causes are complex and insidious, stemming from a tangled web of issues implicating physicians, payers, systems, and patients, all well-intentioned yet sometimes contributing to the provision of inadequate care. Key contributors, which we highlight here, include (1) the difficulty of getting new evidence incorporated into clinical consciousness, (2) disagreements about definitions and measures of quality, along with problems with the guidelines themselves (conflicts with other guidelines, failure to reflect the latest evidence and innovations, and so on), and (3) the challenge of enforcing adherence to guidelines.

Defining Quality and Disseminating Information

While reasonable people can agree that no patient should receive substandard care, we face enormous struggles in changing physician behavior and incorporating new knowledge into clinical practice [3]. Consider the following example. A general gynecologist finds an unexpected pelvic mass while performing a routine hysterectomy. She sees just a handful of ovarian cancer cases each year and is not versed in the intricacies of its surgical management or the latest evidence guiding it. The patient is already in the operating room at a small community surgery center with no local gynecologic oncologist, so she proceeds to remove the mass, unfortunately rupturing it and spilling its contents into the pelvis. No lymph nodes are sampled, the omentum is not removed, and no other biopsies are obtained. Thus, the patient leaves the operating room without complete cancer staging and with a suboptimal resection that is likely to worsen her prognosis and necessitate another surgery.

The caring decision to quickly remove a newly found pelvic mass thus becomes an instance of suboptimal (if not negligent) care, despite good intentions. Should the gynecologist have stopped the operation, closed the incisions, and referred the patient to a specialist as an outpatient, delaying definitive surgery and perhaps causing considerable distress? Reasonable people might disagree on the definition of quality care in a case like this, and agreed-upon definitions might conflict with reasonable views on how to help this particular patient when an expert is not available.

Such is the challenge of actually defining “quality.” Experts often disagree on the standard by which to measure quality care. As we increasingly focus on value-based care and more consciously consider costs, however, we must have some yardstick by which to measure the care we provide. Unfortunately, these measures can be quite imperfect; sometimes we choose the wrong measure or fail to recognize the downstream consequences of our choices. For example, current pneumonia treatment guidelines require the provision of antibiotics within 4 hours of presentation to the emergency department. The goal, of course, is timely provision of appropriate care. One of many unintended consequences, however, may be the overtreatment of less serious conditions (viral bronchitis, for example, which generally should not be treated with antibiotics) by clinicians trying to avoid a penalty should a patient later prove to have pneumonia after the 4-hour window has passed. Downstream, this could lead to antibiotic resistance, along with unnecessary drug costs. Outcomes like this reflect Goodhart’s famous axiom from the financial world: “When a measure becomes a target, it ceases to be a good measure” [4].

As the surgical example demonstrates, appropriate referral to experts is certainly an important part of closing the quality chasm. Indeed, the “clinician comfort level” problem it illustrates is precisely why gynecologic oncology fellowships exist, why subspecialty board exams are important, and why specialists have their own conferences. Would those things really have helped in this case, though, when no gynecologic oncologist was available to assist in the operating room? Even where experts abound, the challenge of getting evidence incorporated into practice remains; the process tends to be slow, inefficient, and inconsistent, even among experts themselves.

Continuing education activities are helpful but insufficient to keep physicians up to date with fast-paced change [5, 6]. This is especially true in oncology, given how complex and diverse our options have become and the pace at which new therapies reach the market. Furthermore, many academic centers that pride themselves on staying “ahead of the curve” provide promising therapies before truly mature data exist about their efficacy or their appropriateness as standards of care or in conjunction with proven therapies. (This does not mean it is wrong to provide new treatments, which may in some cases be better for a particular patient.) Where guidelines do exist, the recommendations of one frequently conflict with those of another, and many oncologists disagree with the specifics. Moreover, available guidelines may change annually because of the speed of evidence development. All this conspires to make quality monitoring in oncology a rather precarious endeavor. Our evidence base is quite imperfect; we often really do not know which treatment is best.

Guideline Enforcement

If measuring quality is a tricky business, so too is enforcing adherence to guidelines and recommendations. Pay-for-performance initiatives and penalties for complications and errors are emerging strategies for enforcing quality guidelines, but such initiatives have yet to take hold meaningfully in oncology. One promising development is the American Society of Clinical Oncology’s (ASCO) “Top Five” list, meant to encourage reflection about high-value, cost-conscious care by highlighting five specific costly, unproven treatments to avoid; unfortunately, the campaign lacks any enforcement “teeth” [7].

Promoting adherence to proven, standard therapies, on the other hand, rather than merely discouraging ineffective ones, is a different and more challenging task. One potentially promising strategy is the use of so-called “care pathways.” These pathways are effectively roadmaps that seek to standardize cancer treatment on the basis of a reasonably agreed-upon body of evidence or guidelines, within the confines of a particular center or group of patients. Whether this strategy will catch on, eventually to the point that payers provide actionable incentives for sluggish systems to adopt it, remains unclear. From a behavioral economics standpoint, however, it is plausible and intuitively desirable. Evidence suggests that “defaults” powerfully shape people’s eventual choices; several countries have capitalized on this phenomenon to increase rates of organ donation by enacting policies of presumed consent [8]. Care pathways would likewise establish some definition of “quality cancer care” as the default for all patients, one from which clinicians could opt out when it did not apply in a particular case. Making guideline-based care the “default” option would very likely increase quality, at least by this measure.

Care pathways only scratch the surface of what needs to be done, however. “Learning health care systems,” such as ASCO’s CancerLinQ, have the potential to operationalize quality measures more effectively, bringing on-the-fly quality monitoring and feedback to individual clinicians and practices [9, 10]. Learning systems and electronically available data can also facilitate clinical decision support, linking specific details about the patient (e.g., age, disease, preferences) to the available clinical options to present the best possible approach in real time at the point of care. This recursive provision of feedback will not only enhance adherence to agreed-upon guidelines, imperfect as they may be, but also help us study and develop the quality measures of the future by making data collection and analysis an active part of routine clinical care. Even the clinical practice guidelines or care pathways themselves can “learn” in such a system, being iteratively updated as outcomes data highlight the optimal choices at each decision node in the pathway. Such is the future; CancerLinQ brings us a step closer to that reality.

Conclusion

In the end, clinicians are generally good people, trying to do a good job and working to help patients who face devastating diagnoses. Despite this, we still sometimes fail to provide optimal care. How can we improve the status quo? We must be thoughtful about how we proceed. This is a time of major growing pains, as medical practice shifts from an individual, experience-based endeavor to the systematized, guideline-based, value-driven, and regulated provision of care by teams. As we focus increasingly on quality and value, more attention will fall on guideline-based care, and this is probably good for patients.

“Care pathways” appear to be a promising way to establish care consistent with the latest high-quality evidence as the “default” option. However, we must be careful not to treat pathways as the be-all and end-all of medical practice; medicine is complex, and not every patient should get the same treatment. Clinicians must retain the autonomy to deviate from pathways when appropriate. We must demand that pathways be personalized, integrating each patient’s unique information and tailoring recommendations to individual circumstances, and that they be continuously evaluated and updated as aggregated outcomes data accumulate. Such is the difficulty of standardizing cancer care, but we owe it to our patients to do better than 37 percent.

References

  1. Grady D. Widespread flaws found in ovarian cancer treatment. New York Times. March 11, 2013. http://www.nytimes.com/2013/03/12/health/ovarian-cancer-study-finds-widespread-flaws-in-treatment.html?pagewanted=all. Accessed July 3, 2013.
  2. Reinhardt UE, Hussey PS, Anderson GF. U.S. health care spending in an international context. Health Aff (Millwood). 2004;23(3):10-25.
  3. Eisenberg JM, Williams SV. Cost containment and changing physicians’ practice behavior. JAMA. 1981;246(19):2195-2201.
  4. Goodhart CAE. Problems of monetary management: the U.K. experience. Papers in Monetary Economics, volume 1. Sydney: Reserve Bank of Australia; 1975.
  5. Casebeer L, Kristofco RE, Strasser S, et al. Standardizing evaluation of on-line continuing medical education: physician knowledge, attitudes, and reflection on practice. J Contin Educ Health Prof. 2004;24(2):68-75.
  6. Davis D, O’Brien MA, Freemantle N, Wolf FM, Mazmanian P, Taylor-Vaisey A. Impact of formal continuing medical education: do conferences, workshops, rounds, and other traditional continuing education activities change physician behavior or health care outcomes? JAMA. 1999;282(9):867-874.
  7. Schnipper LE, Smith TJ, Raghavan D, et al. American Society of Clinical Oncology identifies five key opportunities to improve care and reduce costs: the top five list for oncology. J Clin Oncol. 2012;30(14):1715-1724.
  8. Abadie A, Gay S. Impact of presumed consent legislation on cadaveric organ donation: a cross-country study. J Health Econ. 2006;25(4):599-620.
  9. Eastman P. New Institute of Medicine report urges transformation of US health care into continuous learning system. Oncology Times. September 8, 2012. http://journals.lww.com/oncology-times/blog/onlinefirst/pages/post.aspx?PostID=514. Accessed July 3, 2013.
  10. Eastman P. Online first: ASCO’s continuous learning prototype passes proof-of-principle test. Oncology Times. April 2, 2013. http://journals.lww.com/oncology-times/blog/onlinefirst/pages/post.aspx?PostID=714. Accessed July 3, 2013.


The viewpoints expressed in this article are those of the author(s) and do not necessarily reflect the views and policies of the AMA.