From the Editor
Jan 2013

Evidence-Based Medicine: A Science of Uncertainty and an Art of Probability

Matthew Rysavy
Virtual Mentor. 2013;15(1):4-8. doi: 10.1001/virtualmentor.2013.15.1.fred1-1301.

 

A science of uncertainty and an art of probability [1]—that is how William Osler portrayed medicine as he practiced it at the turn of the last century, but he may as well have been describing the current era of “evidence-based” medicine.

The moniker “evidence-based” made its debut in the early 1990s. As Gordon Guyatt, a physician at McMaster University, first described the term:

Clinicians were formerly taught to look to authority (whether a textbook, an expert lecturer, or a local senior physician) to resolve issues of patient management. Evidence-based medicine uses additional strategies, including quickly tracking down publications of studies that are directly relevant to the clinical problem, critically appraising these studies, and applying the results of the best studies to the clinical problem at hand [2].

The rest is history. In two decades, the skills laid out by Guyatt have become an integral component of medical practice and training. Today, nearly all U.S. medical schools report teaching evidence-based medicine (EBM) as part of a required course [3], and the Accreditation Council for Graduate Medical Education (ACGME) has incorporated EBM into U.S. residency training requirements [4].

During this time, the methods of EBM have evolved. Guyatt’s first article, for example, used the term “microcomputer” and noted that the cost of retrieving a few citations from MEDLINE was $0.79. Today, the National Library of Medicine makes MEDLINE citations of published research freely available through PubMed; research abstracts in many journals have been restructured for efficient appraisal; and numerous secondary resources, such as ACP Journal Club and DynaMed, summarize and review original research for clinical relevance [5].

The ideas underlying EBM have evolved, too. For instance, notions about shared decision making have been refined. Recent discussions of EBM emphasize the importance of “decision aids” and other means by which patients participate in their own treatment [6, 7]. A common current definition of EBM is “the integration of best research evidence with clinical expertise and patient values” [8]—a description that seems not so distant from Osler’s science and art.

However, EBM is not old hat [9]. Systematic research evidence is more abundant and accessible than ever before, and EBM provides an original framework for integrating the results of this research into clinical practice [10]. It also proposes new methods for assessing how research evidence should be applied, emphasizing transparency and explicitness in how evidence is interpreted and used to make recommendations [11].

The rationale for EBM seems obvious: if the results of clinically relevant research are readily available and may benefit patients, we should consider such research in making medical decisions [12].

Yet, the practice of EBM is not always so straightforward. The articles in this issue of Virtual Mentor illustrate how implementing EBM in clinical practice, policy, and education can be complicated. They shed light on many of EBM’s evolving strengths, but also bring into focus the contours and boundaries of this new tool of modern medicine.

Many aspects of EBM concern values in addition to facts. They are intimately linked to customary considerations of medical ethics, including autonomy, justice, beneficence and nonmaleficence, and they raise important questions regarding these principles.

Autonomy: What should be the patient’s role in interpreting and applying research evidence in clinical decisions? Moreover, if the way in which evidence is conveyed to patients can alter their decisions [13], then how should research evidence be communicated?

In this issue, a thoughtful commentary by Lauris Kaldjian and Paul Christine addresses the complex and interrelated decisions that physicians face in discussing research evidence with patients. Valerie Reyna and Evan Wilhelms describe what has been learned from their research and that of others about effective strategies for communicating risks and benefits.

Thomas LeBlanc reflects on his experiences as an oncologist to address a related question: Can providing research evidence to a patient be harmful? Reprinted alongside his contribution is an essay by the late Stephen Jay Gould about making sense of the statistics and probabilities he encountered in his own struggle with cancer.

Justice: Who should have the authority to prioritize research questions and funding? Whose interests should be considered in the interpretation and dissemination of research evidence?

Chetan Huded, Jill Rosno, and Vinay Prasad provide an excellent summary of John Ioannidis’s essay “Why Most Published Research Findings Are False,” one of the most cited articles in the history of the journal PLoS Medicine [14]. They examine the biases of medicine’s “evidence base” and contribute suggestions to rectify these shortcomings based on insights from their own research into “medical reversals” [15].

Jodi Halpern and Richard Kravitz discuss the role of “health advocacy organizations” in the dissemination and interpretation of research evidence and consider the special case in which interpretations of research evidence conflict. Joanna Siegel at the Agency for Healthcare Research and Quality (AHRQ) and her colleagues at the American Institutes for Research (AIR) outline methods of “public deliberation” to engage the public in decisions about health research. They note a recent study conducted by AHRQ to learn about ways to gather public input on the use of medical evidence in guiding health care practice [16].

Beneficence and nonmaleficence: Under what conditions, if any, should a physician disregard codified “best evidence” for the benefit of an individual patient? And how should physicians make decisions in the best interests of their patients in the absence of good research evidence?

Concerns about implementing EBM become particularly pronounced when clinical practice guidelines, quality measures, and reimbursements are linked to scientific research evidence. William Dale and Erika Ramsdale elaborate on how these concerns apply to providing care for older patients, whose health is often particularly complex and difficult to generalize and who are often excluded from participation in research for these reasons. Valarie Blake examines a related issue: the role of clinical practice guidelines in the courts.

Salima Punja and Sunita Vohra contribute a description of “n-of-1 trials,” which have been used in clinical practice to obtain patient-specific research evidence. The method was first implemented by Gordon Guyatt and David Sackett in the 1980s [17] and is currently used in clinical services around the world, including one at the University of Alberta headed by Dr. Vohra [18].

Evidence-based medicine is, at its foundation, about medical education—not only does EBM emphasize the role of traditional medical training in disseminating its new methods [19], but it also requires all doctors to engage in a continual process of education in order to make use of current research evidence. Therefore, issues surrounding the use of EBM should be the concern of trainees, educators, and anyone else interested in the education of today’s physicians.

The contribution to this issue by Ariel Zimerman, a medical historian and physician, traces EBM’s roots in medical education at McMaster University. And an essay written by Martha Carvour (who, I should note, taught me much of what I know about evidence-based medicine) provides suggestions for teaching medical students how to think critically about incorporating research evidence into clinical practice. Dien Ho provides a brief account of some ways that “evidence” has been conceptualized by philosophers of science throughout history.

Finally, Ross Upshur’s commentary brings to a point the message that readers should take away from the whole of this issue: evidence-based medicine education should be integrated with an education in clinical ethics.

As the articles in this issue make clear, EBM is a powerful tool with the potential to improve clinical decision making and, ultimately, the health of patients. But, as Guyatt and his colleague Victor Montori have noted, EBM can also be dangerous when used inappropriately [7].

Evidence-based medicine can be likened to a scalpel or to a potent drug with possible adverse effects. The effect of an “evidence-based” approach to medicine depends upon accurate and appropriate integration of research evidence into patient care. Using EBM requires precision, attention, and humility.

Like the other tools and techniques of modern medicine’s armamentarium, the use of EBM should require deliberate, thoughtful, and mentored experience. This is not a new idea [19]. But it is an important consideration for those entrusted with teaching the next generation of physicians both the science and the art of medicine.

References

  1. Bean RB, Bean WB. Sir William Osler: Aphorisms from His Bedside Teachings and Writings.

  2. Guyatt GH. Evidence-based medicine. ACP J Club. 1991;114:A-16.

  3. Association of American Medical Colleges. Basic science, foundational knowledge, and pre-clerkship content: inclusion of topics in required and/or elective courses. Accessed December 1, 2012.

  4. Nasca TJ, Philibert I, Brigham T, Flynn TC. The next GME accreditation system—rationale and benefits. N Engl J Med. 2012;366(11):1051-1056.
  5. Guyatt G, Cook D, Haynes B. Evidence based medicine has come a long way. BMJ. 2004;329(7473):990-991.
  6. Haynes RB, Devereaux PJ, Guyatt GH. Clinical expertise in the era of evidence-based medicine and patient choice. ACP J Club. 2002;136(2):A11-A14.
  7. Montori VM, Guyatt GH. Progress in evidence-based medicine. JAMA. 2008;300(15):1814-1816.
  8. Sackett DL, Straus SE, Richardson WS, Rosenberg WM, Haynes RB. Evidence-based Medicine: How to Practice and Teach EBM.

  9. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn’t. BMJ. 1996;312(7023):71-72.
  10. Norman GR. Examining the assumptions of evidence-based medicine. J Eval Clin Pract. 1999;5(2):139-147.
  11. Guyatt GH, Oxman AD, Vist GE, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008;336(7650):924-926.
  12. Goodman KW. Comment on M.R. Tonelli, “the challenge of evidence in clinical medicine”. J Eval Clin Pract. 2010;16(2):390-391.
  13. Fagerlin A, Zikmund-Fisher BJ, Ubel PA. Helping patients decide: ten steps to better risk communication. J Natl Cancer Inst. 2011;103(19):1436-1443.
  14. Freedman DH. Lies, damned lies, and medical science. The Atlantic. November 2010. http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/308269/. Accessed December 12, 2012.

  15. Prasad V, Cifu A, Ioannidis JP. Reversals of established medical practices: evidence to abandon ship. JAMA. 2012;307(1):37-38.

  16. Agency for Healthcare Research and Quality (AHRQ). AHRQ Community Forum. http://effectivehealthcare.ahrq.gov/index.cfm/who-is-involved-in-the-effective-health-care-program1/ahrq-community-forum/. Accessed December 1, 2012.

  17. Guyatt G, Sackett D, Taylor DW, Chong J, Roberts R, Pugsley S. Determining optimal therapy—randomized trials in individual patients. N Engl J Med. 1986;314(14):889-892.
  18. Kravitz RL, Duan N, Niedzinski EJ, Hay MC, Subramanian SK, Weisner TS. What ever happened to N-of-1 trials? Insiders’ perspectives and a look to the future. Milbank Q. 2008;86(4):533-555.
  19. Evidence-Based Medicine Working Group. Evidence-based medicine. A new approach to teaching the practice of medicine. JAMA. 1992;268(17):2420-2425.


The viewpoints expressed in this article are those of the author(s) and do not necessarily reflect the views and policies of the AMA.