State of the Art and Science
February 2017

Reasonableness, Credibility, and Clinical Disagreement

Mary Jean Walker, PhD and Wendy A. Rogers, BMBS, PhD
AMA J Ethics. 2017;19(2):176-182. doi: 10.1001/journalofethics.2017.19.2.stas1-1702.

Abstract

Evidence in medicine can come from more or less trustworthy sources and be produced by more or less reliable methods, and its interpretation can be disputed. As such, it can be unclear when disagreements in medicine result from different, but reasonable, interpretations of the available evidence and when they result from unreasonable refusals to consider legitimate evidence. In this article, we seek to show how assessments of the relevance and implications of evidence are typically affected by factors beyond that evidence itself, such as our beliefs about the credibility of the speaker or source of the evidence. In evaluating evidence, there is thus a need for reflective awareness about why we accept or dismiss particular claims.

Introduction

Medical practitioners rely on evidence, but evidence can come from more or less trustworthy sources and be produced by more or less reliable methods, and its interpretation can be disputed. It can be difficult to tell when disagreements about the appropriateness of medical interventions result from different, but reasonable, interpretations of the evidence and when they result from unreasonable dismissal of legitimate evidence. Here, we draw on scholarship about judging the credibility of claims developed by the epistemologist Miranda Fricker. We provide a brief outline of her analysis of credibility judgments and apply it to two disputes—over vertebroplasty and vaccination—showing how assessments of evidence can be affected by background beliefs about the credibility of speakers or methods, based on implicitly held credibility heuristics. We argue that clinicians need to exercise reflective awareness about why they accept some claims and dismiss others in order to properly assess whether these judgments are justified.

Judging Credibility

Fricker provides a detailed discussion of how we apportion credibility to the claims made by those around us in the course of everyday life [1]. Her approach is apt for thinking about how medical practitioners may assess claims, since busy clinicians are unlikely to have the time to engage in in-depth assessments of evidence for many particular treatments, much as they might like to do so. She notes that while we can sometimes assess claims directly, by looking at whether they plausibly fit available evidence, most of our knowledge comes from other people. We could not obtain all the knowledge we need without relying on what others tell us. Thus, instead of directly assessing every claim, we often rely on judging whether particular speakers are credible—whether they are (likely to be) competent and sincere. For example, I believe my clinician over my hairdresser on matters of health (given her training), but my hairdresser over my clinician about day-to-day life in the Philippines (since she grew up there).

Often, however, we need to assess claims without having any knowledge about the speaker relevant to assessing his or her credibility. This can be addressed in some cases by relying on reputation or professional position as proxies for relevant personal information, but often even this will be unavailable. Thus, in many cases, we assess credibility by categorizing speakers and drawing on background knowledge about that category [1]. I am therefore likely to believe not just my clinician, but anyone in the category “clinician” on health matters, given my background knowledge about this category. That is, in assessing credibility we rely on heuristics: rough-and-ready rules about what categories of people are likely to be reliable sources about particular matters. My implicit heuristic in this example is that “clinicians are reliable sources about health matters.” Heuristics can attach to any category, not only professions: I would believe a Parisian over a tourist about directions from the Eiffel Tower to the Louvre, parents over those without children on statements about parenting, and so on.

Problems arise if credibility heuristics are themselves incorrect. Fricker argues that we sometimes adopt incorrect heuristics due to social prejudices. Credibility may be apportioned on the basis of potentially irrelevant features of speakers such as their sex, race, class, and so on. If a racist society encourages a racist heuristic, such as “people with black skin often lie,” judgments of credibility accordingly become biased [2]. Heuristics can also cause problems because they are generally but not invariably true. My “clinician heuristic,” for instance, is likely to be reliable overall but could occasionally fail if a particular clinician is misinformed or biased. Despite their potential to mislead those who use them, rough, implicitly held credibility heuristics are relied on because they are quick and easy to use.

The Dispute over Vertebroplasty

Vertebroplasty is the injection of bone cement into a fractured vertebral body to treat pain following acute osteoporotic fracture. Vertebroplasty achieved positive results in clinical practice and in retrospective and nonrandomized studies published in the early 2000s [3]. Following dissemination of this evidence, it became a standard treatment. (Surgical procedures do not require regulatory approval and can be widely adopted without “high-level” evidence.) In 2009, two randomized controlled trials (RCTs), which are usually considered the gold standard in medical research, showed vertebroplasty to be no better than placebo [4].

Some researchers—who had extensive clinical experience with vertebroplasty—disputed the RCT results, claiming the research was conducted on the wrong population [5]. The majority of the participants in the two RCTs had experienced pain for more than six weeks. The disputants claimed that vertebroplasty is most efficacious for patients with pain of less than six weeks’ duration. They argued that initial pain following vertebral fracture is caused by movement of the fracture fragments and may be present until the fracture heals, while pain of longer duration has a different cause, e.g., biomechanical strain. Vertebroplasty cements fracture fragments together, so it works only for unhealed fractures [5].

In response, those who favored accepting the RCT results provided reasons for preferring RCT evidence over that deriving from other experimental designs, clinical experience, or mechanistic reasoning [4]. RCTs include a control as well as a treatment arm, allowing researchers to identify whether outcomes can be attributed to the intervention. Blinding participants and researchers prevents placebo effects and clinicians’ expectations from biasing the study outcomes. Moreover, randomization prevents bias in the allocation of research participants and controls for the influence of unknown confounders [6-7]. Other experimental designs, clinical experience, and mechanistic reasoning do not control for these potential biases. Indeed, some proponents of the RCT results claimed that disputing those results was merely a reflection of the “strength of clinicians’ placebo reactions” [8]—i.e., that the dispute was unreasonable, itself motivated by bias.

Is the Dispute Reasonable?

In this case, assessment of the evidence involves a heuristic concerning the reliability, not of speakers, but of methods of evidence generation [1]. The considerations above provide reason to accept the heuristic that “evidence generated using RCTs is more likely to be reliable than evidence from clinical experience, mechanistic reasoning or other experimental designs.” Having made the heuristic and the reasoning behind it explicit, we can see that it is well supported. Yet it is a general, not an absolute, rule. One can consistently accept this heuristic and recognize that some RCT results will be incorrect. RCTs can be fraudulent or badly conducted, use inappropriate endpoints, or test nonoptimal versions of a technique or inappropriate populations. Indeed, the limitations of RCTs are well-known. For instance, they provide no information about how correlated variables are causally related and require a methodological rigor that makes generalizing their results to diverse populations problematic [6-7, 9].

By identifying and examining the heuristic underlying the claim that disputing the RCT results was unreasonable, we can see that this claim must be tempered. Although the RCT heuristic is rationally based, it is a general rather than an absolute rule. Thus, recognizing the robustness of RCT evidence does not imply that it is unreasonable to question whether patients’ symptom duration makes a difference to the efficacy of vertebroplasty.

Disputes about Vaccination

The complex set of disputes about vaccination can be loosely framed as a disagreement between a “mainstream” medical establishment (composed of health professionals, researchers, and government health officials) and vaccine critics. The former group claims that the vaccines in use are safe and effective. The latter hold various views, ranging from concerns about safety issues related to manufacturing processes, to belief in a right to refuse medical treatment, to claims that the medical establishment has been either deceived or corrupted by pharmaceutical companies with financial interests in widespread vaccination [10-11].

The hope that disputes about vaccination can be settled by evidence is complicated by the existence of different bodies of research evidence. The disputants can each cite evidence supporting their view while dismissing conflicting evidence, and they do not always agree on standards for judging evidence [10-11]. It is common for disputants to cast doubt on the reliability of the researchers who conducted particular studies by accusing them of bias related to research funding sources. Mainstream researchers are often government or industry-funded, while vaccine-critical researchers are sometimes funded by vaccine-critical groups [12-13].

Credibility and Categorization

One particular thread of this dispute illustrates the extent to which disagreement can be influenced by how a speaker is categorized and how a category of speakers is perceived. Some vaccine critics hold extreme views about vaccination itself (e.g., that it is a conspiracy to poison our children), or about other matters (e.g., that the government manipulates people and the environment by releasing various chemicals through airplanes) [14]. Holding extreme views tends to lower the speaker’s overall credibility [12]. In Fricker’s terminology, we place such speakers in a category, “the vaccine-critical,” and adopt a heuristic that “vaccine critics are not reliable sources.” The implicit reasoning behind this heuristic seems to be that people who accept unlikely or odd claims are not reliable sources.

But that a claim seems unlikely or odd is not always a good indication that it is incorrect. Claims that were in the past considered “crazy conspiracy theories” have turned out to be true (e.g., Watergate [15]). The view that the medical establishment is deceived or corrupt does seem unlikely, since it would involve so many people being significantly influenced. Yet it is uncontroversial that available medical evidence is affected by funding mechanisms, conflicts of interest, and publication biases [16]. There is at least some reason to think that mainstream medical knowledge could be distorted. The heuristic at work in the vaccination dispute—namely, “vaccine critics are not reliable sources”—may turn out to be difficult to rationally support.

A further problem with the heuristic is the breadth of the category. Some “vaccine critics” hold more moderate views, and some are hesitant about vaccines due to uncertainty. These people may be unfairly dismissed due to being placed in this category, leading to potential harms. For instance, if a clinician responds to parental hesitancy about vaccines with anger instead of information because she interprets uncertainty as an “antivaccine” stance, this could lessen parents’ trust in her and even contribute to their developing such a stance [11].

Of course, these problems with the heuristic do not imply that the vaccine critics’ claims should be accepted. They show only that one reason that vaccine critics’ claims are often judged to lack credibility does not have strong rational support. This case further shows that categorizations can sometimes lead to unfairly dismissing or misinterpreting the claims of others in ways that are unhelpful.

Conclusion

Assessments of clinical evidence can be strongly influenced by rough and largely implicit heuristics about those making the claims, the groups to which they belong, or methods of evidence gathering. For these reasons, in assessing medical disagreements, it is helpful for people to reflect on and critically evaluate the heuristics that underlie their judgments of credibility and what those heuristics really justify.

References

  1. Fricker M. Epistemic Injustice: Power and the Ethics of Knowing. Oxford, UK: Oxford University Press; 2007.

  2. This heuristic is based on Fricker’s analysis of a line from Atticus Finch’s closing speech in To Kill a Mockingbird, “The witnesses of the state have presented themselves to you … confident that you gentlemen would go along with them on the assumption—the evil assumption—that all Negroes lie.” See Lee H. To Kill A Mockingbird. New York, NY: HarperCollins; 1999:233.

  3. Klazen CA, Lohle PN, de Vries J, et al. Vertebroplasty versus conservative treatment in acute osteoporotic vertebral compression fractures (Vertos II): an open-label randomised trial. Lancet. 2010;376(9746):1085-1092.

  4. Buchbinder R, Osborne RH, Kallmes D. Invited editorial provides an accurate summary of the results of two randomised placebo-controlled trials of vertebroplasty. Med J Aust. 2010;192(6):338-341.

  5. Clark WA, Diamond TH, McNeil HP, Gonski PN, Schlaphoff GP, Rouse JC. Vertebroplasty for painful acute osteoporotic vertebral fractures: recent Medical Journal of Australia editorial is not relevant to the patient group that we treat with vertebroplasty. Med J Aust. 2010;192(6):334.

  6. Cartwright N. Are RCTs the gold standard? BioSocieties. 2007;2(1):11-20.

  7. Cartwright N. What are randomised controlled trials good for? Philos Stud. 2010;147:59-70.

  8. Miller FG, Kallmes DF. The case of vertebroplasty trials: promoting a culture of evidence-based procedural medicine. Spine (Phila Pa 1976). 2010;35(23):2025.

  9. Clarke B, Gillies D, Illari P, Russo F, Williamson J. Mechanisms and the evidence hierarchy. Topoi. 2014;33(2):339-360.

  10. Dare T. Disagreement over vaccination programs: deep or merely complex and why does it matter? HEC Forum. 2014;26(1):43-57.

  11. Navin M. Competing epistemic spaces: how social epistemology helps explain and evaluate vaccine denialism. Soc Theory Pract. 2013;39(2):241-264.

  12. Kirkland A. The legitimacy of vaccine critics: what is left after the autism hypothesis? J Health Polit Policy Law. 2012;37(1):69-97.

  13. Wolfe RM, Sharp LK. Anti-vaccinationists past and present. BMJ. 2002;325(7361):430-432.

  14. Gardner Z. Chemtrail crimes: human hybridization and aerial vaccinations. Aircrap.org. http://www.aircrap.org/2016/03/10/chemtrail-crimes-human-hybridization-aerial-vaccinations/. Published March 10, 2016. Accessed November 11, 2016.

  15. Coady D. What to Believe Now: Applying Epistemology to Contemporary Issues. Malden, MA: Wiley-Blackwell; 2012.

  16. Goldacre B. Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients. New York, NY: Farrar, Straus and Giroux; 2013.


Acknowledgements

The first author’s work on this paper was supported by the ARC Centres of Excellence funding scheme (CE140100012).

The views expressed are those of the authors and are not necessarily those of the Australian Research Council.

The viewpoints expressed in this article are those of the author(s) and do not necessarily reflect the views and policies of the AMA.