In part 2, we discussed the logical fallacies and cognitive biases that influence our perceptions of reality and daily decision making. Our final installment will cover evidence-based practice.
What is Evidence-Based Practice?
Now that we’ve discussed defining evidence and the need for research evidence to overcome cognitive biases and fallacious reasoning, let’s discuss evidence-based practice a bit more. As stated previously, the EBP movement was born out of a need to move beyond experience-based practice (“like … my opinion, man”) or supposed bio-plausibility alone, and toward integrating research evidence into shared clinical decision making.
Most are likely familiar with the “three-legged stool” or “three-pronged” version of EBP, and if you’ve ever been involved in a discussion on this topic, a common argument is for equal weighting of each leg. Interestingly, the best-known member of the EBP group, David Sackett, co-authored a 1996 paper titled Evidence based medicine: what it is and what it isn’t, in which the authors define EBM as:
“Evidence based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research.”
A common counter to EBP is the idea of “cookbook” medicine, though being married to a pastry chef has taught me that we often need a recipe to follow. However, because clinical practice operates at the individual level, we are forced to generalize the research evidence on a given topic to the individual’s case. Therein lies the role of clinical expertise: combining current research evidence on a topic with our clinical experience as it relates to an individual’s case, i.e., EBP.
Sackett and colleagues again: “External clinical evidence can inform, but can never replace, individual clinical expertise, and it is this expertise that decides whether the external evidence applies to the individual patient at all and, if so, how it should be integrated into a clinical decision.”
The interesting aspect of this discussion is that the equal weighting we so often hear about is never mentioned. Instead of a three-legged stool, I prefer the Pillars of Creation (captured by the Hubble Space Telescope in 1995 and rephotographed in 2014). The image depicts three unequal pillars, which I would assign, left to right, as: current best research evidence, clinical expertise, and patient preferences. Fittingly, the image captures a collection of gas and dust attempting to form new stars while light from nearby stars threatens to erode it; we can think of this light as new evidence threatening our current beliefs about a topic.
“BuT In mY ExPeRiEncE”
This is also a common argument from a perceived position of authority, and as stated earlier, experience has its place but (typically) shouldn’t stand alone as a basis for decision making. Even in scenarios where we have no research evidence specific to the issue, we can often extrapolate and generalize from related contexts (see the current recommendations from the WHO/CDC regarding minimizing the spread of COVID-19).
Where does experience tend to get us? Choudhry et al. published an article in 2005, Systematic Review: The Relationship between Clinical Experience and Quality of Health Care, and found that “[the] relationship between clinical experience and performance suggests that physicians who have been in practice for more years and older physicians possess less factual knowledge, are less likely to adhere to appropriate standards of care, and may also have poorer patient outcomes.”
In other words, it would appear we do worse when relying solely on the flawed observations and reasoning we have accumulated over time. We also can’t forget how often healthcare providers overestimate the benefits of our care and underestimate its harms (Hoffmann 2017). This is likely why we see similar perceptions among patients, who overestimate the benefits of treatments and underestimate the harms (Hoffmann 2015), given that in many scenarios we clinicians set patient expectations and influence their preferences (there are obviously other influential variables, such as the internet, family members, and social networks).
If we zoom out, we can see how easily a therapeutic illusion can occur, defined by Thomas as “…an unjustified enthusiasm for treatment on the part of both doctors and patients, which is a proposed contributor to the inappropriate use of interventions” (Thomas 1978). A perfect storm of over-diagnosis and over-treatment can arise from our own confirmation bias, heuristics, and therapeutic illusion. A recent editorial, Musculoskeletal Healthcare: Have We Over-Egged the Pudding?, outlines quite well how this occurs in clinical practice. Returning to the pastry world (I verified this with my wife Erica), over-egging a pudding spoils it by using too many eggs; in our case, the equivalent is medical overuse (doing too much). The editorial discusses four aspects of over-diagnosis, an issue that is complex and multifactorial. We are often indoctrinated from school onward to look for supposed issues (see the prior discussion on VS), even where we lack evidence that a true problem exists. We then base consultations on assessments designed to look for flaws that are merely normative variants or adaptations (posture, ROM, strength/weakness, alignment, lumbar “degeneration”, rotator cuff tears, FAI, meniscus tears, etc.). These erroneous searches (playing Sherlock Holmes) can lead to false narratives, false beliefs, and erroneous expectations about unnecessary interventions, culminating in maladaptive conditioned behaviors.
Zooming out further, much of healthcare in this realm is based on reducing pain and validating medical necessity against any report of pain. Recall the narratives described above that are used to validate interventions targeting specific tissue issues. The example of low back pain applies: many clinicians claim that all low back pain has a cause and that expert assessment can find it. This persists despite evidence that doesn’t support such simple thinking about pain, and such an approach carries many downstream consequences. Which leads to my last point: much of this can be traced to the belief that we are better in clinical practice than we actually are, underestimating our harms and overestimating our benefits. “I’m ahead of the evidence! Just come and watch my success in clinical practice.” No, you aren’t (and that’s okay, I’m not either), and if we could stop and slow ourselves down enough to read the evidence, we’d see this.
We must be willing to tolerate uncertainty in clinical practice. In the words of Voltaire, “Uncertainty is an uncomfortable position, but certainty is an absurd one.” According to Simpkin et al., “Too often we focus on transforming a patient’s grey-scale narrative into a black-and-white diagnosis that can be neatly categorized and labeled. The unintended consequence — an obsession with finding the right answer, at the risk of oversimplifying the richly iterative and evolutionary nature of clinical reasoning — is the very antithesis of humanistic individualized patient-centered care.” The authors go on to state, “We can speak about ‘hypotheses’ rather than ‘diagnoses,’ thereby changing the expectations of both patients and physicians and facilitating a shift in culture.” I’m extremely skeptical that a transition from diagnostic labels to hypotheses will occur anytime soon, but hopefully this will at minimum temper our certainty.
How have we been doing with EBP since the 1990s?
It depends on who you ask. I tend to remain optimistic that the movement is having a beneficial effect on clinical practice, but uptake is still limited. We will likely continue to see an evolutionary process for EBP as we gain new understanding of evidence and how best to integrate it into clinical practice. Djulbegovic and Guyatt released an article in 2017, Progress in evidence-based medicine: a quarter century on. Recall our prior evidence-hierarchy chart, with expert experience as the base and RCTs at the pinnacle in biomedicine. In the article, the authors outline three major epistemological principles of EBM:
Not all evidence is created equal
Pursuit of “truth” is best accomplished by evaluating the totality of evidence on a topic
Clinical decision making requires consideration of patients’ values and preferences
Overall, a major takeaway is learning to weigh evidence based on the quality of its methodology. The authors proposed this update to the concept of a hierarchy of evidence (see above), in which we consider the methodology of studies: study design, control of and risk for biases, and so on. The authors specifically discuss the GRADE criteria (Grading of Recommendations Assessment, Development, and Evaluation), though there are many similar assessment tools and criteria (a toy sketch of GRADE-style rating logic follows this paragraph). Furthermore, EBP isn’t without its critics. One in particular comes to mind: John Ioannidis, a Stanford professor who speaks out quite vocally against the hijacking of EBP. To his credit, he shines a spotlight on several concerning aspects of how science is conducted by humans, which inherently brings cognitive biases, fallacious or motivated reasoning, and competing interests into play. However, this doesn’t mean we should scrap the approach; rather, it calls for continued efforts to evolve EBP for the betterment of society.
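As promised above, here is a minimal sketch of GRADE-style rating logic, purely illustrative: GRADE is a structured judgment process, not a numeric formula, and the function and scoring below are my own simplification (in Python).

```python
# Toy model of a GRADE-style certainty rating. Real GRADE assessment is a
# structured judgment, not arithmetic; this only shows the general shape:
# evidence starts at a level set by study design, then is rated down or up.

RATINGS = ["very low", "low", "moderate", "high"]

def grade_certainty(study_design: str, downgrades: int, upgrades: int = 0) -> str:
    """Approximate a GRADE-style certainty rating (illustrative only).

    Randomized trials start at 'high'; observational studies start at 'low'.
    Each serious concern (risk of bias, inconsistency, indirectness,
    imprecision, publication bias) rates the evidence down one level; factors
    such as a large effect size can rate observational evidence up.
    """
    start = 3 if study_design == "randomized trial" else 1
    level = max(0, min(3, start - downgrades + upgrades))
    return RATINGS[level]

# A body of randomized trials with serious imprecision and serious
# inconsistency lands two levels down, at 'low' certainty.
print(grade_certainty("randomized trial", downgrades=2))  # -> low
```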
Take-Home Message
In conclusion, we all hold beliefs about the world; the question becomes: what level of evidence are we using to substantiate those beliefs? In the context of guiding others’ lives and decision making (a situation we often find ourselves in within clinical practice), we should use the current best research evidence, when possible, to inform shared decision making. Having a general understanding of scientific methodology and statistical analysis goes a long way toward assessing evidence and its relevance to patient cases. Much of this resembles working toward goals in the gym: dedication to the process, completing the sets and reps, and practice. Said differently, preemptively carving out time daily or weekly to read research relevant to your field can go a long way toward improving your ability to assess evidence. Like most things in life, there’s no magic bullet here. Finally, and of particular relevance in today’s social media world, we as champions of EBP should be cautious about claiming that being “evidence-based” somehow elevates us above others. The phrase has almost become an oxymoron at this point. Perhaps, on some level, we should follow Epictetus’s advice: “Don’t explain your philosophy. Embody it.”
References:
Sackett DL, Rosenberg WMC, Gray JAM, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn’t. BMJ. 1996;312(7023):71-72.
Choudhry NK, Fletcher RH, Soumerai SB. Systematic review: the relationship between clinical experience and quality of health care. Ann Intern Med. 2005;142(4):260-273.
Hoffmann TC, Del Mar C. Clinicians’ expectations of the benefits and harms of treatments, screening, and tests. JAMA Intern Med. 2017;177(3):407-419.
Hoffmann TC, Del Mar C. Patients’ expectations of the benefits and harms of treatments, screening, and tests. JAMA Intern Med. 2015;175(2):274-286.
Thomas KB. The consultation and the therapeutic illusion. BMJ. 1978;1(6123):1327-1328.
Maher CG, O’Keeffe M, Buchbinder R, Harris IA. Musculoskeletal healthcare: have we over-egged the pudding? Int J Rheum Dis. 2019;22(11):1957-1960.
Simpkin AL, Schwartzstein RM. Tolerating uncertainty — the next medical revolution? N Engl J Med. 2016;375(18):1713-1715.
Djulbegovic B, Guyatt GH. Progress in evidence-based medicine: a quarter century on. Lancet. 2017;390(10092):415-423.
Ioannidis JP. Evidence-based medicine has been hijacked: a report to David Sackett. J Clin Epidemiol. 2016;73:82-86.