
I WANT TO BELIEVE


We all have beliefs about the world. Some we may have spent significant time establishing – for example, by pursuing a degree in a particular field of study – while others we may have passively adopted through years of sociocultural indoctrination. Sometimes we even hold beliefs without being entirely sure where they developed from, or what evidence we used to establish them. Enter Westworld: “Have you ever questioned the nature of your reality?”


Introduction

About 17 years ago, I was convinced there was something “wrong” with my left shoulder that necessitated surgical treatment. Around the same time, I had bought into the Muscular Development style of training and read about bodybuilder splits, meal timing, protein consumption “windows of opportunity”, exercises framed as good vs. bad / injurious vs. non-injurious, how to be like the pros, etc.


About 11 years ago, I was looking to better understand developmental coordination disorders in children, using fMRI/MRI to peer into the brain to identify problems that might be fixed. You could also catch me ranting at the time about how an exercise isn’t “bad” – it’s merely how YOU perform the exercise that is “bad” – and promoting perfect ways to breathe, squat, deadlift, or move in general. Other prevalent beliefs at the time involved seeking to create symmetry to mitigate pain (Janda), or the idea that we are under-trained in the transverse plane.


In 2012, I would have tried to convince you that CrossFit was THE way to exercise, and “Paleo” was THE way to eat for your goals — although oddly in 2008 I would have contradicted myself regarding CrossFit while teaching an undergrad class for strength and conditioning. I would have also likely convinced you that you needed joint manipulations, scraping, kinesiology tape, foam rolling, stretching, etc. to “get yourself out of pain”, as these are also interventions I sought for myself to deal with the experience of pain or to “improve performance”.


In 2015, I would have been convinced that pain was an “output of the brain”. Today, I tend to follow two quotes for these discussions. The first is from Richard Feynman, American physicist: “The first principle is that you must not fool yourself – and you are the easiest person to fool.” The second is from Carl Sagan, an astronomer amongst many other things: “Extraordinary claims require extraordinary evidence.”

This trip down memory lane isn’t isolated to just my beliefs. I am certain we can all reflect on various beliefs we’ve held in life that we now cringe to think about. I’m also willing to bet that if, in ten years, I re-read this piece or others I’ve written where I thought I knew more or was more enlightened … I’ll cringe then too.

More to the point of this month’s article, we all hold beliefs about the world that influence our daily lives and decision making, as well as the lives of those around us. The question becomes: what evidence are we using to substantiate these beliefs?


Epistemology is the study of knowledge and justified belief. In essence, we are seeking to answer the question “How do we know what we think we know?” We previously recorded a podcast on epistemic responsibility that can be found HERE.

I’m also a big fan of seeing epistemic conversations in action via Street Epistemology with Anthony Magnabosco. In his YouTube series, Anthony speaks with folks at universities and colleges or on local walking trails and simply asks them to state a belief or claim they hold to be true; together they then explore the evidence. Anthony spends time asking open-ended questions to try to identify the evidence a person uses to substantiate their belief. He has become, in my opinion, a Jedi master at exploring beliefs in a non-confrontational and not overly emotional manner. Although he is open to discussing anything, these discussions often involve political or religious topics, as these tend to be areas with strongly held beliefs.


Unsurprisingly, many people have adopted beliefs from tradition (sociocultural underpinnings), aren’t sure of the origins of their beliefs, or have blindly accepted the source or evidence for the belief as “Truth”. This can be an uncomfortable process for someone not used to questioning their beliefs and then having to provide answers to substantiate why they believe what they believe. I could turn this into a rant about schools needing to teach us how to think, but I’ll save that discussion for another time. This does bring up an important question the reader may be asking: what is “Truth”?


I don’t presume to have an answer to this question, and I don’t pretend we will ever know “Truth”. However, I do believe we have various methods to try to move ourselves closer to “Truth” and make better, more informed decisions in life – one of which is science. I like Paul Offit’s definition of science:

“Stripped to its essence, science is simply a method to understand the natural world–it’s an antidote to superstition.”

Before we discuss methodology for moving us closer to “Truth”, we need to have a discussion about evidence. In Anthony’s videos, he often comes to realize that a person’s evidence is word of mouth and hasn’t really been examined for validity. This leads us to the question – what is evidence? We will frame this discussion through the lens of healthcare and clinical practice.




Evidence

In the 1990s and early 2000s, a group of individuals formed what was known as the Evidence-Based Medicine Working Group, designed to champion a new paradigm in healthcare based on a “hierarchy of evidence” and collaborative decision making with patients. The premise was to move beyond solely utilizing experience, tradition, or perceived bio-plausibility to rationalize healthcare practices, and instead to incorporate research evidence into decision making when possible.


One of the group’s first papers, published in the Journal of the American Medical Association in 1992, was titled “Evidence-based medicine. A new approach to teaching the practice of medicine.” In this article the working group outlines what EBM is and compares the approach to the old healthcare model. They also discuss ways to integrate this new approach into healthcare, specifically via a residency program. From here, a series of articles was released dubbed the Users’ Guides to the Medical Literature (the first of the series can be found HERE). In 2000, the working group proposed a broad definition for evidence:

“Any empirical observation about the apparent relationship between events constitutes potential evidence.” – Guyatt 2000

“Potential” is the key adjective in this definition. Our clinical observations certainly can be utilized as evidence for decision making; however, relying solely on our observations has risks and flaws that we will discuss later. This particular definition invokes the idea of a hierarchy of evidence. We are all likely familiar with Figure 1 below as the usual evidence-based pyramid.


Figure 1: Evidence Based Practice Hierarchy of Evidence from 1991 – 2004.


A common counter to evidence-based practice is that often there’s no evidence for or against “X” (often labeled an argument from ignorance, or absence of evidence). However, as the Evidence-Based Medicine Working Group points out:

“The hierarchy makes it clear that any statement to the effect that there is no evidence addressing the effect of a particular treatment is a non-sequitur [logically doesn’t follow]. The evidence may be extremely weak – the unsystematic observation of a single clinician, or generalization from only indirectly related physiologic studies – but there is always evidence.” Guyatt 2000

However, the idea that there is always evidence has risks of its own. It can lead to believing “do whatever you want and anything works”, whereas in research, investigators are often examining the continuum from efficacy to effectiveness. Briefly, efficacy is defined as “…performance of an intervention under ideal and controlled circumstances” and effectiveness equates to “…performance under ‘real-world’ conditions.” Singal 2014 However, as Singal et al discuss, we are likely not able to conduct a “pure” efficacy or “pure” effectiveness trial. With that said, hopefully well-conducted studies can control for as many confounders and secondary effects (e.g., placebo/nocebo contextual or meaning effects) as possible to distinguish outcomes/effects attributable to the intervention itself.


Much of this discussion will depend on context and the immediate threat to an individual or population, as we are seeing with COVID-19. I’m going to refrain from speaking too much on the current pandemic, as this would detract from our current discussion, but as global society searches for ways to either treat those with the infection or minimize the risk of disease, we’ve been inundated with mixed information and claims about treatments to start ahead of clinical studies (e.g., hydroxychloroquine). The usual trope is “What’s the harm?”, while an adequate weighing of risks versus benefits is overlooked.


Why do we need evidence?

There are two major reasons worth focusing on when it comes to the need for evidence:

  1. Move us closer to “Truth” – in our context, what we should be doing as clinicians, coaches, and individuals with respect to a particular recommendation or intervention

  2. Minimize the anchoring effect – The anchoring effect occurs when we hinge ourselves to a single construct, idea, or piece of evidence and make future decisions based on that original information, even if new or contradictory information arises along the way. Said differently, all future decision making uses the original anchor as its context rather than exploring other pieces of evidence to reframe and update our beliefs. One way of thinking about this relates to my doctoral education, which was heavily anchored to the construct of vertebral subluxations. As a clinician, this would be a very easy anchor to use as a lens for viewing patient complaints – searching for the vertebral subluxation to correct and thus achieve an end goal (e.g., pain relief or, as the old adage goes, “lack of dis-ease”).


Why research evidence?

Above we said that almost any observation may serve as potential evidence regarding the relationship between two events. There are underlying philosophical concepts here, but I will try to keep this discussion pragmatic.


We use our observations and information gained from our senses on a daily basis. An easy example would be waking in the morning to the sounds of water droplets hitting the roof of your house and thinking – it must be raining outside. You then walk to the window by your bed, peer out to find a grey sky, water droplets falling, and pooling water on the ground – your observations further confirm your suspicion that it is indeed raining outside. You then use the sensory information you gained (i.e., empirical data) to make future decisions. Perhaps you forgo walking or cycling to work today, and instead drive. Maybe you wear a rain jacket and bring an umbrella.


Where this can get interesting is when you suspect rain but don’t have direct observable sensory information that it is currently raining; instead, you check the weather app on your iPhone and find a 60% chance of rain. How does this affect your decision making? When weighing the risks versus benefits of your decision in this situation, the risk is restricted (for the most part) to the individual level – for example, if you decide to forgo the rain jacket or umbrella and subsequently get wet (obviously, others may suffer if you then become upset about this and are bad tempered for the remainder of the day). However, what happens if we transpose this to the clinical setting, where decisions affect more than just you?
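To make that weighing concrete, here is a minimal expected-cost sketch. Only the 60% probability comes from the example above; the action names and cost values are hypothetical placeholders chosen purely for illustration.

```python
# Minimal expected-cost sketch for the rain example.
# The 60% probability comes from the scenario above; every cost value below
# is a hypothetical placeholder chosen only to illustrate the arithmetic.

P_RAIN = 0.60

# Hypothetical "costs" (arbitrary units of inconvenience).
COST_CARRY_UMBRELLA = 1.0   # hauling it around all day
COST_GET_WET = 5.0          # soaked clothes, bad mood

def expected_cost(action: str) -> float:
    """Expected inconvenience of an action given the chance of rain."""
    if action == "bring umbrella":
        # You pay the carrying cost whether or not it rains.
        return COST_CARRY_UMBRELLA
    if action == "leave umbrella":
        # You only pay the "get wet" cost if it actually rains.
        return P_RAIN * COST_GET_WET
    raise ValueError(f"unknown action: {action}")

for action in ("bring umbrella", "leave umbrella"):
    print(f"{action}: expected cost = {expected_cost(action):.2f}")

# With these made-up numbers, bringing the umbrella (1.0) beats leaving it
# behind (0.6 * 5.0 = 3.0) -- but change the costs and the decision can flip.
```

With these made-up numbers, carrying the umbrella is the cheaper bet, but shift the costs and the preferred action flips – the forecast alone decides nothing until it is weighed against consequences.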


A patient may present with acute onset low back pain; what do you do next? Obtain radiologic imaging? What are the risks versus benefits of imaging? What is the probability you will find something on imaging, and how does that “something” fit into the context of the patient’s case? Suddenly, our decision making affects more than ourselves (yes, ideally this is shared decision making). Layer in our framework for clinical practice (e.g., biomedical vs. biopsychosocial vs. some other construct), and perhaps we decide we must peer beneath the surface with imaging to find spinal alterations we’ve dubbed “degenerative changes”. What’s the harm in such narratives, and is this information more harmful or beneficial for the individual’s management and prognosis?
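One hedged way to think about “what’s the probability you will find something, and what does it mean” is a simple Bayes-style sketch. None of the numbers below come from real data; the prior, sensitivity, and specificity are hypothetical placeholders used only to show how a finding that is common even when it is not the cause of pain can tell you very little.

```python
# Hedged illustration: how base rates shape what an imaging "finding" means.
# All numbers below are hypothetical placeholders, not real clinical data.

def posterior_probability(prior: float, sensitivity: float, specificity: float) -> float:
    """Bayes' theorem: P(relevant pathology | positive finding)."""
    p_pos_given_present = sensitivity
    p_pos_given_absent = 1.0 - specificity
    numerator = prior * p_pos_given_present
    denominator = numerator + (1.0 - prior) * p_pos_given_absent
    return numerator / denominator

# Hypothetical scenario: the "finding" (e.g., some degenerative change) shows
# up often even in people whose pain it does not explain, so a positive image
# discriminates poorly.
prior = 0.10          # assumed prior that the finding explains this patient's pain
sensitivity = 0.90    # assumed chance imaging shows the finding when it is the cause
specificity = 0.40    # assumed chance imaging is "clean" when it is not the cause

print(f"P(finding explains the pain | positive image) = "
      f"{posterior_probability(prior, sensitivity, specificity):.2f}")
# With these made-up numbers the posterior is only ~0.14: the image "found
# something", but that something barely changes the clinical picture.
```

The point of the sketch is the structure, not the numbers: when a finding is common even in people whose pain it does not explain, a “positive” image shifts the picture far less than it intuitively feels like it should.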


Suppose we decide that the individual simply has “inflammation” and needs an anti-inflammatory, or perhaps we believe a vertebra is misaligned and needs to be “put back in place”. What supportive or contradictory evidence do we have for these interventions, and has the clinician reviewed this evidence? Herein lies the purpose of Evidence Based Practice.


Unfortunately, as highly evolved as we are as humans, we come with our own set of cognitive biases that influence our beliefs and daily decision making. It is extremely unlikely we will rid ourselves of these biases, BUT if we are aware of them, perhaps we can minimize their effect on our lives. A major rationale for research evidence is that it helps stifle these individual cognitive biases through systematic, controlled, and hopefully replicated research studies. I freely admit that there are systemic issues throughout the conduct of science via research. The major flaw is that humans are the ones conducting science, and as we will find, our observations and thinking aren’t always the most reliable.


However, to the best of my knowledge, this remains our primary method for learning about and understanding the world around us — and in the clinical context, helping make more informed decisions in collaboration with patients.


In the next part of this series we will discuss cognitive biases and logical fallacies, in the hope that discussing them will make us more aware of them. As a caveat, these aren’t meant to be wielded against others as finger pointing in discussions – “Ha, I got you – that is an ‘X’ fallacy – CHECKMATE” – as is often seen on the internet.


Stay tuned!


Key Takeaways:

  1. Although we all hold various beliefs about the world, we should reflect on the evidence we use to substantiate those beliefs.

  2. It’s important to recognize a hierarchy of evidence while also weighing the quality of evidence being utilized for decision making.

References:

  1. Offit, Paul A. Bad Advice: Or Why Celebrities, Politicians, and Activists Aren’t Your Best Source of Health Information. Columbia University Press, 2018.

  2. Guyatt G. Evidence-Based Medicine. JAMA. 1992;268(17):2420.

  3. Oxman AD. Users’ Guides to the Medical Literature. JAMA. 1993;270(17):2093.

  4. Guyatt GH, Haynes RB, Jaeschke RZ, et al. Users’ Guides to the Medical Literature. JAMA. 2000;284(10):1290.

  5. Singal AG, Higgins PDR, Waljee AK. A Primer on Effectiveness and Efficacy Trials. Clinical and Translational Gastroenterology. 2014;5(1):e45.
