Tarnished Gold Chapter 4: Beating The Odds

Finally, a chapter I somewhat agree with.

This chapter discusses the difficulty of understanding probability. The examples they use aren't good analogies for clinical probabilities, but they are interesting nonetheless.

[Image: quote, "It's all relative" — via QuoteAddicts.com]

I'll focus on what I agree with for this post. They discuss the misleading nature of reporting relative risks (and relative risk reductions) in research reports. This is a real problem, as clinicians often don't understand that while the relative risk/benefit of an intervention is fairly constant across patient subgroups, the absolute benefits aren't. In general, if something is beneficial, the sicker you are the more benefit you gain. For example, let's say a treatment has a relative risk reduction for death in the next year of 75% (RR of 0.25), and we are seeing two patients. One has a risk (or probability) of death of 50% without the intervention; the other has a risk of death of 10%. If patient one is given the treatment, her risk is reduced from 50% to 12.5% (to see how I did this, watch this video). If patient two is given the treatment, his risk is reduced from 10% to 2.5%. So the absolute benefit is greater for patient one (37.5%) than for patient two (7.5%), even though the relative benefit is the same (75%). This is often a difficult concept for physicians, but once mastered it is a useful way to discuss the benefits and harms of a proposed intervention with patients. Furthermore, it's patient specific. To get the probability of an outcome for an individual patient, you could use a validated clinical prediction rule, the placebo rate from a trial, the results of studies of disease frequency (though these are rare), or, as a last-ditch effort, guesstimation.
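The arithmetic above can be sketched in a few lines of Python. The function names are mine, not from any library; the NNT (number needed to treat, the reciprocal of the absolute risk reduction) is not in the text above but is the standard EBM companion to this calculation:

```python
def treated_risk(baseline_risk, relative_risk):
    """Risk with treatment = baseline risk x relative risk (RR)."""
    return baseline_risk * relative_risk

def absolute_risk_reduction(baseline_risk, relative_risk):
    """ARR = baseline risk minus the risk with treatment."""
    return baseline_risk - treated_risk(baseline_risk, relative_risk)

# The two hypothetical patients from the text, with RR = 0.25 (a 75% RRR):
for baseline in (0.50, 0.10):
    arr = absolute_risk_reduction(baseline, 0.25)
    print(f"baseline risk {baseline:.0%}: "
          f"treated risk {treated_risk(baseline, 0.25):.1%}, "
          f"ARR {arr:.1%}, NNT {1 / arr:.1f}")
```

Running this reproduces the numbers in the paragraph: the same 75% relative reduction yields an absolute reduction of 37.5% for the sicker patient but only 7.5% for the healthier one.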

Overcoming Probability Inflation

Benjamin Roman, MD, MSPH wrote a wonderful piece in this week's New England Journal of Medicine. It might not get read much because it is listed way down the table of contents, but I think it is more clinically important than anything else in the journal this week. He tells his own story of having sudden sensorineural hearing loss and agreeing to an MRI even though the probability of a serious cause was low, the cost of the test (MRI) was high, and the benefit of treatment was minimal (in fact, many patients don't need treatment). Furthermore, he is an ENT physician and knows all this, but he underwent testing anyway, mainly because his wife wanted him to!

He outlines an important problem in medicine for both physicians and patients: probability inflation.

"This problem arises from the way we deal emotionally (emphasis added) with risk and uncertainty, which are givens in health care, and the way we make decisions in the face of low-probability outcomes."

Emotions are a large part of the problem: the affect heuristic. When we make decisions, we often consider them analytically but also from the standpoint of how we feel. If we have positive feelings about the situation, we magnify the probability of benefit or, conversely, reduce the magnitude of harm. Think about Dr. Roman's situation. He was (or at least his wife was) worried about something bad happening (i.e., having an acoustic neuroma) but understood that was pretty unlikely. But what if he didn't do the MRI and actually had a treatable tumor that would be missed? He had strong feelings (or at least his wife did) that he didn't want to miss an acoustic neuroma. Or maybe he would be relieved not to find one (that's a strong positive emotion, isn't it?) if the MRI was negative (assuming the sensitivity is good enough). Thus, the acoustic neuroma's probability becomes artificially inflated. He probably didn't even think about the downstream effects of finding one and the risks associated with surgery or radiation (which, if I had to guess, probably outweigh the benefits of finding it).

Many of us fear the uncertainty almost more than the disease itself. We want to know, even if we can't act on the information. We also like doing something; at least we will go down fighting. This affects both physicians and patients. We order things we shouldn't. Patients request things they shouldn't. Sometimes it's because of poor reasoning skills; the affect heuristic gets us. Sometimes it's more practical, as Dr. Roman notes:

"My doctor's recommendation was based on a similar reaction. Besides wanting to reassure himself and his patients that there is no acoustic neuroma, he told me, another reason he suggests MRIs in situations like mine is that he fears being sued should he fail to order one and end up missing something. He noted that court malpractice awards for missed acoustic neuromas commonly reach into the millions of dollars and that until we agree to an acceptable miss rate and physicians are no longer liable for missing just a single such case, their practices will not change. I'm not sure how common such verdicts are, but this rationale also reflects risk aversion in the face of a low-probability bad event — it's simply the doctor's risk that's at issue, rather than the patient's." (emphasis added)

That last statement is telling. It’s a shame so much of medicine revolves around covering our proverbial asses.

Dr. Roman offers some solutions:

  1. comparative effectiveness and outcomes research (this exists for many things but gets ignored)
  2. educating doctors about how to discuss uncertainty, risk, and probability (First, doctors need to be taught these principles before they can teach anyone else. I see firsthand on a daily basis how little of this is understood)
  3. addressing the emotions and psychology of patients and physicians (good luck dealing with emotions… anyone have a teenage child?)
  4. nudging each other to do the right thing
    • consumers share cost of things they want that are marginal (good idea for sure)
    • government (either local or national) regulation (Hell no! More bureaucracy is not needed and will only raise costs even more)

As Dr. Roman points out, all of these need to be done, but the devil is in the details: HOW? I think the focus of these solutions is a societal or community perspective, while physicians mainly feel a duty to only one individual: the one sitting in front of them. That relationship is powerful and affects decision making.

My dad had advanced dementia and fell in his bathroom, suffering a tibial plateau fracture. The surgeon wanted to fix it surgically, as this would give my dad the best chance to walk (though he couldn't actually tell me the probability). The only other option was splinting and rehab. Thankfully, I know enough about dementia, and specifically my dad's dementia, to know he would never be able to participate in rehab, and I knew he would never be able to keep the wound clean and stay off his leg until it healed. I decided against surgery and opted for rehab and splinting. My dad never walked again. He couldn't understand how to do rehab or use a walker. I think I made the right decision, because I believe the ultimate outcome would have been the same either way: not walking. But I have no way of knowing; it was a decision under uncertainty. I saved his insurance and Medicare a lot of money, though that wasn't my goal. My goal was to maximize outcomes in the most resource-sensitive way that would harm my dad the least, and I felt surgery would be more harmful than no surgery. Should the surgeon have even offered surgery? Should he have just said that splinting was best for someone like my dad with advanced dementia? When he offered surgery, did he really think it would help, or was it because he is a surgeon and that's what surgeons do?

Like all complex problems, the solutions are equally complex, if not more so. I will continue to do my small part, educating whom I can on EBM principles, in the hope that a few of my learners will make good decisions.