I wonder how much EBM is really practiced out there

WARNING: a lot of cynicism in this post.

I have been revamping the EBM course that I teach at the medical school. In doing so, I've realized that we (the collective EBM teachers of the world) teach knowledge and skills that, I suspect, are rarely used once our doctors are out of residency.

Who really develops a PICO question in the clinical setting (outside of an academic center)?  Who is really doing database searches? (I think everyone just goes to Google, UpToDate or Dynamed and doesn’t care if studies are potentially missed.) How many critically appraise the primary literature? (Don’t most probably just read the conclusions from the abstract? or assume the study is good?) How many really understand how to “manipulate” findings of a study to adapt them to the patient they are seeing?

I know this seems like a negative post, but practicing EBM is hard. It is a complex task that takes time and feedback to master. Once you leave training, you get little feedback on your EBM skills. So those skills wane, and all clinicians can do is keep practicing as they have been: relying on experience, the collective knowledge of consultants, and Dr. Google. But how badly have they served their patients by doing this? Probably not all that badly.

As an educator I feel these skills are important and I think I have designed my course to provide the best chance for students to remember the material. But I don’t know how to convince practicing docs that they need to keep brushing up on EBM skills. I also don’t know what I would tell them if they asked “Well how do you want me to brush up on my EBM skills?” EBM skills should probably be a reasonably important part of the MOC process. Aren’t these skills key to actually keeping up?

Now it's your turn. Tell me where I'm wrong, and what should practicing docs do?

SPRINT Trial Misunderstood and Misapplied, Part 1 (Not Knowing Who's in the Study)

The SPRINT Trial was an important addition to the evidence base in hypertension. Previous studies had shown that intensive BP lowering in patients with type 2 diabetes (<120 vs <140 mm Hg) and in patients with a previous stroke (<130 vs <150 mm Hg) produced no significant benefit in major cardiovascular events (except for stroke in diabetics). The natural question arose: does intensive BP control benefit patients without diabetes or previous stroke? This became even more important as JNC-8 recommended less stringent goals than previous JNC guidelines.

Unfortunately, I have seen physicians I work with and residents become overzealous in extending SPRINT results to other patient groups, especially those the trial excluded. Interestingly, when I question them about who was actually studied in SPRINT, they either assumed it was all patients with HTN (because they hadn't actually read the inclusion/exclusion criteria) or knew who it was restricted to but assumed that higher-risk patients with diabetes or stroke would gain equal benefit (or even more, which seems intuitive).

So my first point is that you should actually read a study and know who was studied (and, importantly, who wasn't) before you start using it.

This seems like an intuitive statement but many of my colleagues and trainees simply haven’t closely examined the study. They have heard abbreviated results in conferences or from faculty in clinic and assume that it applies broadly.

So who was in SPRINT? To be included, a patient had to be at least 50 yrs of age, have a systolic BP of 130-180 mm Hg, and be at increased risk of cardiovascular disease (clinical or subclinical CVD, CKD with eGFR 20-59 ml/min, Framingham 10-yr risk >15%, or age over 75 yrs). Patients with diabetes and prior stroke were excluded. Let's see what they looked like by checking out Table 1.

sprint table 1

These patients had pretty good baseline blood pressures and were already on almost 2 antihypertensive meds at entry. They had fairly good lipid profiles, and around 43% were on statins. The majority were nonsmokers, with a mean 10-yr Framingham risk of about 20%. These patients are somewhat healthier than the patients I see.

Point 2: Compare patients in the study to who you see. Are they sicker or healthier? How would you adjust the results to fit your patients?

Don’t assume the study enrolled the average patient or that your patients will be just like those in the study.

In Part 2 I’ll analyze the intervention and outcome measures of the study.

 

What to do when evidence has validity issues?

I often wonder how different clinicians (and EBM gurus) approach the dilemma of critically appraising an article only to find that it has a flaw (or several). For example, a common flaw is lack of concealed allocation in a randomized controlled trial. Empirical studies show that the effects of experimental interventions are exaggerated by about 21% [ratio of odds ratios (ROR): 0.79, 95% CI: 0.66-0.95] when allocation concealment is unclear or inadequate (JAMA 1995;273:408-12).


So what should I do if a randomized trial doesn't adequately conceal the allocation scheme? I could discard the study completely and look for another one. But what if there isn't another study? Should I ignore the data from a study that is otherwise perfectly good? I could use the study but adjust the effect estimate toward the null by about 21% (see above for why), and if the effect of the intervention still crosses my clinically important threshold, implement the therapy. I could use the study as is and assume the flaw wasn't important because the reviewers and editors didn't think it was. This is foolish, as many of them probably didn't even recognize the flaw, nor would many of them understand its impact.

I don't have the right answer but wonder what more learned people do. I personally adjust the findings toward the null and then decide if I still want to use the information. The problem with this approach is that it assumes the estimate of effect in the particular study I am reviewing is in fact biased…something I can't really know.
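As a rough sketch of that adjustment, assuming the bias is multiplicative and that the trial in question is biased by the average amount (both loud assumptions, and the function name is mine), you can divide the reported odds ratio by the empirical ROR of 0.79 to shift it toward the null:

```python
def adjust_for_unconcealed_allocation(odds_ratio, ror=0.79):
    """Shift a trial's odds ratio toward the null by the average
    exaggeration (ratio of odds ratios ~0.79) observed empirically
    in trials with unclear or inadequate allocation concealment.
    Assumes a multiplicative bias of average size -- something we
    can't actually verify for any single study."""
    return odds_ratio / ror

# Hypothetical example: an unconcealed trial reports OR 0.60.
adjusted = adjust_for_unconcealed_allocation(0.60)
print(round(adjusted, 2))  # 0.76 -- still beneficial, less impressive
```

If the adjusted estimate still crosses your clinically important threshold, the flaw probably shouldn't change your decision.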

What do you do?

Is tinzaparin better than warfarin in patients with VTE and cancer or not?

The CATCH trial results were published this week in JAMA. The abstract is below. Do you think this drug is useful for venous thromboembolism (VTE) treatment?

Importance  Low-molecular-weight heparin is recommended over warfarin for the treatment of acute venous thromboembolism (VTE) in patients with active cancer largely based on results of a single, large trial.

Objective  To study the efficacy and safety of tinzaparin vs warfarin for treatment of acute, symptomatic VTE in patients with active cancer.

Design, Settings, and Participants  A randomized, open-label study with blinded central adjudication of study outcomes enrolled patients in 164 centers in Asia, Africa, Europe, and North, Central, and South America between August 2010 and November 2013. Adult patients with active cancer (defined as histologic diagnosis of cancer and receiving anticancer therapy or diagnosed with, or received such therapy, within the previous 6 months) and objectively documented proximal deep vein thrombosis (DVT) or pulmonary embolism, with a life expectancy greater than 6 months and without contraindications for anticoagulation, were followed up for 180 days and for 30 days after the last study medication dose for collection of safety data.

Interventions  Tinzaparin (175 IU/kg) once daily for 6 months vs conventional therapy with tinzaparin (175 IU/kg) once daily for 5 to 10 days followed by warfarin at a dose adjusted to maintain the international normalized ratio within the therapeutic range (2.0-3.0) for 6 months.

Main Outcomes and Measures  Primary efficacy outcome was a composite of centrally adjudicated recurrent DVT, fatal or nonfatal pulmonary embolism, and incidental VTE. Safety outcomes included major bleeding, clinically relevant nonmajor bleeding, and overall mortality.

Results  Nine hundred patients were randomized and included in intention-to-treat efficacy and safety analyses. Recurrent VTE occurred in 31 of 449 patients treated with tinzaparin and 45 of 451 patients treated with warfarin (6-month cumulative incidence, 7.2% for tinzaparin vs 10.5% for warfarin; hazard ratio [HR], 0.65 [95% CI, 0.41-1.03]; P = .07). There were no differences in major bleeding (12 patients for tinzaparin vs 11 patients for warfarin; HR, 0.89 [95% CI, 0.40-1.99]; P = .77) or overall mortality (150 patients for tinzaparin vs 138 patients for warfarin; HR, 1.08 [95% CI, 0.85-1.36]; P = .54). A significant reduction in clinically relevant nonmajor bleeding was observed with tinzaparin (49 of 449 patients for tinzaparin vs 69 of 451 patients for warfarin; HR, 0.58 [95% CI, 0.40-0.84]; P = .004).

Conclusions and Relevance  Among patients with active cancer and acute symptomatic VTE, the use of full-dose tinzaparin (175 IU/kg) daily compared with warfarin for 6 months did not significantly reduce the composite measure of recurrent VTE and was not associated with reductions in overall mortality or major bleeding, but was associated with a lower rate of clinically relevant nonmajor bleeding. Further studies are needed to assess whether the efficacy outcomes would be different in patients at higher risk of recurrent VTE.

When I approach a study with marginally negative results I consider several things to help me decide if I would still prescribe the drug:

  1. Was the study powered properly? Alternatively, were the assumptions made in the sample size calculation reasonable? Sample size calculations require several inputs, chiefly the desired power, the type 1 error rate, and the expected difference in event rates between the arms of the trial. The usual offender is authors overestimating the benefit they expect to see. These authors expected a 50% relative reduction in event rates between the 2 arms of the study. That seems high but is consistent with a meta-analysis of similar studies and the CLOT trial. They only saw a 31% reduction. Detecting that smaller difference would have required more patients, so the study is underpowered (post hoc power 41.4%).
  2. How much of the confidence interval is on the side of benefit? Most of the CI in this case is below 1.0 (0.41-1.03), so I pay more attention to this than to the p-value (0.07). The interval is consistent with as much as a 59% reduction in the hazard of VTE and, at worst, a 3% increase. That would be a clinically important reduction in VTE.
  3. What are the pros and cons of the therapy? Preventing VTE is important, and the risk of bleeding was lower with tinzaparin. Had the bleeding been higher, I might have had different thoughts about prescribing this drug.
  4. Are the results of this trial consistent with previous studies? If so, then I fall back on the trial being underpowered and would likely prescribe the drug. A meta-analysis of 7 studies found a similar reduction in VTE (HR 0.47).
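To see where a post hoc power figure like 41.4% comes from, here is a minimal sketch using the standard normal approximation for comparing two proportions, plugging in the observed 6-month cumulative incidences (7.2% vs 10.5%) and roughly 450 patients per arm. The trial's own calculation may have differed in its details; this is only meant to show the mechanics.

```python
from math import sqrt, erf

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def two_proportion_power(p1, p2, n_per_arm, z_alpha=1.96):
    """Approximate power to detect event rates p1 vs p2 with
    n patients per arm at two-sided alpha = 0.05 (z_alpha = 1.96)."""
    p_bar = (p1 + p2) / 2
    se_null = sqrt(2 * p_bar * (1 - p_bar) / n_per_arm)  # SE under H0
    se_alt = sqrt(p1 * (1 - p1) / n_per_arm + p2 * (1 - p2) / n_per_arm)
    return normal_cdf((abs(p1 - p2) - z_alpha * se_null) / se_alt)

# Observed CATCH event rates: 7.2% tinzaparin vs 10.5% warfarin
power = two_proportion_power(0.072, 0.105, 450)
print(round(power, 2))  # ~0.41: underpowered for the 31% reduction
                        # the trial actually observed
```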

Thus, I think the study was underpowered for the event rates encountered. Had more patients been enrolled, a statistically significant difference between groups would likely have been found, and I would not anticipate the results shifting from benefit to harm with more patients. The patients in this trial were likely “healthier” than those in previous trials. I am comfortable saying tinzaparin is likely beneficial, and I would feel comfortable prescribing it.

This demonstrates the importance of evaluating the confidence interval and not just the p-value. More information can be gleaned from the confidence interval than a p-value.

Do lipid guidelines need to change just because there is a new, expensive drug on the market? NO!

Shrank and colleagues published a viewpoint online today positing that lipid guidelines should return to LDL-based targets. I think they are wrong. They use two studies to support their assertion.

First they use the IMPROVE IT study. In this study, patients hospitalized for ACS were randomized to a combination of simvastatin (40 mg) and ezetimibe (10 mg) or simvastatin (40 mg) and placebo (simvastatin monotherapy). LDL levels were already pretty low in this group: baseline LDL cholesterol had to be between 50 and 100 mg per deciliter for patients receiving lipid-lowering therapy, or 50 to 125 mg per deciliter for those not on it (average baseline LDL was 93.8 mg/dl). The results show minimal benefits, as demonstrated below:

IMPROVE IT results

Current guidelines would recommend a high-potency statin in this patient population. Adding ezetimibe to a moderate-dose statin is probably equivalent to a high-potency statin (from an LDL-lowering perspective). This study (and all ezetimibe studies) should have tested the difference between simvastatin 40 mg-ezetimibe 10 mg and simvastatin 80 mg or atorvastatin 40 or 80 mg. So to me IMPROVE IT doesn't PROVE anything other than that a more potent statin leads to fewer cardiovascular events…something we already know.

Now on to the 2nd argument. They argue that alirocumab (Praluent), the first of a new class, the proprotein convertase subtilisin/kexin type 9 (PCSK-9) inhibitors, should lead us back to LDL-guided therapy. Why? “Early results suggest these drugs have a powerful effect on levels of low-density lipoprotein cholesterol (LDL-C), likely more potent than statins“. A systematic review of studies of this drug shows a mortality reduction, but the comparators in these studies were placebo or ezetimibe 10 mg. Why? We have proven therapy for LDL, and this drug should have been compared to high-potency statins. That study will likely never be done (unless the FDA demands it) because the companies making this drug can't risk finding that it works only as well as a high-potency statin, or possibly worse. Also, does this class of drugs have anti-inflammatory effects like statins? Are they safer? This is an injectable drug that has to be warmed to room temperature prior to use and is very costly compared to generic atorvastatin.

In my opinion, no guideline should be changed without appropriately designed outcomes studies for the drugs being recommended. In this case, the risk-benefit margin needs to be impressive to justify the cost as we have dirt cheap potent statins already.

The authors of this viewpoint make no great rational argument for guideline change other than that there is a new drug on the market and it might work. Let's see if it does, and at what cost (both monetary and physiological).

How to calculate patient-specific estimates of benefit and harm from an RCT

One of the more challenging concepts for students is how to apply information from a study to an individual patient. Students have been taught how to calculate a number needed to treat (NNT) but that isn’t often very useful for the current patient they are seeing. Usually our patients are sicker or healthier than those in the study we are reading. Studies include a range of patients so the effect we see in the results is the average effect for all patients in the study.

Imagine you are seeing Mr. Fick, a 70 yo M with ischemic cardiomyopathy (EF 20%) and refractory anemia (baseline Hg 7-10 g/dl). He reports stable CHF symptoms of dyspnea after walking about 30 ft around the house. Other signs and symptoms of CHF are stable. Medications include lisinopril 20 mg bid, aspirin daily, furosemide 80 mg daily, and iron tablets daily. He is not taking a beta blocker due to bradycardia and can't take a statin due to myopathy. He has refused an ICD in the past. BP is 95/62 mm Hg, pulse is 50 bpm, weight is stable at 200 lbs. Labs done one week earlier show a stable Na of 125 mmol/l, K 3.8 mmol/l, Hg 8 g/dl, platelets 162 k, WBC normal with 22% lymphs on differential, cholesterol 220 mg/dl, and uric acid 6.2. Since he has severe CHF you are considering adding spironolactone to his regimen. He is concerned because he has a hard time tolerating medications. He wants to know how much it will help him. What do you tell him?

The figure below is from the RALES trial, a study of spironolactone in patients with advanced CHF. Use it to estimate Mr. Fick's individual risk of death if he agrees to take spironolactone.

RALES figure

I will demonstrate 4 methods to calculate a patient-specific estimate of effect from an RCT. First, think about what information you will need to estimate Mr. Fick's specific benefit from spironolactone: the NNT from the RALES trial and Mr. Fick's estimated risk of death (the PEER, or patient expected event rate). Where do we get Mr. Fick's PEER for death? You use a validated prediction rule. I use Calculate by QxMD. Look in the Cardiology folder under heart failure and open the Seattle Heart Failure Model. Plug in Mr. Fick's data and you get his 1-year expected risk of death (56%).

Method 1: Calculate a patient-specific NNT using the PEER: the formula is 1 / (PEER x RRR), where RRR is the relative risk reduction from the RALES trial (30%; RRR = 1 - RR). Plugging that in, Mr. Fick's NNT is 1 / (0.56 x 0.3) = 6 (the NNT from the RALES trial itself is 9).
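Method 1 can be sketched in a couple of lines (the numbers are from the post; the helper name is mine):

```python
def patient_specific_nnt(peer, rrr):
    """Patient-specific number needed to treat:
    NNT = 1 / (PEER x RRR), where PEER is the patient's expected
    event rate and RRR the relative risk reduction from the trial."""
    return 1 / (peer * rrr)

# Mr. Fick: PEER 56% (Seattle Heart Failure Model), RRR 30% (RALES)
nnt = patient_specific_nnt(0.56, 0.30)
print(round(nnt))  # 6 -- versus the trial-average NNT of 9
```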

Method 2: Estimate a patient-specific NNT using f: f is what I call the fudge factor. It is your guesstimate of how much higher or lower Mr. Fick's risk of death is than that of the average patient in the study. If you say he is 2 times more likely to die, then f is 2. If you think he is half as likely, then f is 0.5. To use f, divide the study NNT by f; this gives an estimate of Mr. Fick's NNT. So let's say Mr. Fick is twice as likely to die as those in the study. The NNT of the study is 9, so 9/2 is 4.5, which I would round up to 5.
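Method 2 is just as easy to sketch (again, the function name is mine):

```python
def nnt_with_fudge_factor(study_nnt, f):
    """Adjust a study NNT by f, your estimate of how many times
    more (f > 1) or less (f < 1) likely your patient is to have
    the event than the average study patient."""
    return study_nnt / f

# Assume Mr. Fick is twice as likely to die as the average RALES patient
nnt = nnt_with_fudge_factor(9, 2)
print(nnt)  # 4.5 -- round up to 5
```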

NNTs are nice, but it's hard to use them directly with a patient. The next 2 calculations are more useful for patients.

Method 3: Use the RR to calculate Mr. Fick's expected risk of death: the RR of death in the RALES trial is 0.70. Multiply this by his estimated death rate and you get his expected risk of death on spironolactone instead of nothing. His baseline risk of death is 56%, so 0.70 x 0.56 = 39%. If Mr. Fick takes spironolactone, I expect his risk of death to go from 56% down to 39%. That's useful information to tell the patient.

Method 4: Use the RRR to calculate Mr. Fick's expected risk of death: This is similar to the concept above, except you have to remember that the RRR (relative risk reduction) is relative. First calculate how much risk is removed by the treatment. The RRR is 30% (RRR = 1 - RR). Multiply this by the patient's risk of death: 0.30 x 0.56 = 0.168. This 16.8% is how much absolute risk the treatment removes from the baseline risk. Subtract it from the baseline risk to get his final risk: 0.56 - 0.168 = 0.39, or 39%. The same number as Method 3, as it must be: it's just a different way of calculating the exact same thing.
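Methods 3 and 4 can be sketched side by side to show why they must agree (since RR = 1 - RRR; helper names are mine):

```python
def risk_on_treatment_via_rr(baseline_risk, rr):
    """Method 3: treated risk = baseline risk x relative risk."""
    return baseline_risk * rr

def risk_on_treatment_via_rrr(baseline_risk, rrr):
    """Method 4: treated risk = baseline risk minus the absolute
    risk removed (baseline risk x RRR)."""
    return baseline_risk - baseline_risk * rrr

# RALES: RR 0.70 (so RRR 0.30); Mr. Fick's baseline risk 56%
m3 = risk_on_treatment_via_rr(0.56, 0.70)
m4 = risk_on_treatment_via_rrr(0.56, 0.30)
print(round(m3, 2), round(m4, 2))  # 0.39 0.39 -- identical, as expected
```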

I hope this is useful; now you can give patients some real numbers instead of just saying “your risk is decreased by x%.”

Remember, you need the patient's risk of the event without treatment (usually from a prediction rule, or perhaps the placebo event rate of the study or of a subgroup) and the event rates from the study. You can make all the calculations from there.

Overcoming Probability Inflation

Benjamin Roman, MD, MSPH wrote a wonderful piece in this week’s New England Journal of Medicine. It might not get read much because it is listed way down the table of contents but I think it is more clinically important than any other piece in the journal this week. He tells of his own story of having sudden sensorineural hearing loss and agreeing to an MRI even though the probability of him having a serious cause of the problem was low, the cost of the test (MRI) was high, and the benefit of treatment was minimal (in fact, many don’t need treatment). Furthermore, he is an ENT physician and knows all this but still underwent testing anyway- mainly because his wife wanted him to!

He outlines an important problem in medicine for both physicians and patients: probability inflation.

This problem arises from the way we deal emotionally (added for emphasis) with risk and uncertainty, which are givens in health care, and the way we make decisions in the face of low-probability outcomes.

Emotions are a large part of the problem; this is the affect heuristic. When we make a decision we consider it analytically but also from the standpoint of how we feel about it. If we have positive feelings about the situation, we magnify the probability of benefit or, conversely, minimize the magnitude of harm. Think about Dr. Roman's situation. He (or at least his wife) was worried about something bad happening (i.e., having an acoustic neuroma) but understood that was pretty unlikely. But what if he didn't get the MRI and actually had a treatable tumor that would be missed? He had strong feelings (or at least his wife did) that he didn't want to miss an acoustic neuroma. Or maybe he would be relieved not to find one (that's a strong positive emotion, isn't it?) if the MRI was negative (assuming the sensitivity is good enough). Thus, the probability of acoustic neuroma becomes artificially inflated. He probably didn't even think about the downstream effects of finding one and the risks of surgery or radiation (which probably outweigh the benefits of finding it, if I had to guess).

Many of us fear the uncertainty almost more than the disease itself. We want to know even if we can't act on the information. We also like doing something; at least we will go down fighting. This affects both physicians and patients. We order things we shouldn't. Patients request things they shouldn't. Sometimes it's because of poor reasoning skills; the affect heuristic gets us. Sometimes it's more practical, as Dr. Roman notes:

“My doctor's recommendation was based on a similar reaction. Besides wanting to reassure himself and his patients that there is no acoustic neuroma, he told me, another reason he suggests MRIs in situations like mine is that he fears being sued should he fail to order one and end up missing something. He noted that court malpractice awards for missed acoustic neuromas commonly reach into the millions of dollars and that until we agree to an acceptable miss rate and physicians are no longer liable for missing just a single such case, their practices will not change. I'm not sure how common such verdicts are, but this rationale also reflects risk aversion in the face of a low-probability bad event — it's simply the doctor's risk that's at issue, rather than the patient's. (emphasis added)”

That last statement is telling. It’s a shame so much of medicine revolves around covering our proverbial asses.

Dr. Roman offers some solutions:

  1. comparative effectiveness and outcomes research (this exists for many things but gets ignored)
  2. educating doctors about how to discuss uncertainty, risk, and probability (First, doctors need to be taught these principles before they can teach anyone else. I see first hand on a daily basis how little of this is understood)
  3. addressing emotions and psychology of patients and physicians (good luck dealing with emotions… anyone have a teenage child?)
  4. nudging each other to do the right thing
    • consumers share cost of things they want that are marginal (good idea for sure)
    • government (either local or national) regulation (Hell no! More bureaucracy is not needed and will only raise costs even more)

As Dr. Roman points out, all of these need to be done, but the devil is in the details. How? I think these solutions take a society or community perspective, while physicians mainly feel a duty to one individual: the one sitting in front of them. That relationship is powerful and affects decision making.

My dad had advanced dementia and fell in his bathroom, suffering a tibial plateau fracture. The surgeon wanted to fix it surgically, as this would give my dad the best chance to walk (though he couldn't actually tell me the probability). The only other option was splinting and rehab. Thankfully, I know enough about dementia, and specifically my dad's dementia, to know he would never be able to participate in rehab, keep the wound clean, or stay off his leg until it healed. I decided against surgery and opted for rehab and splinting. My dad never walked again. He couldn't understand how to do rehab or use a walker. I made the right decision because I think the ultimate outcome would have been the same either way: not walking. I have no way of knowing. It was a decision under uncertainty. I saved his insurance and Medicare a lot of money, but that wasn't my goal. My goal was to maximize outcomes in the most resource-sensitive way that would harm my dad the least, and I felt surgery would be more harmful than no surgery. Should the surgeon have even offered surgery? Should he have just said that splinting was best for someone like my dad with advanced dementia? When he offered surgery, did he really think it would help, or was it because he is a surgeon and that's what surgeons do?

Like all complex problems the solutions are equally if not more complex. I will continue to do my small part of educating who I can on EBM principles and hopefully a few of my learners will make good decisions.

 

Why do clinicians continue medications with questionable benefit in advanced dementia?

A recent study in JAMA Internal Medicine estimated the prevalence of medications with questionable benefit among nursing home residents with advanced dementia. This is an important question because significant healthcare resources are utilized in the last 6 months of life. Furthermore, if there is no benefit, then the only possible outcomes are excess cost, with or without harm. As the authors note, most patients at this stage just want comfort care and maximization of quality of life.

The researchers used a nationwide long-term-care pharmacy database to study the use of medications deemed of questionable benefit by nursing home residents with advanced dementia. A panel of geriatricians and palliative medicine physicians defined a list of medications of questionable benefit when the patient's goal of care is comfort; it included cholinesterase inhibitors, memantine hydrochloride, antiplatelet agents (except aspirin), lipid-lowering agents, sex hormones, hormone antagonists, leukotriene inhibitors, cytotoxic chemotherapy, and immunomodulators.

Overall, 53.9% of nursing home residents with advanced dementia were prescribed at least 1 questionably beneficial medication during the 90-day observation period, with cholinesterase inhibitors (36.4%), memantine hydrochloride (25.2%), and lipid-lowering agents (22.4%) most commonly prescribed. Patients residing in facilities with a high prevalence of feeding tubes were more likely to be prescribed these medications.

Among residents who used at least 1 questionably beneficial medication, mean (SD) 90-day drug expenditure was higher ($2317 [$1357]; IQR, $1377-$2968, compared with $1815 for all residents), and 35.2% of it was attributable to medications of questionable benefit (mean [SD], $816 [$553]; IQR, $404-$1188).


I think this study demonstrates excess medication usage and excess costs in a population in which costs are already high and for which this added cost is of little benefit. So why are these medications continued in this population? Are physicians unaware of the lack of benefit of these medications in this population? Are they aware but worried that stopping them will make the patient worse? I suspect a little of both is the correct answer.

A major challenge for EBM is getting the E out there. Numerous resources are available but a major step in accessing a resource is recognizing a knowledge deficit. How do you know you don’t know something? Pushing evidence (for example by email) is useful for general knowledge but isn’t useful to answer specific questions. Most of the push email services require enrollment to get the emails and many clinicians probably don’t even know this exists. Maintenance of certification could be a useful tool to improve knowledge if it was designed properly but can’t cover everything for all clinicians.

So we have a dilemma. Studies like this identify areas for improvement in knowledge and practice, but there are no great practical ways to improve either in the nursing home setting. Clinical reminders still have to be acted upon. Payers could refuse to pay for certain services, but docs will likely continue to order them, with the patient picking up the bill. Protocols could be put in place, but they have to be agreed upon and followed by clinicians, all of whom will have anecdotal evidence of grandpa getting worse when his cholinesterase inhibitor was stopped. And they will ask, “What's the harm in continuing it?”

Affect Heuristic, COI, or Lack of Knowledge? Why Do Cardiologists Overestimate Benefits of PCI in Stable Angina?

A recent study in JAMA Internal Medicine by Goff and colleagues made me wonder whether the Cardiologists studied are uninformed of the limited benefits of stenting (PCI) for chronic stable angina, have too strong a conflict of interest due to economic gain, or are swayed by the affect heuristic. Probably a mixture of all three. The COURAGE Trial taught us that PCI was better than medical therapy at reducing anginal symptoms but no better at reducing MI and death.

Goff and colleagues reviewed 40 recordings of actual encounters between Cardiologists and patients being considered for cardiac catheterization and PCI. I am unsure if these were video or audio recordings; as best I can tell, these were all private-practice Cardiologists. The Cardiologists either implicitly or explicitly overstated the benefits of angiography and PCI. They presented medical therapy as inferior to angiography and PCI (a statement that defies the findings of the COURAGE trial). In fact, in only 2 of the encounters did they state that PCI would not reduce the risk of death or MI. These Cardiologists also didn't use communication styles that encourage patient participation in decision making.

Why might these Cardiologists do this? They could be uninformed of the limited benefits of PCI in stable angina, but I doubt it. COURAGE was a landmark publication in one of the world’s most prominent medical journals. I find it hard to believe that Cardiologists wouldn’t be aware of the results of this trial.

They certainly have a financial stake in their recommendations. The image below shows that a diagnostic cath is reimbursed at approximately $9,000, while a PCI with DES is reimbursed at approximately $15,000. That has to have an impact on decision making. I don't accuse these Cardiologists of doing procedures only for the money, but subconsciously this plays a role. Recommending medical therapy only gets you an office-visit reimbursement (maybe $200 or so).

What about the affect heuristic? My colleague Bob Centor writes about this often in his blog. A heuristic is a quick little rule we use to make decisions. The affect heuristic is a particular rule based on our emotions about a topic: Do I like it? Do I hate it? How strongly do I feel about it? The affect heuristic lets the answer to an easy question (How do I feel about something?) serve as the answer to a much harder question (What do I think about something?). It's not hard to imagine (and data in the Goff paper support this) a Cardiologist feeling that PCI is beneficial and should be done. They are emotionally tied to angiography and PCI…they have seen patients “saved” by the procedure.

So what can be done? The solution is harder than identifying the problem (as is often the case). The easiest solution is for insurance companies to stop reimbursing for the procedure in stable angina unless patients have failed optimal medical therapy, but this is draconian. I also worry that patients would then receive bills for unreimbursed catheterization charges. I think using the technology that was used in this study combined with feedback could be useful but is logistically impossible. I have always wondered why we don’t use secret shopper fake patients to evaluate physician skills and knowledge (of course the answer is a logistical one) instead of the MOC system. Just publishing a study doesn’t work if physicians don’t read it or if they don’t use that study to answer a clinical question. Patient decision aids (like this excellent example) could be very useful, but the physician would have to use the tool and many don’t even know they exist.

Some would argue EBM has failed again. A well done study was published and it hasn’t made a difference. The principles of EBM have not failed and in fact, if they were used, could limit the inappropriate use of PCI in stable angina patients. What has failed is the desire of the older Cardiologists in this study to learn and use these skills. Like many physicians, they rely on outdated knowledge and emotions or beliefs. As stated by Bob Centor in a post about the affect heuristic: “Decision making bodies have biases. Until they understand their biases, we will have the problem of unfortunate, unnecessary and potential dangerous unintended consequences.” In this case the Cardiologists are the decision-making bodies and the unintended consequences are the MIs, strokes, renal injury, and death that can and do occur from cardiac catheterization.

Hopefully you are now aware of what the affect heuristic is and how it impacts decision making. Acknowledge it and separate your feelings about a topic from the data. Your patients will benefit.

Knowing In Medicine

Or perhaps a more apt title would be “Not Knowing that we Don’t Know in Medicine”. A colleague and I gave a lecture last week on EBM in the context of how we know what to do in medicine. We pointed out that there are 4 general ways that we know what we know in medicine: authority, clinical experience, pathophysiological rationale, and systematic investigation. I serendipitously read an article late last week in a great new series in JAMA called JAMA Diagnostic Test Interpretation. I thought about all the times I used ammonia levels incorrectly and all the times my colleagues and residents used ammonia incorrectly. Why? Was I too lazy to evaluate the literature in this area? Admittedly I hadn’t, but it didn’t even occur to me to do so because during my training my senior residents and attendings told me to check ammonia levels in patients who we suspected had hepatic encephalopathy as it “would help make the diagnosis”. 20 years later an epiphany has made me realize I have been doing the wrong thing for all these years. I didn’t know jack about the limitations of ammonia in chronic liver disease.

Why did I trust authority? Was it because John Mellencamp sang “I fought authority and authority always wins” so I didn’t question what I was told to do? I wonder how many other things like this I do with no idea that I am doing it all wrong. Likely a fair amount. The hard part is there isn’t time to go back and review the literature on everything we do. Even if I only reviewed the information in a pre-appraised resource like Dynamed or UpToDate I wouldn’t have enough time, as I can barely find time to keep up with newer things without having to go through all the things I “KNOW” already.

I hope articles like this simple review in JAMA will help educate us all. I hope other journals follow JAMA’s lead and make distilled evidence summaries that we can quickly digest to improve the cost-effective care that we provide. I hope each of us will occasionally, maybe just once a month, question how we “KNOW” something, and if we can’t give a good enough answer, that we will take the time to find that answer. We likely will be surprised by how much we don’t know in medicine.