Why do clinicians continue medications with questionable benefit in advanced dementia?

A recent study in JAMA Internal Medicine estimated the prevalence of medications with questionable benefit among nursing home residents with advanced dementia. This is an important question because significant healthcare resources are utilized in the last 6 months of life. Furthermore, if there is no benefit, the only possible outcomes are excess cost, with or without harm. As the authors note, most patients at this stage just want comfort care and maximization of quality of life.

The researchers studied medication use deemed of questionable benefit among nursing home residents with advanced dementia using a nationwide long-term care pharmacy database. A panel of geriatricians and palliative medicine physicians defined a list of medications that are of questionable benefit when the patient's goal of care is comfort; it included cholinesterase inhibitors, memantine hydrochloride, antiplatelet agents (except aspirin), lipid-lowering agents, sex hormones, hormone antagonists, leukotriene inhibitors, cytotoxic chemotherapy, and immunomodulators.

53.9% of nursing home residents with advanced dementia were prescribed at least 1 questionably beneficial medication during the 90-day observation period, with cholinesterase inhibitors (36.4%), memantine hydrochloride (25.2%), and lipid-lowering agents (22.4%) being most commonly prescribed. Patients residing in facilities with a high prevalence of feeding tubes were more likely to be prescribed these questionably beneficial medications.

Among residents who used at least 1 questionably beneficial medication, the mean (SD) 90-day drug expenditure was higher ($2317 [$1357]; IQR, $1377-$2968, compared to $1815 for all residents), of which 35.2% was attributable to medications of questionable benefit (mean [SD], $816 [$553]; IQR, $404-$1188).
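As a quick sanity check, the reported figures are internally consistent: the mean spend on questionably beneficial medications divided by the mean total drug spend among users reproduces the stated percentage. A minimal verification in Python:

```python
# Check that the reported mean expenditures are internally consistent.
# Figures are taken from the study summary above (mean 90-day expenditures).
total_mean = 2317.0         # mean total drug spend among users of >=1 such med
questionable_mean = 816.0   # mean spend attributable to questionably beneficial meds

share = questionable_mean / total_mean * 100
print(f"{share:.1f}% of drug spending attributable to questionable medications")
# prints "35.2% of drug spending attributable to questionable medications"
```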


I think this study demonstrates excess medication usage and excess costs in a population in which costs are already high and for which this added cost is of little benefit. So why are these medications continued in this population? Are physicians unaware of the lack of benefit of these medications in this population? Are they aware but worried that stopping them will make the patient worse? I suspect a little of both is the correct answer.

A major challenge for EBM is getting the E out there. Numerous resources are available, but a major step in accessing a resource is recognizing a knowledge deficit. How do you know you don't know something? Pushing evidence (for example, by email) is useful for general knowledge but isn't useful for answering specific questions. Most push email services require enrollment to get the emails, and many clinicians probably don't even know they exist. Maintenance of certification could be a useful tool to improve knowledge if it were designed properly, but it can't cover everything for all clinicians.

So we have a dilemma. Studies like this show areas for improvement in knowledge and practice, but there are no great practical ways to improve either in a nursing home setting. Clinical reminders still have to be acted upon. Payers could refuse to pay for certain services, but docs will likely continue to order them, with the patient picking up the bill. Protocols could be put in place, but they have to be followed and agreed upon by clinicians. They all will have anecdotal evidence of grandpa getting worse when his cholinesterase inhibitor was stopped. And they will ask, "What's the harm in continuing it?"

Affect Heuristic, COI, or Lack of Knowledge? Why Do Cardiologists Overestimate Benefits of PCI in Stable Angina?

A recent study in JAMA Internal Medicine by Goff and colleagues made me wonder: are the Cardiologists studied uninformed of the limited benefits of stenting (PCI) for chronic stable angina, do they have too strong a conflict of interest due to economic gain, or is the affect heuristic playing a big part? Probably a mixture of them all. The COURAGE trial taught us that PCI was better than medical therapy at reducing anginal symptoms but wasn't any better at reducing MI and death.

Goff and colleagues reviewed 40 recordings of actual encounters between Cardiologists and patients being considered for cardiac catheterization and PCI. I am unsure if these were video or audio recordings. As best I can tell, these were all private practice Cardiologists. The Cardiologists either implicitly or explicitly overstated the benefits of angiography and PCI. They presented medical therapy as being inferior to angiography and PCI (a statement that defies the findings of the COURAGE trial). In fact, in only 2 of the encounters did they state that PCI would not reduce the risk of death or MI. These Cardiologists also didn't use good communication styles that encourage patient participation in the decision-making process.

Why might these Cardiologists do this? They could be uninformed of the limited benefits of PCI in stable angina, but I doubt it. COURAGE was a landmark publication in one of the world’s most prominent medical journals. I find it hard to believe that Cardiologists wouldn’t be aware of the results of this trial.

They certainly have a financial stake in their recommendations. The image below shows that a diagnostic cath is reimbursed at approximately $9,000, while a PCI with DES is reimbursed at approximately $15,000. That has to have an impact on decision making. I don't accuse these Cardiologists of doing a procedure only for money, but subconsciously this likely plays a role. Recommending medical therapy only gets you an office visit reimbursement (maybe $200 or so).

What about the affect heuristic? My colleague Bob Centor writes about this often in his blog. A heuristic is a quick little rule we use to make decisions. The affect heuristic is a particular rule based on our emotions about a topic. Do I like it? Do I hate it? How strongly do I feel about it? The affect heuristic leads to the answer to an easy question (How do I feel about something?) serving as the answer to a much harder question (What do I think about something?). It's not hard to imagine (and data in the Goff paper support this) a Cardiologist feeling that PCI is beneficial and should be done. They are emotionally tied to angiography and PCI… they have seen patients "saved" because of this procedure.

So what can be done? The solution is harder than identifying the problem (as is often the case). The easiest solution is for insurance companies to stop reimbursing for the procedure in stable angina unless patients have failed optimal medical therapy, but this is draconian. I also worry that patients would then receive bills for unreimbursed catheterization charges. I think the recording technology used in this study, combined with feedback, could be useful but logistically impossible. I have always wondered why we don't use secret shopper fake patients to evaluate physician skills and knowledge (of course the answer is a logistic one) instead of the MOC system. Just publishing a study doesn't work if physicians don't read it or don't use it to answer a clinical question. Patient decision aids (like this excellent example) could be very useful, but the physician would have to use the tool, and many don't even know they exist.

Some would argue EBM has failed again. A well done study was published and it hasn't made a difference. The principles of EBM have not failed and, in fact, if they were used, could limit the inappropriate use of PCI in stable angina patients. What has failed is the desire of the older Cardiologists in this study to learn and use these skills. Like many physicians, they rely on outdated knowledge and emotions or beliefs. As Bob Centor stated in a post about the affect heuristic: "Decision making bodies have biases. Until they understand their biases, we will have the problem of unfortunate, unnecessary and potential dangerous unintended consequences". In this case the Cardiologists are the decision making bodies, and the unintended consequences are the MIs, strokes, renal injury, and death that can and do occur from cardiac catheterization.

Hopefully you are now aware of what the affect heuristic is and how it impacts decision making. Acknowledge it and separate your feelings about a topic from the data. Your patients will benefit.

Knowing In Medicine

Or perhaps a more apt title would be "Not Knowing that we Don't Know in Medicine". A colleague and I gave a lecture last week on EBM in the context of how we know what to do in medicine. We pointed out that there are 4 general ways we know what we know in medicine: authority, clinical experience, pathophysiological rationale, and systematic investigation. I serendipitously read an article late last week in a great new series in JAMA called JAMA Diagnostic Test Interpretation. I thought about all the times I used ammonia levels incorrectly and all the times my colleagues and residents used ammonia incorrectly. Why? Was I too lazy to evaluate the literature in this area? Admittedly I hadn't, but it didn't even occur to me to do so, because during my training my senior residents and attendings told me to check ammonia levels in patients we suspected had hepatic encephalopathy, as it "would help make the diagnosis". Twenty years later, an epiphany has made me realize I have been doing the wrong thing all these years. I didn't know jack about the limitations of ammonia in chronic liver disease.

Why did I trust authority? Was it because John Mellencamp sang "I fought authority and authority always wins" that I didn't question what I was told to do? I wonder how many other things like this I do with no idea that I am doing them all wrong. Likely a fair amount. The hard part is there isn't time to go back and review the literature on everything we do. Even if I only reviewed the information in a preappraised resource like DynaMed or UpToDate, I wouldn't have enough time, as I can barely find time to keep up with newer things without having to go through all the things I "KNOW" already.

I hope articles like this simple review in JAMA will help educate us all. I hope other journals follow JAMA's lead and make distilled evidence summaries that we can quickly digest to improve the cost-effective care we provide. I hope each of us will occasionally, maybe just once a month, question how we "KNOW" something, and if we can't give a good enough answer, that we will take the time to find that answer. We likely will be surprised by how much we don't know in medicine.

Evidence Based Medicine Is Not In Crisis! Part 4

I've left the hardest issue for last: "Overemphasis on following algorithmic rules". This has been the most frustrating aspect of my primary care practice. Patients quit being viewed as patients and became a set of goals I had to achieve to be smiled upon fondly by my boss as "a good doctor". It took me some time to finally quit playing the game and just do the best I could, and whatever the numbers were, so be it.

Algorithmic medicine couldn't be more antithetical to EBM. Everyone is viewed the same. EBM, as I have argued in the last three posts, is clearly about individual patient values and circumstances. It's about clinical experience tempering what we could do into what we should do. Algorithmic medicine allows no individuality. No tempering. Thus, to claim EBM is in crisis because of algorithmic medicine is wrong. True EBM protects us from the harms of algorithmic medicine.

Interestingly computerized decision support systems (mentioned as a culprit in the first sentence of this section of Greenhalgh’s paper) are at the top of Haynes’ 6S hierarchy of preappraised evidence.

"In these computerized decision support systems (CDSSs), detailed individual patient data are entered into a computer program and matched to programs or algorithms in a computerized knowledge base, resulting in the generation of patient-specific assessments or recommendations for clinicians" – Brian Haynes

At the VA we have a moderately sophisticated CDSS. It warns me if my patient with heart failure is not taking an ACE inhibitor, and it's smart enough that if I enter an allergy to ACE inhibitors it won't prompt me to order one. If I tell it that a patient has limited life expectancy, it will not prompt me to pursue certain routine health screenings. Thus, I don't view CDSSs as problematic in and of themselves. The problem arises when physicians don't consider the whole patient (remember those values and clinical circumstances) in deciding whether or not to follow prompted recommendations.
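The rule-matching Haynes describes can be illustrated with a tiny sketch. This is a hypothetical toy in Python, not the VA system's actual logic; the field names and rules are my own inventions for illustration:

```python
# Minimal, hypothetical sketch of a rule-based CDSS of the kind described above.
# Patient fields and rule wording are illustrative assumptions, not a real system.

def cdss_prompts(patient):
    """Match one patient's data against a small knowledge base of rules."""
    prompts = []
    # Rule 1: prompt an ACE inhibitor for heart failure, unless the patient
    # is already on one or has a documented allergy.
    if ("heart failure" in patient["diagnoses"]
            and "ACE inhibitor" not in patient["medications"]
            and "ACE inhibitor" not in patient["allergies"]):
        prompts.append("Consider starting an ACE inhibitor")
    # Rule 2: suppress routine screening prompts when life expectancy is limited.
    if not patient["limited_life_expectancy"]:
        prompts.append("Routine health screening due")
    return prompts

patient = {
    "diagnoses": {"heart failure"},
    "medications": set(),
    "allergies": {"ACE inhibitor"},     # documented allergy suppresses Rule 1
    "limited_life_expectancy": True,    # suppresses Rule 2
}
print(cdss_prompts(patient))  # prints "[]"
```

The point of the sketch is the same as the point in the text: the rules fire on structured data alone, and it is still the clinician's job to weigh the prompts against the whole patient.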

Greenhalgh has made great points about what happens when good ideas are hijacked and distorted for secondary gain but EBM is not to blame. Victor Montori (@VMontori) said it best in a Tweet to me:

"EBM principles are not in crisis, but corruption of healthcare has oft hidden behind the e-b moniker. EBM helps uncover it".


Evidence Based Medicine Is Not In Crisis! Part 3

In this installment I want to jump ahead in Greenhalgh's paper to address her last cause of the EBM crisis: "Poor fit for multimorbidity". Not to worry, I will come back in a future post to cover the remaining "problems" of EBM.

I concur with Greenhalgh that individual studies, by themselves in a vacuum, have limited applicability to patients with multimorbidity. Guidelines don't help, as they also tend to be single-disease focused and developed by single-disease -ologists. So is EBM at fault here again? Of course not. EBM skills to the rescue.

The current model of EBM demonstrated below contains 2 important elements: clinical state and circumstances and clinical experience.

Clinical state and circumstances largely refers to the patient's comorbidities, the various other treatments they are receiving, and the clinical setting in which the patient is being seen. Thus, the EBM paradigm is specifically designed to deal with multimorbidity. Clinical expertise is used to discern what impact other comorbidities have on the current clinical question under consideration and, along with the clinical state/circumstances, helps us decide how to apply a narrowly focused study or guideline to a multimorbid patient. Is this ideal? No. It would be nice if we had studies that included patients with multiple common diseases, but we have to treat patients with the best available evidence we have.


Evidence Based Medicine Is Not In Crisis! Part 2

Greenhalgh and colleagues report that the "second aspect of evidence based medicine's crisis… is the sheer volume of evidence available". But EBM is not the purveyor of what is studied and published. EBM is a set of skills to effectively locate, evaluate, and apply the best available evidence. For much of what we do there is actually a paucity of research data answering clinically relevant questions (despite there being a lot of studies, which gets back to her first complaint about distortion of the evidence brand; see part 1 of this series). I teach my students and housestaff to follow Haynes' 6S hierarchy when trying to answer clinical questions. Because much of the hierarchy is preappraised literature, someone else has already dealt with the "sheer volume of evidence". Many clinical questions can be answered at the top of the pyramid.

I concur with Greenhalgh that guidelines are out of control. I have written on this previously. We don't need multiple guidelines on the same topic, often with conflicting recommendations. I believe we would be better off with central control of guideline development under the auspices of an agency like AHRQ or the Institute of Medicine. It would be much easier to produce trustworthy guidelines, and guidelines on topics for which we truly need guidance. (Really, American Academy of Otolaryngology… do we need a guideline on ear wax removal?) It can be done. AHCPR previously made great guidelines on important topics. Unfortunately we will probably never go back to the good ole days. Guidelines are big business now, with specialty societies staking out their territory and government and companies bastardizing them into myriad performance measures.


Evidence Based Medicine Is Not In Crisis! Part 1

Trisha Greenhalgh and colleagues wrote an opinion piece in BMJ recently lamenting (or perhaps exalting) that the EBM movement is in crisis for a variety of reasons. I don’t agree with some of the paper and I will outline in a series of posts why I disagree.

When most people complain about EBM or discuss its shortcomings, they usually are not basing their arguments on the current definition of EBM. They use the original definition, in which EBM was defined as the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. This definition evolved to "the integration of best research evidence with clinical expertise and patient values. Our model acknowledges that patients' preferences rather than clinicians' preferences should be considered first whenever it is possible to do so".

The circles in this diagram are ordered based on importance- with patient preferences and actions being most important and research evidence being the least important when practicing EBM. You can see that clinical expertise is used to tie it all together and decide on what should be done, not what could be done.

Back to the Greenhalgh paper. Her first argument is that there has been distortion of the evidence brand. I agree. It seems everyone wants to add the "evidence based" moniker to their product. But she argues this goes beyond just a labeling problem: the drug and medical device industry is determining our knowledge because they fund so many studies. Is this the fault of EBM? Or should funding agencies like the NIH and regulatory agencies like the FDA be to blame? I think the latter. Industry will always be the main funder of studies of its own products, and it should be. They should bear the cost of getting a product to market. That is their focus. To suggest they shouldn't want to make a profit is just ridiculous.

The problem arises in what the FDA (and equivalent agencies in other countries) allows pharma to do. Greenhalgh points out the gamesmanship pharma plays when studying its drugs to get the outcomes it desires. I totally agree with what she points out. Ample research proves her points. But it's not EBM's fault. The FDA should demand that properly conducted trials with hard clinical outcomes be the standard for drug approval. Companies would do this if they had to in order to get a drug to market. I also blame journal editors who publish these subpar studies. Why do they? To keep advertising dollars? The FDA should also demand that any study done on a drug be registered and be freely available and published somewhere easily accessible (maybe ClinicalTrials.gov). Those with adequate clinical and EBM skills should be able to detect when pharma is manipulating drug dosages, using surrogate endpoints, or overpowering a trial to detect clinically insignificant results. I look at this as a positive for continuing to train medical students and doctors in these skills.

Research has shown that industry funded studies overestimate the benefits of their drugs by perhaps 20-30%. A simple way to deal with this is to take any result from an industry funded study and reduce it by 20-30%. If the findings remain clinically meaningful, then use the drug or device.
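The discounting heuristic above is simple enough to write down. Here is a minimal sketch in Python; the 30% relative risk reduction used as input is a made-up illustration, not a figure from any real trial:

```python
# Sketch of the heuristic above: shrink an industry-funded study's reported
# effect size by 20-30% before judging whether it is clinically meaningful.
# The input numbers are illustrative, not from any real study.

def discounted_effect(reported_effect, discount=0.25):
    """Reduce a reported effect (e.g., relative risk reduction) by a discount fraction."""
    return reported_effect * (1 - discount)

reported_rrr = 0.30  # hypothetical: study claims a 30% relative risk reduction
for d in (0.20, 0.30):
    adjusted = discounted_effect(reported_rrr, d)
    print(f"discount {d:.0%}: adjusted RRR = {adjusted:.1%}")
# prints:
# discount 20%: adjusted RRR = 24.0%
# discount 30%: adjusted RRR = 21.0%
```

If the effect still clears your threshold for clinical meaningfulness after the haircut, the drug or device is worth considering; if the discount erases it, the claimed benefit was probably too fragile to act on.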

I agree with Greenhalgh that current methods to assess study biases are outdated. The Users' Guides served their purpose but need to be redone to detect the subtle gamesmanship going on in studies. Future and current clinicians need to be trained to detect these subtle biases. Alternatively, why can't journals have commentaries about every article, similar to what BMJ Evidence-Based Medicine and ACP Journal Club do? These could then be used to educate journal users on these issues and put the results of studies into perspective.