“The purpose of practice guidelines must be to develop the best possible recommendations from a body of evidence that may be contradictory or inadequate.”
While I agree that recommendations from an expert body are useful when evidence is inadequate or contradictory, I don’t think they should be labeled guidelines. A consensus statement is a more appropriate term. After all, if the evidence is lacking or contradictory, aren’t these experts just giving their opinion? Isn’t it possible that another group of experts would give a different opinion?
So don’t label it a guideline. That term has garnered a reverence that was never intended. Guidelines almost become law: they are bastardized into punitive performance measures and become the cornerstone of legal argument. So the term guideline should not be used lightly.
“…but those recommendations should always represent the best evidence and the best expert opinion currently available.”
NO! No expert opinion. Data are too open to interpretation. Humans filter information using prior knowledge, experience, and many heuristics (including, very importantly, the affect heuristic). A person’s specialty strongly influences how they interpret data. That is one reason it is so important to have multidisciplinary panels, so that conflicts and heuristics can be balanced. Unfortunately, most guideline panels are homogeneous and conflicted.
I agree that we need unambiguous language in guidelines. They should contain recommendations only on things supported by strong evidence that no one refutes. When they venture into the world of vagaries they become nothing more than opinion pieces.
First they use the IMPROVE-IT study. In this study, patients hospitalized for ACS were randomized to a combination of simvastatin (40 mg) and ezetimibe (10 mg) or to simvastatin (40 mg) and placebo (simvastatin monotherapy). LDL levels were already fairly low in this group: baseline LDL cholesterol had to be between 50 and 100 mg per deciliter for patients receiving lipid-lowering therapy, or between 50 and 125 mg per deciliter for those not on lipid-lowering therapy (average baseline LDL was 93.8 mg/dl). The results showed only minimal benefits.
Current guidelines would recommend a high-potency statin in this patient population. Adding ezetimibe to a moderate-dose statin is probably equivalent to a high-potency statin (from an LDL-lowering perspective). This study (and all ezetimibe studies) should have tested the difference between simvastatin 40 mg plus ezetimibe 10 mg and simvastatin 80 mg, or atorvastatin 40 or 80 mg. So to me IMPROVE-IT doesn’t PROVE anything other than that a more potent lipid-lowering regimen leads to fewer cardiovascular events…something we already know.
Now on to the second argument. They argue that alirocumab (Praluent), the first in a new class, the proprotein convertase subtilisin/kexin type 9 (PCSK-9) inhibitors, should lead us back to LDL-guided therapy. Why? “Early results suggest these drugs have a powerful effect on levels of low-density lipoprotein cholesterol (LDL-C), likely more potent than statins“. A systematic review of studies of this drug shows a mortality reduction, but the comparators in these studies were placebo or ezetimibe 10 mg. Why? We have proven therapy for LDL, and this drug should have been compared to high-potency statins. That study will likely never be done (unless the FDA demands it) because the companies making this drug can’t risk finding that it works only as well as a high-potency statin, or possibly worse. Also, does this class of drugs have anti-inflammatory effects like statins? Are they safer? This is an injectable drug that has to be warmed to room temperature prior to use and is very costly compared to generic atorvastatin.
In my opinion, no guideline should be changed without appropriately designed outcomes studies for the drugs being recommended. In this case, the risk-benefit margin needs to be impressive to justify the cost as we have dirt cheap potent statins already.
The authors of this viewpoint make no compelling rational argument for guideline change other than that there is a new drug on the market and it might work. Let’s see if it does, and at what cost (both monetary and physiological).
This week JAMA Internal Medicine published a research letter reporting data on the underrepresentation of women, elderly patients, and racial minorities in RCTs used to inform cardiovascular guidelines. The authors state that RCTs are considered the highest level of evidence for informing guideline development. I would argue systematic reviews would be even better, but I understand that the questions addressed in guidelines often need individual RCTs to answer them. They then state that “RCTs can have limited external validity”. What do you think?
The authors evaluated all references and then focused on RCTs that were cited in the ACC/AHA guidelines on atrial fibrillation, heart failure, and acute coronary syndromes. They extracted data on age, gender, ethnicity, and continents from which subjects were recruited. What did they find?
Female representation was highest in RCTs of atrial fibrillation (33%), followed by ACS (29%) and heart failure (29%). The next question you should ask is: how does this compare to the actual gender representation of people affected by these diseases? In US registries, women make up 55% of atrial fibrillation patients, 42% of ACS patients, and 47% of heart failure patients. Thus women are underrepresented by up to 22 percentage points in these studies, but does this affect guideline recommendations? Another way to think about it: would more data change recommendations for women? Hard to know for sure, but I suspect not. If enrollment is properly conducted, those enrolled should be a sample of all women with atrial fibrillation, ACS, and heart failure. Even though the sampling fraction is smaller, as long as the sample is representative of all women with those problems there should be no bias. The statistical inferences could be affected by the smaller sample sizes, but the overall qualitative findings (i.e., benefit or harm) should not be.
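The “up to 22 percentage points” figure is just the registry share minus the RCT enrollment share for each disease, largest for atrial fibrillation. A quick sketch of that arithmetic, using the percentages quoted above:

```python
# Gap between female share of US registry patients and female enrollment
# in guideline-cited RCTs (percentages as quoted from the research letter).
rct_female = {"atrial fibrillation": 33, "ACS": 29, "heart failure": 29}
registry_female = {"atrial fibrillation": 55, "ACS": 42, "heart failure": 47}

# Underrepresentation in percentage points, per disease.
gaps = {d: registry_female[d] - rct_female[d] for d in rct_female}
print(gaps)  # {'atrial fibrillation': 22, 'ACS': 13, 'heart failure': 18}
```

Note the gap is a difference in percentage points, not a relative percentage; the relative shortfall (22/55, about 40% for atrial fibrillation) would be larger still.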
As expected, the majority of patients enrolled in these studies were white. Black patients constituted 19% of heart failure RCT patients and 6% of both atrial fibrillation and ACS RCT patients. In US registries of heart failure, atrial fibrillation, and ACS, black patients make up 6%, 21%, and 11% respectively. Again, I don’t have a problem with this if sampling was done properly.
The elderly (defined as those >75 years of age) are very underrepresented, constituting only 2% of patients in all the RCTs combined. In this case guideline developers will have to rely on observational data or expert opinion to inform recommendations.
Finally, the authors point out that 94% of enrolled patients came from North America or Europe. Is this a problem? I don’t think so for the US, as ACC/AHA guidelines are developed to guide the treatment of American patients. Patients from the underrepresented continents will have less direct evidence informing recommendations on their care; consequently, those recommendations will be based more on expert opinion.
I am preparing for a talk on the controversy surrounding JNC-8 and came across a post on KevinMD.com by an author of a Cochrane systematic review that aimed to quantify the effects of antihypertensive drug therapy on mortality and morbidity in adults with mild hypertension (systolic blood pressure (BP) 140-159 mmHg and/or diastolic BP 90-99 mmHg) and without cardiovascular disease. This is an important endeavor because the majority of people we consider treating for mild hypertension have no underlying cardiovascular disease.
David Cundiff, MD in his KevinMD.com post made this statement:
The JNC-8 authors simply ignored a systematic review that I co-authored in the Cochrane Database of Systematic Reviews that found no evidence supporting drug treatment for patients of any age with mild hypertension (SBP: 140-159 and/or DBP 90-99) and no previous cardiovascular disease, diabetes, or renal disease (i.e., low risk).
Let’s see if you agree with his assessment of the findings of his systematic review.
As is typical for a Cochrane review, the methods are impeccable, so we don’t need to critically appraise them and can move straight to the results. The following images are figures from the review. Examine them and then I will discuss my take on the results.
Coronary Heart Disease results
If you just look at the summary point estimates (black diamonds), you would conclude that treating mild hypertension in adults without cardiovascular disease has no effect on mortality, stroke, or coronary heart disease but greatly increases withdrawal from the study due to adverse effects. But you are a smarter audience than that. The real crux lies in the studies listed and in examination of the confidence intervals.
Let’s examine stroke closely. Three studies that examined the effect of treating mild hypertension on stroke outcomes were included. Two of them recorded no stroke outcomes at all, so the majority of the data came from a single study. The point estimate of effect was in fact a 49% reduction in stroke, but the confidence interval included 1.0, so the result was not statistically significant. That interval ranged from 0.24 to 1.08: anywhere from a 76% reduction in stroke to an 8% increase. I would argue that a clinically important effect (stroke reduction) is very possible, and had the studies been adequately powered we might well have seen a statistically significant reduction. To suggest no effect on stroke is misleading. The same can be said for mortality.
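Translating a risk ratio and its confidence limits into the percentages used above is simple arithmetic. A minimal sketch, assuming the RR point estimate implied by the quoted 49% reduction (0.51) and the interval 0.24–1.08:

```python
# Percent change in risk implied by a risk ratio: (RR - 1) * 100.
# Negative values are reductions; positive values are increases.
def pct_change(rr):
    return round((rr - 1) * 100)

rr_point, ci_low, ci_high = 0.51, 0.24, 1.08

print(pct_change(rr_point))  # -49 -> the 49% point-estimate reduction
print(pct_change(ci_low))    # -76 -> up to a 76% reduction
print(pct_change(ci_high))   #   8 -> up to an 8% increase
```

The same arithmetic is why a "non-significant" interval this wide cannot be read as evidence of no effect: it is compatible with both a large benefit and a small harm.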
Finally, what about withdrawals due to adverse effects? Only one study provided any data. It has an impressive risk ratio of 4.80 (almost a 5-fold increased risk of stopping the drugs due to adverse effects). But the absolute risk increase is only 9% (NNH 11). We are not told what these adverse effects were, so we can’t know whether they were clinically worrisome or just nuisances for patients.
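The NNH here is just the reciprocal of the absolute risk increase; a one-line sketch:

```python
# Number needed to harm (NNH) from an absolute risk increase (ARI).
# ARI ~9% for withdrawals due to adverse effects in the single study.
ari = 0.09
nnh = round(1 / ari)  # 1/0.09 = 11.1; some authors round NNT/NNH up,
                      # which would give 12, but the review reports 11
print(nnh)  # 11
```

This is also why a dramatic risk ratio (4.80) can coexist with a modest absolute effect: the relative measure says nothing about baseline risk.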
So, I don’t agree with Dr. Cundiff’s assessment that there is no evidence supporting treatment. I think the evidence is weak, but there is no strong evidence to say we shouldn’t treat mild hypertension. The confidence intervals include clinically important benefits to patients. More studies are needed but are unlikely to be forthcoming. Observational data support treating this group of patients and may have to be relied upon in making clinical recommendations.
I am a member of a guideline panel (sponsoring organization not to be named) on screening for prostate cancer. I am the only primary care physician/internist on the panel (as best I can tell); the majority are urologists. Recently a revised guideline manuscript was sent around for comment. From my biased point of view (my intellectual conflict of interest) I was against several of the recommendations, not only because the evidence didn’t strongly support them but also because I have to deal with the unhappy patients who undergo prostate cancer screening, are found to have something, and ultimately get a procedure that worsens their quality of life. The urologists just couldn’t understand how I could be against screening all men and getting a baseline PSA at age 40. At one point I was called “pathetic” for holding such thoughts and for teaching my residents to follow the USPSTF recommendation against screening for prostate cancer in average-risk men. Even the American Urological Association’s stance is to engage in shared decision making with men about prostate cancer screening.
So why all the pushback from my urological colleagues? The easy answer is financial. They make money from prostate biopsies and from the surgical and hormonal treatment of prostate cancer. But I think it goes deeper than that, especially since many are academic urologists and probably don’t have as great a financial incentive to evaluate and treat more prostate cancer (though I could be wrong). I think their intellectual conflict of interest is the main problem. Their research and academic belief that prostate cancer screening is good is so strong that they can’t see anyone else’s point of view (or accept that those views are equally meritorious). They can’t understand how I give greater weight to the risk side of the risk/benefit equation. At least financial conflicts of interest are visible (when disclosed) and understandable. Intellectual conflicts of interest are usually subconscious and hard to overcome, as I have found out. It will be an interesting face-to-face meeting this fall when we get together for another update.
I gave a CME seminar this week on treating hypertension in the elderly and after my presentation a clinical pharmacist asked me an interesting question: “What do you follow? JNC 7 or JNC 8?”.
I thought this was an interesting question and one I hadn’t considered at all. After all, shouldn’t an updated guideline trump the previous one? I like JNC 8 because its methodology is more explicit and more consistent with IOM principles than JNC 7’s. One can argue with some of the decisions made about the evidence review (i.e., that they included only RCTs and ignored systematic reviews and observational data) and be concerned about the degree of conflict of interest among the panel members. But what JNC 8 did was make life simpler: the BP goals are easily remembered. They are <150/90 for those over 60 years of age and <140/90 for everyone else, including those with diabetes or CKD (regardless of age). For these reasons I prefer JNC 8. Is it perfect? No, but I suspect they will address many of the concerns critics have expressed, and further questions that need answering, in future updates (which they promise will come in a timely fashion).
Dr. Peter Pronovost recently penned a viewpoint piece for JAMA about how guidelines can be better implemented. He is well respected in the patient safety realm and clearly feels guidelines are a major way to improve patient safety. I agree that they are a piece of the puzzle. What I wanted to do in this post is critique his thoughts on enhancing guideline use by physicians. I think some of what he proposes is unrealistic at best and most likely impossible.
Let’s take one step back to look at barriers to guideline implementation that were identified in an important review by Cabana and colleagues in 1999. They performed a broad systematic review of 120 different surveys investigating 293 potential barriers to physician guideline adherence. The figure below outlines what was found.
Barriers to Guideline Adherence
With this background, let’s analyze Dr. Pronovost’s 5 strategies to increase guideline adherence.
Guidelines should include an “unambiguous checklist with interventions linked in time and space“. He recommends focusing on key evidence-based practices. I concur with this recommendation. Checklists are something physicians can do, and they have been shown to reduce harmful events. They would also be behaviorally based and specific (i.e., measurable). An example might be a checklist item ensuring that each day an assessment is documented in the chart of the need for continued bladder catheterization. What I worry about is that many guidelines make too many recommendations. Recommendations need to be prioritized and limited to the things that really make a difference. Otherwise the checklists could become so burdensome that they impair patient care, with too much time spent checking off the checklist and not enough time actually caring for the patient.
Guideline developers should “help clinicians identify and mitigate barriers to guideline use and share successful implementation strategies“. Here is the impossible one. While I agree with this recommendation in principle, it can’t be implemented centrally. Barriers are a local phenomenon, often a hyperlocal one. My hospital has 3 separate primary care practices (one a resident practice and two full-time provider practices) with very different types of docs and practice patterns. What works in one of these clinics won’t work in the others. What works for one practitioner might not work for any other. Large centralized guideline developers just can’t be expected to develop solutions to local barriers.
Guideline developers could “collaborate to integrate guidelines for conditions that commonly coexist“. Most guidelines are single-disease guidelines developed by single-specialty groups, yet patients are multimorbid. This is a great recommendation that will be tough to implement, but I agree with it 100%. Diseases and their treatments interact with each other, and guideline developers often ignore these interactions. At best they discuss some exceptions to the guidelines for other comorbidities, but this isn’t enough. There are enough diabetics with hypertension and coronary artery disease to warrant a guideline for them. What about hypertension with renal disease? The combinations would have to be carefully thought out, and the panels multidisciplinary, with primary care physicians playing the prominent leadership role.
Rely on systems rather than the actions of individual clinicians. Bravo. Many things are not totally under the physician’s control, and nowadays there are often too many things for one person to think about. Systems need to be engineered to handle the mundane things we physicians don’t like to deal with (like elevating the head of the bed in a ventilated patient; we would rather manage the ventilator). Multidisciplinary teams at each care site would need to be assembled to design the processes of care.
Create transdisciplinary teams to develop scholarly guidelines with practice strategies. Not much detail is given in the manuscript, but I think he means two things: 1) teams of clinicians, epidemiologists, implementation scientists, and systems engineers would develop the guideline, and 2) these same teams would study best practices for implementing it. Currently many guideline panels include neither implementation scientists nor systems engineers. It’s no wonder we have a hard time implementing guidelines when implementation isn’t built into them from the start.
Much of this is already known, but it is important to keep saying it. Someday guideline developers and policy wonks will listen. Just shoving a guideline in our faces isn’t the way to go. Currently, reminders and “performance measures” are the main ways guidelines are implemented. We will see whether medical systems develop smart ways to use electronic health records to implement guidelines better.
It is common for multiple guidelines to be made by different developers on the same topic. Problems arise though when different guidelines make differing recommendations. Which one should you believe?
The guideline development process is complex. The Institute of Medicine has published a framework for trustworthy guideline development, as have other groups. A great deal of judgment and decision-making goes into developing guidelines, and any step along the development pathway may be performed differently from one group to another; that is why recommendations can diverge.
What are some of the main reasons why different guidelines might have different recommendations?
They attempt to provide guidance on different clinical questions. The guidelines simply address different things, even though they seem similar. Obviously, you would choose the one that best fits the clinical scenario.
Different evidence bases were used to make recommendations. It could be that one guideline is newer than another and contains more up-to-date evidence. More problematic is when guidelines are released at about the same time yet have differing evidence bases. As I mentioned earlier, many judgments and decisions are made during the guideline development process. An important one is which studies to include in or exclude from the evidence review. This is more subjective than you might realize: it’s easy to craft study selection criteria that include only the evidence supporting your point of view while excluding the evidence that doesn’t. Pick the guideline with the most comprehensive literature search, and make sure the exclusion criteria make sense and don’t just prop up a biased point of view.
Different outcomes were considered. Recommendations are made to improve care in the hope of improving some outcome. It could be that one guideline focused on a surrogate marker (e.g., LDL levels) while another focused on hard clinical outcomes (MI and stroke event rates). Go with the one focused on hard clinical outcomes.
Values, biases, and conflicts of interest of the guideline developers are probably the main reason for disparate recommendations. Almost every guideline panel is biased in some way. The best you can hope for is that multiple biases and conflicts of interest are represented and essentially cancel each other out (by assembling a multidisciplinary panel to develop the guideline). Just disclosing conflicts of interest does nothing to lessen their impact. Moving from evidence to recommendations involves value judgments, and the recommendations in a guideline are shaped by those values. Different groups will weigh benefits and harms differently even when they review the exact same evidence base. One need look no further than the breast cancer screening guidelines to find very different value structures between the cancer organizations and the USPSTF. Unfortunately, the values of the panel are usually not explicitly stated and must be inferred from the recommendations that are made. You should choose the guideline that best matches the values of the patient.
I tried to give a very basic overview of why guidelines can differ. There are other reasons but these are the main ones. Clinicians should look for and use guidelines that are trustworthy. They should not follow recommendations uncritically but seek to understand what values and judgments shaped the recommendations.
In noon conference today I reviewed the good, the bad, and the ugly of the recently released ACC/AHA cholesterol treatment guidelines. Below is a YouTube video review of the guidelines. It will be interesting to see how cholesterol management evolves over the next few years. There are groups like the National Lipid Association who feel that removing the LDL goals from the new guideline was a mistake. Likewise, the European Society of Cardiology lipid guidelines recommend titrating statins to LDL targets. Conflicting guidelines are always a problem. I will address conflicting guidelines in my next post and what to think about when you see conflicting recommendations on seemingly the same topic.