It’s a Sham It Doesn’t Work: Arthroscopic Partial Meniscectomy

The orthopedic surgeons won’t be happy about this at all. The most common orthopedic procedure is a sham… or should I say no better than a sham surgery. A study published in the New England Journal of Medicine on December 26th should change the current management of meniscal tears.

Sihvonen and colleagues randomized 70 Finnish patients without knee osteoarthritis to arthroscopic partial meniscectomy and 76 patients to a sham operation. These patients had failed at least 3 months of conventional conservative treatment. Patients with traumatic onset of symptoms or with osteoarthritis were excluded. The authors did something very interesting that both made the sham operation easier to perform and helped get around the ethical concerns of a sham operation: every patient first underwent a diagnostic arthroscopy, and it was during this diagnostic arthroscopy that the patient was randomized. At that point either a standard partial meniscectomy or an elaborate sham was performed. Everything was done the same postoperatively as far as wound care, rehab instructions, etc. The patients, those who determined the outcomes, and those who collected and analyzed the data were all blinded to study group assignment. The main outcome measures were knee pain after exercise (at 2, 6, and 12 months) on a validated scale and a validated meniscus-specific quality-of-life instrument. The authors also asked patients at 12 months whether they would undergo the procedure again and whether they had figured out which arm of the trial they were in.

Prior to looking at the results of this study we need to make sure the study is scientifically valid. Therapeutic studies should meet the following criteria:

  1. Were participants randomized? YES
  2. Was the random allocation method concealed? YES (the authors used opaque envelopes)
  3. Was intention to treat analysis used? YES
  4. Were the groups similar at the start of the study? YES
  5. Was blinding adequate? YES (only the operating room staff weren’t blinded but they didn’t participate in outcome determination)
  6. Were the groups treated equally apart from the intervention? YES
  7. Was follow-up sufficiently long and complete? YES

So I think this study is at low risk of bias and I can now move on to the results.

Outcomes in the meniscectomy and sham-surgery groups

This figure shows that the 2 treatment arms had the same effect on the validated outcome measures and on the knee pain Likert scale. 83% of sham-surgery patients reported improvement in knee pain compared with 88.6% of meniscectomy patients. Furthermore, 96% and 93% of patients, respectively, reported they would repeat the same procedure they had undergone. 5% of sham-surgery patients underwent additional arthroscopy compared with 1.4% of meniscectomy patients (p = NS).
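
As a rough back-of-the-envelope check of my own (not a calculation from the paper), you can see why a difference of roughly 5 to 6 percentage points in improvement rates isn’t statistically significant with groups this size. A quick sketch in Python, using the group sizes and proportions reported above:

    # Rough check of the reported improvement proportions (my illustration, not from the paper)
    from math import sqrt

    n_sham, n_men = 76, 70        # randomized group sizes reported in the trial
    p_sham, p_men = 0.83, 0.886   # proportion reporting improved knee pain

    diff = p_men - p_sham         # risk difference in favor of meniscectomy
    se = sqrt(p_sham * (1 - p_sham) / n_sham + p_men * (1 - p_men) / n_men)
    low, high = diff - 1.96 * se, diff + 1.96 * se   # approximate 95% CI

    print(f"risk difference {diff:.3f}, ~95% CI {low:.3f} to {high:.3f}")
    # prints roughly 0.056 with a CI of about -0.06 to 0.17, which crosses zero

A confidence interval that crosses zero is consistent with the trial’s conclusion that meniscectomy was no better than the sham procedure on this outcome.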

While this study is small, it was adequately powered for its outcomes. One could argue about the ethics of a sham operation, but this methodology is the only way to truly determine whether some procedures “work”. Even procedures that don’t really work can have a significant placebo effect, and a sham-controlled study is a powerful way to control for that effect.

It’s important to point out that these results only apply to patients with non-traumatic degenerative meniscal tears. Interestingly, the authors did a post-hoc subgroup analysis showing that patients with sudden onset of symptoms didn’t have any different outcomes with meniscectomy compared with sham surgery. There are major limitations with any post-hoc analysis, but this suggests that a similar study now needs to be done in patients with traumatic injuries.

What’s The Evidence For That? A New Series I Am Starting

One thing I have noticed is that current residents don’t seem to know the evidence supporting a lot of the treatments they use, especially if the studies were done before they started med school. I also don’t think they really want to read articles from the past when there are so many new things that excite them more. So I came up with a new series of data summaries I am going to make for teaching purposes during rounds, to remind the residents and students that there is evidence behind some of what we do. I designed them to be one-pagers that answer what I feel are the key questions on the particular topic I am covering. I also try to follow the Haynes 6S hierarchy and focus on evidence higher up the pyramid (that isn’t UpToDate or DynaMed). I want to hit the less sexy topics that we encounter a lot on the inpatient medicine service, like COPD exacerbation, hepatic encephalopathy, etc.

So here’s the first one I made: What’s The Evidence For That: Steroids for COPD Exacerbation. (Steroids for AECOPD) This took about 1.5-2 hours to make… mostly because I had to figure out how to make the template in Publisher do what I wanted it to do (and it fought me all the way).

Feel free to copy it and use it in your clinical teaching. Let me know if it is useful and how it could be made better. If you make any of your own, share them with me.

As I make more of these I will publish them here. I also plan to make Touchcasts of them and will post those here when I do.

 

Enhancing Physicians’ Use of Guidelines

Dr. Peter Pronovost recently penned a viewpoint piece for JAMA about how guidelines can be better implemented. He is well respected in the patient safety realm and clearly feels guidelines are a major way to improve patient safety. I agree that they are a piece of the puzzle. What I wanted to do in this post is critique his thoughts on enhancing guideline use by physicians. I think some of what he proposes is unrealistic at best and most likely impossible.

Let’s take one step back to look at barriers to guideline implementation that were identified in an important review by Cabana and colleagues in 1999. They performed a broad systematic review of 120 different surveys investigating 293 potential barriers to physician guideline adherence.  The figure below outlines what was found.

Barriers to Guideline Adherence

With this background, let’s analyze Dr. Pronovost’s 5 strategies for increasing guideline adherence.

  1. Guidelines should include an “unambiguous checklist with interventions linked in time and space”. He recommends focusing on key evidence-based practices. I concur with this recommendation. Checklists are something physicians can do, and they have been shown to reduce harmful events. Also, this approach would be behaviorally based and specific (i.e., something measurable). An example might be a checklist item ensuring that each day an assessment is documented in the chart of the need for continued bladder catheterization. What I worry about is that too many recommendations are made in many guidelines. Recommendations need to be prioritized and limited to those things that really make a difference. The checklists could become so burdensome that they impair patient care, as too much time would be spent checking off the checklist and not enough time actually caring for the patient.
  2. Guideline developers should “help clinicians identify and mitigate barriers to guideline use and share successful implementation strategies”. Here is the impossible part. While I agree in principle with this recommendation, it can’t be implemented. Barriers are a local phenomenon, often a hyperlocal phenomenon. My hospital has 3 separate primary care practices (1 a resident practice and 2 full-time provider practices) with very different types of docs and practice patterns. What will work in one of these clinics won’t work in the others. What works for one practitioner might not work for any other. Large centralized guideline developers just can’t be expected to develop solutions to barriers.
  3. Guideline developers could “collaborate to integrate guidelines for conditions that commonly coexist”. Most guidelines are single-disease guidelines developed by single-specialty groups. Patients are multimorbid. This is a great recommendation that will be tough to implement but that I agree with 100%. Diseases and their treatments interact with each other. Guideline developers often ignore these interactions. At best they will discuss some exceptions to the guidelines for other comorbidities, but this isn’t enough. There are enough diabetics with hypertension and coronary artery disease to warrant a guideline on them. What about hypertension with renal disease? The combinations would have to be carefully thought out, and the panels would need to be multidisciplinary, with primary care physicians playing the prominent leadership role.
  4. Rely on systems rather than the actions of individual clinicians. Bravo. Many things are not totally under the physician’s control, and nowadays there are often too many things for 1 person to think about. Systems need to be engineered to deal with the mundane things we physicians don’t like to deal with (like elevating the head of the bed in a ventilated patient; we would rather manage the ventilator). Multidisciplinary teams at each care site would need to be put together to design the processes of care.
  5. Create transdisciplinary teams to develop scholarly guidelines with practice strategies. Not much detail is given in the manuscript about this, but I think what he means is twofold: 1) teams of clinicians, epidemiologists, implementation scientists, and systems engineers would develop the guideline, and 2) these same types of teams would study best practices for implementing it. Currently many guidelines include neither implementation scientists nor systems engineers. It’s no wonder we have a hard time implementing guidelines when implementation isn’t really built into them from the start.

Much of this is already known but is important to keep saying. Someday guideline developers and policy wonks will listen. Just shoving a guideline in our faces isn’t the way to go. Currently reminders and “performance measures” are the main ways guidelines are being implemented. We will see if medical systems develop smart ways to use electronic health records to better implement guidelines.

Reading Journal Article Abstracts Isn’t As Bad As I Thought

Physicians mainly read the abstract of a journal article (JAMA 1999;281:1129). I must admit I am guilty of this also. Furthermore, I would bet that the most often read section of the entire article is the conclusions of the abstract. We are such a soundbite society.


I had always thought the literature showed how bad abstracts were… that they were often misleading compared to the body of the article. But I was wrong. A recent study published in BMJ EBM found that 53.3% of abstracts had a discrepancy compared with information in the body of the article. That sounds bad, doesn’t it? But only 1 of the discrepancies was clinically significant. Thus most of the discrepancies were not important enough to potentially cause patient harm or alter a clinical decision.

This is good news, as effectively practicing EBM requires information at the point of care. Doctors don’t have time to read an entire article at the point of care for every question they have, but they do have time to read an abstract. It’s good to know that structured abstracts (at least from the major journals that were reviewed in this study) can be relied upon for information. I especially like reading abstracts in evidence-based journals like BMJ EBM or ACP Journal Club, as even their titles give the clinical information you need.

“Can EBM Be Patient-Centered?”

I am a member of an international listserv about evidence-based healthcare. One poster asked “Is EBM patient-centered and is patient-centered care evidence based?” It is almost as if he views the 2 as mutually exclusive. In my experience many people don’t understand the EBM paradigm. This figure shows what EBM is and that it, by definition, is patient-centered.

EBM paradigm

The most important component of the EBM paradigm (the circles are in the order of importance) is patient preferences and actions. An evidence-based decision should consider patient values. Period. Thus, EBM is patient-centered.

Question 2: Is patient-centered care evidence based? It can be, but it might not be. Patients often don’t want the evidence-based care I offer them, like immunizations or colon cancer screening. So they aren’t receiving evidence-based care, but they are receiving patient-centered care.

Why Can’t Guideline Developers Just Do Their Job Right????

I am reviewing a manuscript about the trustworthiness of guidelines for a prominent medical journal. I have written editorials on this topic in the past (http://jama.jamanetwork.com/article.aspx?articleid=183430 and http://archinte.jamanetwork.com/article.aspx?articleid=1384244). The authors of the paper I am reviewing examined the recommendations made by 3 separate medical societies on the use of a certain medication for patients with atrial fibrillation. The data on this drug can be summarized as follows: little benefit, much more harm. But as you would expect, these specialists recommended its use in the same sentence as other safer and more proven therapies. They basically ignored the side effects and focused only on the minimal benefits.

Why do many guideline developers keep doing this? They just can’t seem to develop guidelines properly. Unfortunately their biased products carry weight with insurers, the public, and the legal system. The reasons are complex but the problem is solvable. A main reason (in my opinion) is that they are stuck in their ways. Each society has its guideline machine, and they churn guidelines out the same way year after year. Why would they change? Who is holding them accountable? Certainly not journal editors. (As a side note: the journals that publish these guidelines are often owned by the same subspecialty societies that developed the guidelines. Hmmmm. No conflicts there.)


The biggest problem, though, is conflicts of interest. There is intellectual COI. Monetary COI. Converting data to recommendations requires judgment, and judgment involves values. Single-specialty medical society guideline development panels involve the same types of doctors, who have shared values. But I always wonder how much the authors of these guidelines got from the drug companies. Are they so married to this drug that they don’t believe the data? Is it ignorance? Are they so intellectually dishonest that they only see benefits and can’t understand harm? I don’t think we will ever truly understand this process without having a proverbial fly on the wall present during guideline deliberations.

Until someone demands a better job of guideline development, I will still consider guidelines opinion pieces or, at best, consensus statements. We need to quit placing so much weight on them in quality assessment, especially when some guidelines, like these, recommend harmful treatment.

Quality Measures That Don’t Matter: The Case Against Influenza and Pneumococcal Vaccination Rate Measures

It struck me today, as I was listening to the radio and reviewing an email summary of articles that I get every day, that 2 “quality” measures I am held accountable for don’t really help much. I am talking about this year’s influenza vaccine and the pneumococcal vaccine.

Let’s start with the influenza vaccine. Where I practice, I am supposed to get at least 79% of my patients 65 years of age and older to take the vaccine. This doesn’t sound too bad, right? Why shouldn’t it be 100%? Well, the problem this year is that the influenza vaccine mostly sucks in this age group… per the CDC it’s only 9% effective in persons 65 years of age and older (http://www.cdc.gov/MMWR/preview/mmwrhtml/mm6207a2.htm?s_cid=mm6207a2_w).
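
As an aside of my own (the formula below is not spelled out in that MMWR report), these CDC mid-season estimates typically come from a test-negative (case–control) design, so “9% effective” roughly means

    VE = (1 − adjusted odds ratio) × 100%

In other words, vaccination was associated with only about a 9% reduction in the odds of medically attended influenza in this age group.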

What about the pneumococcal vaccine? It should really help people, right? I am supposed to get 95% of my patients to take this vaccine. Well, it kinda sucks too, per Moberley S, Holden J, Tatham DP, et al. Vaccines for preventing pneumococcal infection in adults. Cochrane Database Syst Rev. 2012 Jun 22;1:CD000422. DOI: 10.1002/14651858.CD000422.pub3. This well-done Cochrane review found that the vaccine prevented invasive pneumococcal disease but not pneumonia or mortality. So all it really prevents is bacteremia if you get pneumococcal pneumonia; it doesn’t actually prevent pneumonia. Somewhat of a misnamed vaccine if you ask me.

There is little benefit from these 2 vaccines, and yet I am supposed to recommend them to my patients. I don’t have great vaccination rates in my patients. Maybe I won’t feel so bad about that any longer. The policy wonks who make up these rules need to look at the data and measure what’s important. Unfortunately they don’t. They are mired in their measurement mentality without the benefit of an intellect.

Burning money for nothing

Shared Decision Making… That’s What EBM Is, But I Still Don’t Have To Like It.

This week’s New England Journal of Medicine has a Perspective article on shared decision making (http://www.nejm.org/doi/full/10.1056/NEJMp1209500). Shared decision making is basically educating the patient about their options and getting them to incorporate their values and preferences into the decision. That’s what EBM is, as shown below.

EBM paradigm

My passion is teaching EBM principles and practicing them as much as possible. So you may be surprised that I am not extolling the virtues of what the authors suggest. Not that I am against it, because I am not. But as I will outline below, it’s not practical right now.

The authors point out that shared decision making is rarely being practiced.

For example, in a study of more than 1000 office visits in which more than 3500 medical decisions were made, less than 10% of decisions met the minimum standards for informed decision making. Similarly, a study showed that only 41% of Medicare patients believed that their treatment reflected their preference for palliative care over more aggressive interventions.

I wonder why? It’s simple… time and resources. I am a primary care internist. I have 20 minutes scheduled per patient. I never get that 20 minutes. My nurses eat up a lot of it. It’s not their fault. Administration has put so many stupid reminders in place that they have to complete (like a homelessness screen, or a preferred-language screen at an institution where everyone who comes has to speak English in the first place and I don’t even have interpreters) that I have maybe 5 minutes left of my appointment. So am I supposed to spend the time it would take to go over the decision aids they suggest we use? I could, but then there would be no time for anything else and I would have to tell the patient to reschedule.

“Why not just have the patient review it online on their own?” Great idea! EXCEPT that less than 30% of my patients are computer literate. Also, have you looked at online decision aids? Many are way too complex for my patients. “Fine… give them a print version.” Sometimes that doesn’t exist, and you can’t just hand patients many of these decision aids and expect them to understand them. “What do the green and red smiley faces mean, doctor? Which one am I?”

Which all comes back to resources. “Hire a nurse to do it, doctor!” Sure… where’s the money for that? I can’t charge for this service (or at least not enough to pay for a nurse trained as a patient educator, who will demand a 6-figure salary). The current payment system is broken, and until it’s fixed and we are paid for our time we simply can’t afford to play this game. Unfortunately the authors of this article have no clue what the trenches are like because their ivory tower is too high for them to see us little people. They don’t understand current practice and the pressures that are on us primary care docs. Maybe we should develop a decision aid for all the idiots who keep thinking up things for us to do that aren’t really practical.