N-of-1 Trial for Statin-Related Myalgia: Consider Conducting These Studies in Your Practice

The March 4th edition of the Annals of Internal Medicine contains an article by Joy and colleagues in which they conducted an N-of-1 trial in patients who had previously not tolerated statins. This is important because patients often complain that they cannot tolerate statins despite needing them. I have wondered how much of this is a self-fulfilling prophecy, because patients hear a lot about statin side effects from friends and various media outlets. An N-of-1 trial is a great way to determine whether statin-related symptoms are real or imagined.

First, let’s discuss N-of-1 trials.

What is an N-of-1 trial? It’s an RCT of active treatment vs. placebo in an individual patient. The patient serves as his or her own control, thus controlling for a variety of biases.

When is an N-of-1 trial most useful? This design is not useful for self-limited illnesses, acute or rapidly evolving illnesses, surgical procedures, or prevention of irreversible outcomes (like stroke or MI). It’s most useful for chronic conditions for which therapy is prolonged. It’s best if the effect you are looking for occurs fairly quickly and goes away quickly when treatment is stopped. These trials are a good way to determine the optimal dose of a medication for a patient. They are also good for determining whether an adverse effect is truly due to a medication. Finally, they are a good way to test a treatment’s effect when the clinician feels it will be useless but the patient insists on taking it.

How is an N-of-1 trial conducted? Get informed consent from the patient and make sure they understand that a placebo will be part of the study. Next, the patient undergoes pairs of treatment periods in which one period of each pair is randomly assigned to the active treatment and the other to placebo. A pharmacist will need to be involved to compound the placebo and to develop the randomization scheme (so as to keep clinician and patient blinded). Pairs of treatment periods are replicated a minimum of 3 times. There needs to be a washout period when moving from active treatment to placebo and vice versa. The length of each treatment period needs to be long enough for the outcome of interest to occur. Use the rule of 3s here: if an event occurs on average once every x days, then observe for 3x days to be 95% confident of observing at least 1 event.

What outcome should be measured? Most commonly these trials are conducted to determine the effect of an intervention on quality-of-life measures (e.g., pain, fatigue, etc.).
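As an aside on choosing the treatment period length, the rule of 3s falls straight out of the Poisson distribution: observing for 3x days gives a 1 - e^-3 (about 95%) chance of catching at least one event. A quick sketch (the function name is my own, purely for illustration):

```python
import math

# The rule of 3s, assuming events follow a Poisson process: if an event
# occurs on average once every x days, observing for 3x days gives a
# 1 - e^-3 (about 95%) chance of seeing at least one event.
def prob_at_least_one_event(x_days, observation_days):
    """Probability of observing at least 1 event when events occur
    on average once every x_days (Poisson assumption)."""
    expected_events = observation_days / x_days
    return 1 - math.exp(-expected_events)

# Example: a symptom appears on average once every 5 days;
# watch for 15 days (3 * 5) to be ~95% sure of catching it at least once.
p = prob_at_least_one_event(5, 15)   # ~0.95
```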
Ask the patient what their most troubling symptom or problem is and measure that as your outcome. Have the patient keep a diary or rate their symptoms on some meaningful scale at set follow-up intervals, both while on active treatment and while on placebo. You will have to determine in advance how much of a difference is clinically meaningful.

How do I interpret N-of-1 trial data? This can be a little difficult for non-statistically oriented clinicians. You could do the eyeball test and just see if there are important trends in the data. More rigorously, you could calculate the difference in mean scores between the placebo and active treatment periods and compare them using a t test (calculators are freely available on the internet).
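To make the t-test step concrete, here is a toy paired t test on made-up diary scores from 3 active/placebo pairs. The scores are invented, not from any trial; 4.303 is the two-sided 5% critical value of the t distribution with 2 degrees of freedom.

```python
from statistics import mean, stdev

# Hypothetical pain scores (0-10), one mean diary score per treatment
# period, from 3 active/placebo pairs -- illustrative numbers only.
active  = [6.0, 5.5, 6.5]
placebo = [5.5, 6.0, 6.0]

# Paired t test on the within-pair differences
diffs = [a - p for a, p in zip(active, placebo)]
n = len(diffs)
t_stat = mean(diffs) / (stdev(diffs) / n ** 0.5)

# With n - 1 = 2 degrees of freedom, the two-sided 5% critical value
# is about 4.303; a |t| below that is not statistically significant.
significant = abs(t_stat) > 4.303
```

With these numbers t works out to 0.5, far below 4.303, so this hypothetical patient's symptoms would not differ significantly between active drug and placebo.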

Back to Joy and colleagues’ N-of-1 trial on statins. They enrolled patients with prior statin-related myalgias. Participants were randomly assigned to receive either the same statin and dose that they previously didn’t tolerate or placebo. They remained on “treatment” for 3-week periods with 3-week washout periods in between. Patients rated their symptoms weekly on visual analogue scales for myalgia and for specific symptoms (0-100, with 0 being no symptoms and 100 being the worst symptoms). A difference of 13 was considered clinically significant. What did they find? There were no statistically or clinically significant differences between statin and placebo on the myalgia score (difference 4.37) or on the symptom-specific score (3.89). The neat thing the authors did was determine whether patients resumed taking statins after reviewing the results of their N-of-1 trial: 5 of the 8 patients resumed statins (one didn’t because a statin was no longer indicated).

So are statin-related myalgias mostly in our patients’ heads? Maybe. This study is by no means definitive because it enrolled only 8 patients, but it at least suggests a methodology you can use to truly test whether a patient’s symptoms are statin related. This is important to consider because the most recent lipid treatment guidelines focus on using statins only, not substituting other agents like ezetimibe or cholestyramine. So give this methodology a try. You and your patients will likely be amazed at what you find.

Conflicts of Interest in Online Point of Care Websites

Full disclosure: I was a Society of General Medicine UpToDate reviewer several years ago and received a free subscription to UpToDate for my services. I also use UpToDate regularly (through my institution’s library).

Amber and colleagues published an unfortunate finding this week in the Journal of Medical Ethics. They found that UpToDate seems to have some issues with conflicts of interest among some of its authors and editors.

UpToDate makes this claim on its Editorial subpage: “UpToDate accepts no advertising or sponsorships, a policy that ensures the integrity of our content and keeps it free of commercial influences.” Amber and colleagues’ findings would likely dispute the claim that the content is “free of commercial influences”.

Amber and colleagues reviewed the Dynamed and UpToDate websites for COI policies and disclosures. They searched only a limited number of conditions on each site (male sexual dysfunction, fibromyalgia, hypogonadism, psoriasis, rheumatoid arthritis, and Crohn’s disease), but their reasoning seems solid: treatment of these entities can be controversial (the first 3) or primarily involves biologics (the last 3). It seems reasonable that expert opinion could dominate recommendations on male sexual dysfunction, fibromyalgia, and hypogonadism, and that those experts could be conflicted. (Editorial side note: few doctors recommend stopping offending medications or offer vacuum erection devices instead of the “little blue pill”. Most patients don’t even realize there are treatments for ED other than “the little blue pill” and its outdoor-bathtub-loving competitor! But I digress.) The biologics also make sense to me because this is an active area of research, and experts writing these types of chapters could get monies from companies either for research or for speaking.

What did they find? Both Dynamed and UpToDate mandate disclosure of COIs (a point I will discuss later). No Dynamed authors or editors reported any COIs, while quite a few were found for UpToDate. Of the 31 different treatments mentioned across these 6 topic areas, for 14 (45%) the authors, editors, or both had received grant monies from the company making the therapy. Similarly, for 45% of the therapies the authors, editors, or both were consultants for the companies making them. For 5 of the 31 therapies, authors or editors were on the speakers bureaus of the companies making them. What’s most worrisome is that for the psoriasis chapter both the authors and the editors were conflicted. Thus there were no checks and balances for this topic at all!


While finding COIs is worrisome, it doesn’t mean that the overall quality of these chapters was compromised or that they were biased. We don’t know at this time what effect the COIs had; further study is needed. Unfortunately, this is probably just the tip of the iceberg. Many more chapters probably suffer from the same issues. Furthermore, traditional textbooks likely have the same problems.

Disclosing COIs is mostly useless; disclosure doesn’t lessen their impact. I don’t understand why nonconflicted authors can’t be found for these chapters. Do we care so much about who writes a chapter that we potentially compromise ethics for name recognition? Those with COIs should not be allowed to write a chapter on topics for which they have a conflict. Period. If UpToDate is so intent on having them, maybe they could serve as consultants to the authors or as peer reviewers, but even that is stretching it. What really bothers me is that the editors for some of these chapters were also conflicted, thus leaving no check on potential biases. As Amber and colleagues point out, even though these chapters underwent peer review, what do we know about the peer reviewers? Did they have any COIs? Who knows.

So what should all you UpToDate users do? I suggest the following:

  1. Contact UpToDate demanding a change. They have the lion’s share of the market, and until they lose subscribers nothing will likely change.
  2. Check for COIs yourself before using the recommendations in any UpToDate chapter. You should be especially wary if a recommendation begins with “In our opinion….” (an all-too-frequent finding).
  3. Use Dynamed instead. It has a different layout than UpToDate, but it is updated more quickly and seems less conflicted. And it’s cheaper!

JNC 7 or JNC 8: Which Should I Use?

I gave a CME seminar this week on treating hypertension in the elderly, and after my presentation a clinical pharmacist asked me an interesting question: “What do you follow? JNC 7 or JNC 8?”


I thought this was an interesting question and one I hadn’t thought about at all. After all, shouldn’t an updated guideline trump the previous one? I like JNC 8 because its methodology is more explicit and more consistent with IOM principles than JNC 7’s. One can argue with some of the decisions made about the evidence review (i.e., that they only included RCTs and ignored systematic reviews and observational data) and be concerned about the degree of conflicts of interest among the panel members. But what JNC 8 did was make life simpler in that the BP goals are easily remembered: <150/90 for those over 60 years of age and <140/90 for everyone else, including those with diabetes or CKD (regardless of age). So for these reasons I prefer JNC 8. Is it perfect? No, but I suspect they will address many of the concerns critics have expressed, and further questions that need answering, in future updates (which they promise will come in a timely fashion).
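If you like things in algorithm form, the JNC 8 goals really do boil down to a two-line rule. A minimal sketch (the function name and interface are my own illustration, not from the guideline):

```python
# A minimal sketch of the JNC 8 blood-pressure goals described above;
# the function name and interface are illustrative only.
def jnc8_bp_goal(age):
    """Return the (systolic, diastolic) BP goal in mm Hg under JNC 8.
    Diabetes and CKD do not change the goal, regardless of age."""
    if age >= 60:  # JNC 8 relaxes the goal starting at age 60
        return (150, 90)
    return (140, 90)

# Example: a 72-year-old's goal is <150/90; a 45-year-old with
# diabetes still has a goal of <140/90.
```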

It’s a Sham It Doesn’t Work: Arthroscopic Meniscal Repair

The orthopedic surgeons won’t be happy about this at all. The most common orthopedic procedure is a sham… or should I say, no better than a sham surgery. A study published in the New England Journal of Medicine on December 26th should change the current management of meniscal tears.

Sihvonen and colleagues randomized 70 Finnish patients without knee osteoarthritis to arthroscopic partial meniscectomy and 76 patients to a sham operation. These patients had failed at least 3 months of conventional conservative treatment. Patients with traumatic onset of symptoms or with osteoarthritis were excluded. The authors did something very interesting that both allowed them to more easily perform a sham operation and got around the ethics of a sham operation: they did a diagnostic arthroscopy, and it was during this diagnostic arthroscopy that the patient was randomized. At this point either a standard meniscectomy was performed or an elaborate sham. Everything was done the same postoperatively as far as wound care, rehab instructions, etc. The patients, those who determined the outcomes, and those who collected and analyzed the data were all blinded to study group assignment. The main outcome measure was knee pain after exercise (at 2, 6, and 12 months) using a validated scale and a validated meniscus-specific quality of life instrument. They also asked patients at 12 months if they would be operated on again and if they had figured out which arm of the trial they were in.

Prior to looking at the results of this study we need to make sure the study is scientifically valid. Therapeutic studies should meet the following criteria:

  1. Were participants randomized? YES
  2. Was random allocation method concealed? YES (the authors used opaque envelopes)
  3. Was intention to treat analysis used? YES
  4. Were the groups similar at the start of the study? YES
  5. Was blinding adequate? YES (only the operating room staff weren’t blinded but they didn’t participate in outcome determination)
  6. Were the groups treated equally apart from the intervention? YES
  7. Was follow-up sufficiently long and complete? YES

So I think this study is low risk for bias and I can now move to the results.

[Figure: patient-reported outcomes in the meniscectomy and sham-surgery arms]

This figure shows that the 2 treatment arms had the same effect on the validated measures and on the knee pain Likert scale. 83% of sham-surgery patients reported improvement in knee pain compared to 88.6% of meniscectomy patients. Furthermore, 96% and 93% of patients, respectively, reported they would undergo the same procedure again. 5% of sham-surgery patients underwent additional arthroscopy compared to 1.4% of meniscectomy patients (p = NS).

While this study is small, it was adequately powered for its outcomes. One could argue about the ethics of a sham operation, but this methodology is the only way to truly determine whether some procedures “work”. Even procedures that don’t really work can have a significant placebo effect, and a sham-controlled study is a powerful way to control for that effect.

It’s important to point out that these results apply only to patients with non-traumatic degenerative meniscal tears. Interestingly, the authors did a post-hoc subgroup analysis showing that patients with sudden onset of symptoms didn’t have any different outcomes with meniscectomy versus sham surgery. There are major limitations with any post-hoc analysis, but this suggests that a similar study now needs to be done in patients with traumatic injuries.

What’s The Evidence For That? A new series I am starting

One thing I have noticed is that current residents don’t seem to know the evidence supporting a lot of the treatments they use, especially if the studies were done before they started med school. I also don’t think they really want to read articles from the past when there are so many new things that excite them more. So I came up with a new series of data summaries I am going to make for teaching purposes during rounds, to remind the residents and students that there is evidence behind some of what we do. I designed them to be one-pagers that answer what I feel are the key questions on the particular topic I am covering. I also try to follow Haynes’ 6S hierarchy and focus on evidence higher up the pyramid (that isn’t UpToDate or Dynamed). I want to hit the less sexy topics that we encounter a lot on the inpatient medicine service, like COPD exacerbation, hepatic encephalopathy, etc.

So here’s the first one I made: What’s The Evidence For That: Steroids for COPD Exacerbation. (Steroids for AECOPD) This took about 1.5-2 hours to make, mostly because I had to figure out how to make the template in Publisher do what I wanted it to do (and it fought me all the way).

Feel free to copy it and use it in your clinical teaching. Let me know if it is useful and how it could be made better. If you make any, share them with me.

As I make more of these I will publish them here. I also plan to make Touchcasts of them and will post those here when I do.

 

Journal Club- The UAB Experience

Just about every internal medicine residency program has a journal club. One could argue about the evidence behind this activity, but it seems to serve its purpose, if for nothing else than to make housestaff read some journal articles (and not just UpToDate!). I think it does serve a purpose in encouraging critical appraisal of, and critical thinking about, research publications. Doctors will always have to read new research studies. It takes time for studies to be incorporated into secondary publications like Dynamed and UpToDate, and not everything makes it into these evidence-based resources. Also, research (published in every journal) is full of biases that can lead the findings to depart from the truth. Critical appraisal is the only way to detect them.


Since 1999 or so I have been intimately involved in the journal club at UAB. At times I have run it completely, but now I serve more as a guide and EBM expert for one of the chief residents, who puts it all together. I think it has gotten greater buy-in from the housestaff coming from the CMR instead of me.

So I thought I would cover some of what we have done at UAB. Not that we are the world’s beacon for journal club, but we have tried a lot of things over the years. Some of it failed… some of it succeeded.

Time of day: we have done everything from 8am to noon to evenings at a faculty member’s house. What has gotten the best turnout is 8am, before the residents’ day gets started.

Article Selection: This has been a debatable topic since day 1. We have done several things:
1) Latest articles in major journals
2) Rotating subspecialty articles (one month cardiology, one month GI, etc)
3) Article chosen by resident based on problems they saw during patient care
4) Article chosen by me to prove an EBM principle
5) Now we seem to be focusing on articles written by UAB faculty so that they can come as an expert guest.
6) We are considering using classics in medicine articles that are the foundation of what we do (eg first article on ACE inhibitors in CHF) because current residents are unlikely to ever read these articles.

Format: We seem to vary this almost yearly:
1) Faculty reviews article and asks questions of the housestaff about what various things mean
2) Teams of residents argue for or against using a drug, etc against another team of residents
3) Each individual reads the article and comes to JC not knowing what they could potentially be asked
4) A handout with Users Guides questions and a few other questions on design or applying the information is given out ahead of time but is only discussed by those willing to answer
5) Same handout given but with individual residents assigned specific questions to answer (this was the first time we could show that the residents actually read the paper ahead of time)
6) Groups of residents work on questions outside of JC on their own time (usually a 3rd-year resident is assigned to coordinate the group meeting), with the expectation that they will teach the other groups at JC. (This actually worked pretty well.)
7) Last year we went to a flipped-learning format where I put a lot of material on edmodo.com that the residents were to review ahead of time (if they needed to), with assigned questions to be answered by individual residents. They felt this was too much work to go through all the material online.
8) This year we have moved to perhaps our most successful format (from a resident-satisfaction standpoint), where a handout of questions is answered in JC as a group project. A faculty expert gives a very short didactic talk at 2 points during JC on a very specific EBM topic related to the article (e.g., what is a likelihood ratio). The only expectation is that the article is read prior to JC. We still use somewhat of a flipped format, where I reference a short video or 2 to watch about topics in the chosen article, but it’s much less time intensive than last year.

I think overall what has been successful for us is when JC has the following elements:
1) Group work. Engaged learning is always desirable.
2) Clinical and EBM faculty expert present. Seems to give the article a little more value.
3) Case-based. We always solve a real world problem. I always tell the CMR making up JC to make sure the residents walk away with something they can use clinically.
4) Flipped light: giving the residents some information about EBM principles, but not too much, leads to many of them actually watching the videos or reading background papers. They come much more prepared and have a good basic knowledge that we can then build upon.