A Cartoon About Blinding: Using New Tools Can Be Fun

I had to make a few slides about blinding and decided a cartoon might be a fun, graphic way to display the information. As I am getting a degree in educational technology, I have a proclivity for trying new tools. I found an article on free cartoon-making tools and decided to give one a try. It was intuitive and had reasonable features. I had initially planned to draw my own characters and put masks on them, but in the interest of time I just used the characters already in the program.

Try using new tools when you can and the situation fits. It can be fun and interesting. Always remember that the tool you use should facilitate learning and not just be used because it's cool. I felt a graphic would help learners understand blinding more than a word description alone.

What do you think?

A comic strip with 3 panels showing single blind (patient only), double blind (patient and researchers) and triple blind (patients, researchers, and statisticians).

SPRINT Trial Misunderstood and Misapplied - Part 1 (Not Knowing Who’s in the Study)

The SPRINT Trial was an important trial for the evidence base in hypertension. Previous studies had shown that intensive BP lowering in patients with type 2 diabetes (<120 vs <140 mm Hg) and in patients with a previous stroke (<130 vs <150 mm Hg) resulted in no significant benefit in major cardiovascular events (except for stroke in diabetics). The natural question was whether intensive BP control offers any advantage over less intensive control in patients without diabetes or previous stroke. This became even more important as JNC-8 recommended less stringent goals than previous JNC guidelines.

Unfortunately I have seen physicians I work with and residents become overzealous in extending SPRINT results to other patient groups, especially those the trial excluded. Interestingly, when I question them about SPRINT and who was actually studied, they either assumed it was all patients with HTN (because they hadn’t actually read the inclusion/exclusion criteria) or knew whom it was restricted to but assumed that higher-risk patients with diabetes or prior stroke would gain equal benefit (or even more, which seems intuitive).

So my first point is that you should actually read a study and know who was studied (and, importantly, who wasn’t) before you start using it.

This seems like an intuitive statement, but many of my colleagues and trainees simply haven’t closely examined the study. They have heard abbreviated results in conferences or from faculty in clinic and assume the findings apply broadly.

So who was in SPRINT? To be included, a patient had to be at least 50 years of age, have a systolic BP of 130-180 mm Hg, and be at increased risk of cardiovascular disease (clinical or subclinical CVD, CKD with eGFR 20-59 ml/min, Framingham 10-yr risk >15%, or age over 75 years). Patients with diabetes and prior stroke were excluded. Let's see what they looked like by checking out Table 1.

SPRINT Table 1 (baseline characteristics)

These patients had pretty good baseline blood pressures and were already on almost 2 antihypertensive meds at the start. They had fairly good lipid profiles, and around 43% were on statins. The majority were nonsmokers, and the 10-yr Framingham risk was around 20%. These patients are somewhat healthier than the patients I see.

Point 2: Compare patients in the study to who you see. Are they sicker or healthier? How would you adjust the results to fit your patients?

Don’t assume the study enrolled the average patient or that your patients will be just like those in the study.

In Part 2 I’ll analyze the intervention and outcome measures of the study.

 

What to do when evidence has validity issues?

I often wonder how different clinicians (and EBM gurus) approach the dilemma of critically appraising an article only to find that it has a flaw (or several). For example, a common flaw is lack of concealed allocation in a randomized controlled trial. Empirical studies show that the effects of experimental interventions are exaggerated by about 21% [ratio of odds ratios (ROR): 0.79, 95% CI: 0.66–0.95] when allocation concealment is unclear or inadequate (JAMA 1995;273:408-12).


So what should I do if a randomized trial doesn’t adequately conceal the allocation scheme? I could discard the study completely and look for another one. But what if there isn’t another study? Should I ignore the data of an otherwise perfectly good study? I could use the study but adjust the findings down by 21% (see above for why); if the effect of the intervention still crosses my clinically important threshold, I would implement the therapy. Or I could use the study as is and assume the flaw wasn’t important because the reviewers and editors apparently didn’t think it was. This is foolish, as many of them probably didn’t even recognize the flaw, nor would many of them understand its impact.

I don’t have the right answer but wonder what more learned people do. I personally adjust the findings down and then decide whether I still want to use the information. The problem with this approach is that it assumes the estimate of effect in the particular study I am reviewing is in fact biased…something I can’t really know.
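For concreteness, here is a rough sketch in Python of the adjustment I have in mind. It assumes the average exaggeration from the Schulz data (ROR 0.79) applies to the single trial in front of me, which, as noted above, is exactly the assumption I can't verify. The function name and the example odds ratio of 0.60 are hypothetical, for illustration only.

```python
def adjust_or_for_concealment(observed_or, ror=0.79, ror_ci=(0.66, 0.95)):
    """Deflate an odds ratio from a trial with unclear/inadequate allocation
    concealment, assuming the average exaggeration reported in the empirical
    work cited above (ROR 0.79) applies to this particular trial.

    Dividing by the ROR pulls a 'too good' OR back toward 1.0."""
    adjusted = observed_or / ror
    # Range implied by the 95% CI around the ROR itself
    bounds = sorted(observed_or / r for r in ror_ci)
    return adjusted, bounds

# Hypothetical example: a trial with unclear concealment reports OR 0.60
adj, (low, high) = adjust_or_for_concealment(0.60)
print(f"Observed OR 0.60 -> adjusted OR {adj:.2f} (plausible range {low:.2f}-{high:.2f})")
# 0.60 becomes roughly 0.76 (0.63-0.91); if that still clears my clinically
# important threshold, I would consider acting on the study.
```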

What do you do?

Learning materials for “Make Your PowerPoints Evidence-Based” workshop

I did a workshop on how to design multimedia slides to be consistent with Mayer’s Cognitive Theory of Multimedia Learning. The workshop materials are below.

 

https://docs.google.com/presentation/d/1zNPsZNNODQxywAH9QZxi6mSgrxh4P08UYThO5ckfP5Q/edit?usp=sharing : These are the Google Slides I used for the workshop.

 

Here are 2 handouts that I used:

1. Goals of Instructional Design Handout: reviews methods to reduce extraneous cognitive load, manage intrinsic cognitive load, and foster germane cognitive load

2. Make Your PowerPoints Evidence-Based handout used during the workshop

Is tinzaparin better than warfarin in patients with VTE and cancer or not?

The CATCH trial results were published this week in JAMA; the abstract is below. Do you think this drug is useful for venous thromboembolism (VTE) treatment?

Importance  Low-molecular-weight heparin is recommended over warfarin for the treatment of acute venous thromboembolism (VTE) in patients with active cancer largely based on results of a single, large trial.

Objective  To study the efficacy and safety of tinzaparin vs warfarin for treatment of acute, symptomatic VTE in patients with active cancer.

Design, Settings, and Participants  A randomized, open-label study with blinded central adjudication of study outcomes enrolled patients in 164 centers in Asia, Africa, Europe, and North, Central, and South America between August 2010 and November 2013. Adult patients with active cancer (defined as histologic diagnosis of cancer and receiving anticancer therapy or diagnosed with, or received such therapy, within the previous 6 months) and objectively documented proximal deep vein thrombosis (DVT) or pulmonary embolism, with a life expectancy greater than 6 months and without contraindications for anticoagulation, were followed up for 180 days and for 30 days after the last study medication dose for collection of safety data.

Interventions  Tinzaparin (175 IU/kg) once daily for 6 months vs conventional therapy with tinzaparin (175 IU/kg) once daily for 5 to 10 days followed by warfarin at a dose adjusted to maintain the international normalized ratio within the therapeutic range (2.0-3.0) for 6 months.

Main Outcomes and Measures  Primary efficacy outcome was a composite of centrally adjudicated recurrent DVT, fatal or nonfatal pulmonary embolism, and incidental VTE. Safety outcomes included major bleeding, clinically relevant nonmajor bleeding, and overall mortality.

Results  Nine hundred patients were randomized and included in intention-to-treat efficacy and safety analyses. Recurrent VTE occurred in 31 of 449 patients treated with tinzaparin and 45 of 451 patients treated with warfarin (6-month cumulative incidence, 7.2% for tinzaparin vs 10.5% for warfarin; hazard ratio [HR], 0.65 [95% CI, 0.41-1.03]; P = .07). There were no differences in major bleeding (12 patients for tinzaparin vs 11 patients for warfarin; HR, 0.89 [95% CI, 0.40-1.99]; P = .77) or overall mortality (150 patients for tinzaparin vs 138 patients for warfarin; HR, 1.08 [95% CI, 0.85-1.36]; P = .54). A significant reduction in clinically relevant nonmajor bleeding was observed with tinzaparin (49 of 449 patients for tinzaparin vs 69 of 451 patients for warfarin; HR, 0.58 [95% CI, 0.40-0.84]; P = .004).

Conclusions and Relevance  Among patients with active cancer and acute symptomatic VTE, the use of full-dose tinzaparin (175 IU/kg) daily compared with warfarin for 6 months did not significantly reduce the composite measure of recurrent VTE and was not associated with reductions in overall mortality or major bleeding, but was associated with a lower rate of clinically relevant nonmajor bleeding. Further studies are needed to assess whether the efficacy outcomes would be different in patients at higher risk of recurrent VTE.

When I approach a study with marginally negative results I consider several things to help me decide if I would still prescribe the drug:

  1. Was the study powered properly? Alternatively, were the assumptions made in the sample size calculation reasonable? Sample size calculations require several inputs, chiefly the desired power, the type 1 error rate, and the expected difference in event rates between the arms of the trial. The usual offender is authors overestimating the benefit they expect to see. Here the authors expected a 50% relative reduction in event rates between the 2 arms, which seems high but is consistent with a meta-analysis of similar studies and the CLOT trial. They only saw a 31% reduction. Detecting that smaller difference would have required more patients, so as run the study was underpowered (post hoc power 41.4%); a rough sample size sketch follows this list.
  2. How much of the confidence interval is on the side of benefit? Most of the CI in this case is below 1.0 (0.41-1.03), so I pay more attention to that than to the p-value (0.07). The interval is compatible with as much as a 59% reduction in the hazard of VTE and with at most a 3% increase. That is a clinically important potential reduction in VTE.
  3. What are the pros and cons of the therapy? Preventing VTE is important, and the risk of bleeding was lower with tinzaparin. Had the bleeding been higher, I might have had different thoughts about prescribing this drug.
  4. Are the results of this trial consistent with previous studies? If so, then I fall back on the trial being underpowered and would likely prescribe the drug. A meta-analysis of 7 studies found a similar reduction in VTE (HR 0.47).
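To make point 1 concrete, here is a rough sketch of the arithmetic in Python. It uses a crude two-proportion normal approximation rather than the authors' actual time-to-event calculation; the 10.5% control event rate, the planned 50% relative reduction, and the roughly 450 patients per arm come from the abstract and discussion above, while the function names are my own.

```python
from scipy.stats import norm

def n_per_arm(p_control, rel_reduction, alpha=0.05, power=0.80):
    """Approximate patients per arm for a two-proportion comparison
    (normal approximation), given a control event rate and the relative
    reduction the investigators expect to detect."""
    p_tx = p_control * (1 - rel_reduction)
    p_bar = (p_control + p_tx) / 2
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p_control * (1 - p_control) + p_tx * (1 - p_tx)) ** 0.5) ** 2
    return num / (p_control - p_tx) ** 2

def post_hoc_power(p_control, p_tx, n, alpha=0.05):
    """Approximate power of a completed two-arm trial with n patients per arm."""
    z_a = norm.ppf(1 - alpha / 2)
    se = (p_control * (1 - p_control) / n + p_tx * (1 - p_tx) / n) ** 0.5
    return norm.cdf(abs(p_control - p_tx) / se - z_a)

# Planning assumption: ~10.5% control event rate, 50% relative reduction.
# Roughly 400-450 per arm suffices, so 900 patients looked adequate on paper.
print(round(n_per_arm(0.105, 0.50)))
# What CATCH actually saw: 10.5% vs 7.2% with ~450 per arm -- power is only
# about 0.42, close to the 41.4% post hoc power quoted above.
print(round(post_hoc_power(0.105, 0.072, 450), 2))
```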

Thus, I think the study was underpowered for the event rates they encountered. Had more patients been enrolled, they likely would have found a statistically significant difference between groups, and I would not anticipate the results shifting from benefit to harm. It is likely the patients in this trial were “healthier” than patients in the previous trials. I feel comfortable saying tinzaparin is likely beneficial, and I would prescribe it.

This demonstrates the importance of evaluating the confidence interval and not just the p-value; more information can be gleaned from the confidence interval than from the p-value alone.
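As an illustration of that last point, here is a small Python sketch that backs the standard error of log(HR) out of the reported 95% CI and asks how much of the approximately normal sampling distribution around the estimate lies on the benefit side of HR 1.0. This is not a Bayesian "probability of benefit", just a way of visualizing where the interval sits relative to no effect; the function name is mine.

```python
from math import log
from scipy.stats import norm

def fraction_below(hr, ci_low, ci_high, threshold=1.0):
    """Back out the SE of log(HR) from a reported 95% CI, then estimate the
    fraction of the approximately normal sampling distribution (centered on
    the point estimate) lying below a threshold such as HR < 1.0."""
    se = (log(ci_high) - log(ci_low)) / (2 * 1.96)
    z = (log(threshold) - log(hr)) / se
    return norm.cdf(z)

# CATCH primary outcome: HR 0.65 (95% CI, 0.41-1.03), p = .07
print(round(fraction_below(0.65, 0.41, 1.03), 2))  # ~0.97
# Nearly all of the interval sits on the benefit side of 1.0, which is far
# more informative than the bare statement "p = .07, not significant".
```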

Do lipid guidelines need to change just because there is a new, expensive drug on the market? NO!

Shrank and colleagues published a viewpoint online today positing that lipid guidelines should return to LDL-based targets. I think they are wrong. They use two studies to support their assertion.

First they use the IMPROVE IT study. In this study, patients hospitalized for ACS were randomized to a combination of simvastatin (40 mg) and ezetimibe (10 mg) or to simvastatin (40 mg) and placebo (simvastatin monotherapy). The LDLs were already pretty low in this group: baseline LDL cholesterol had to be between 50 and 100 mg per deciliter if patients were receiving lipid-lowering therapy, or 50 to 125 mg per deciliter if not (average baseline LDL was 93.8 mg/dl). The results show minimal benefits, as demonstrated below:

IMPROVE IT results

Current guidelines would recommend a high-potency statin in this patient population. Adding ezetimibe to a moderate-dose statin is probably equivalent to a high-potency statin (from an LDL-lowering perspective). This study (and all ezetimibe studies) should have tested the difference between simva 40-ezetimibe 10 and simva 80 mg, or atorvastatin 40 or 80 mg. So to me IMPROVE IT doesn’t PROVE anything other than that a more potent statin leads to fewer cardiovascular events…something we already know.

Now on to the 2nd argument. They argue that alirocumab (Praluent), the first in a new class, the proprotein convertase subtilisin/kexin type 9 (PCSK-9) inhibitors, should lead to LDL-guided therapy again. Why? “Early results suggest these drugs have a powerful effect on levels of low-density lipoprotein cholesterol (LDL-C), likely more potent than statins.” A systematic review of studies of this drug shows a mortality reduction, but the comparators in these studies were placebo or ezetimibe 10 mg. Why? We have proven therapy for LDL, and this drug should have been compared to high-potency statins. That study will likely never be done (unless the FDA demands it) because the companies making this drug can’t risk finding that it works only as well as a high-potency statin, or possibly worse. Also, does this class of drugs have anti-inflammatory effects like statins? Are they safer? This is an injectable drug that has to be warmed to room temperature prior to use and is very costly compared to generic atorvastatin.

In my opinion, no guideline should be changed without appropriately designed outcomes studies for the drugs being recommended. In this case, the risk-benefit margin needs to be impressive to justify the cost, as we already have dirt-cheap potent statins.

The authors of this viewpoint make no strong rational argument for changing the guidelines other than that there is a new drug on the market and it might work. Let's see if it does, and at what cost (both monetary and physiologic).

Misconceptions about screening are common. Educate your patients.

An article published online today by JAMA Internal Medicine is very revealing about the misconceptions patients can have about screening, in this case lung cancer screening. The study was conducted at 7 VA sites launching a lung cancer screening program. Participants underwent semi-structured qualitative interviews about health beliefs related to smoking and lung cancer screening. Participants had some interesting beliefs:

    • Nearly all participants mentioned the belief that everyone who is screened will benefit in some way
    • Many participants wanted to undergo screening to see “how much damage” they had done to their lungs
    • Rather than being alarmed by identification of a nodule or suspicious findings requiring monitoring with future imaging, several participants expressed the belief that identification of the nodule meant their cancer had been found so early that it was currently harmless

Image: chest CT (from https://upload.wikimedia.org/wikipedia/commons/3/3f/Thorax_CT_peripheres_Brronchialcarcinom_li_OF.jpg)

It's important to educate our patients on what screening is and isn't. They need to understand the role of screening. I like to ask patients what they expect to get out of screening; it can help you discover their misconceptions. They need to understand that they still need to change behaviors (in this case smoking) even if the screening test is negative. I think we all too often order the screening test just because a clinical reminder tells us to, without thinking about how it could be interpreted by our patients.

Food for thought: What is the false positive rate of CT screening for lung cancer?

Click here and read the results section of this abstract for the answer. Shocking, isn't it?
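I won't give the number away, but as a back-of-the-envelope illustration of why screening a low-prevalence population generates so many false positives, here is a small Python sketch. The sensitivity, specificity, and prevalence values are made up for illustration and are not the figures from the linked abstract.

```python
def screening_yield(sensitivity, specificity, prevalence, n=100_000):
    """For n people screened with an assumed test, count how many positive
    results are true vs false positives."""
    diseased = prevalence * n
    healthy = n - diseased
    true_pos = sensitivity * diseased
    false_pos = (1 - specificity) * healthy
    total_pos = true_pos + false_pos
    return {
        "positive tests": round(total_pos),
        "false positives": round(false_pos),
        "share of positives that are false": round(false_pos / total_pos, 3),
    }

# Hypothetical values for illustration only (not the trial's actual figures):
# sensitivity 0.95, specificity 0.75, lung cancer prevalence 1%
print(screening_yield(0.95, 0.75, 0.01))
# Even with a very good test, most positive screens come from the far larger
# cancer-free group -- which is why patients need to understand what a
# "positive" screen does and does not mean.
```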