This week in the Annals of Internal Medicine another study (http://annals.org/article.aspx?articleid=1359238) has been published showing that biases in study design can lead to inaccurate results. Thus it's really important to critically appraise primary studies. Unfortunately, few doctors take the time to do so (I suspect, though I don't have empirical proof to cite) and, despite EBM skills being taught for a decade now, few probably even remember how to do so.
Savovic and colleagues have made the most comprehensive attempt yet to quantify the effect of 3 design elements on the outcomes of randomized controlled trials: random sequence generation, allocation concealment, and double blinding. First, what the heck do those terms even mean? In a randomized trial, participants are assigned to study groups in a random fashion, akin to a coin flip. No one actually flips a coin; researchers usually use a computer program to generate a random number (random sequence generation), and this number determines the group to which a patient is assigned. For example, if the number is odd the patient goes into the control arm; if the number is even, the intervention arm. The number generation needs to be unpredictable (i.e., random) and not just alternating odd and even numbers. Authors of studies should report enough information on how the random sequence was generated for readers to judge its adequacy. As of 2006, only 34% of PubMed-indexed trials did this adequately.
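To make the odd/even scheme concrete, here is a minimal sketch in Python. The function name and the range of random numbers are my own illustration, not anything from the study or from real trial software:

```python
import random

def generate_allocation_sequence(n_participants, seed=None):
    """Generate an unpredictable allocation sequence.

    Each participant gets a random number; odd means the control arm,
    even means the intervention arm (the example scheme from the text).
    """
    rng = random.Random(seed)  # seeded here only so the sequence can be audited
    sequence = []
    for _ in range(n_participants):
        number = rng.randint(1, 1_000_000)
        arm = "control" if number % 2 == 1 else "intervention"
        sequence.append(arm)
    return sequence

# Roughly half of participants land in each arm, but no one can
# predict the next assignment from the previous ones -- unlike a
# scheme that simply alternates odd and even numbers.
print(generate_allocation_sequence(10, seed=42))
```

The key point is unpredictability: an alternating list would also split patients evenly between arms, but anyone who saw the last assignment could predict the next one.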
We don’t want those trying to enroll a patient into a study to be able to figure out to which arm the patient will be allocated or assigned. We want the allocation concealed. This is blinding of the randomization order or scheme. Concealed allocation helps guard against someone getting preferentially placed in one arm of a trial or another based on their prognosis. We don’t want sicker patients preferentially put in one arm and healthier ones in another. This would clearly bias the findings of the study. In a 2005 study, only 18% of randomized trials indexed in PubMed reported any allocation concealment.
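One common safeguard for concealment (my illustration here, not something described in the study) is central randomization: the assignment is revealed only after a patient has been irrevocably enrolled, so foreknowledge of the next allocation can't influence who gets enrolled. A toy sketch:

```python
import random

class CentralRandomizer:
    """Toy model of concealed allocation: the full sequence exists
    up front, but each assignment is revealed only at enrollment."""

    def __init__(self, n, seed=None):
        rng = random.Random(seed)
        self._sequence = [rng.choice(["control", "intervention"])
                          for _ in range(n)]
        self._next = 0

    def enroll(self, patient_id):
        # The enrolling clinician learns the arm only at this point,
        # after the decision to enroll has already been made.
        arm = self._sequence[self._next]
        self._next += 1
        return patient_id, arm

randomizer = CentralRandomizer(4, seed=1)
print(randomizer.enroll("patient-001"))
```

In real trials this role is played by a central phone or web service, or by sequentially numbered opaque sealed envelopes; the point is that the sequence is hidden from whoever decides which patients to enroll.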
Most doctors understand blinding. What they don't understand is who should be blinded: everyone possible is the short answer. Blinding the trial participants and trial personnel keeps participants from being treated differently based on the arm of the study they are in. But what if you can't blind the patients or the study personnel (for example, in a study of a surgical procedure vs medical management)? You blind the outcomes assessors. Statisticians should also be blinded. Interestingly, Benjamin Franklin is credited with being the first person to blind participants in a scientific study. Blinding is especially important if the outcomes are subjective (for example, quality of life). Conversely, blinding is less important for objective outcomes like death.
Back to the study by Savovic and colleagues. The authors used some sophisticated techniques to acquire and analyze the data, and I won't bore you with the details. Just accept that they did a good job (don't all authors of studies want us to trust them, and don't they usually disappoint us?). What did they find? Inadequate or unclear random sequence generation, allocation concealment, and blinding led to exaggeration of intervention effects by an average of 11%. As expected, the effect was greatest for subjective outcomes. The greatest overestimate of treatment effect was seen with inadequate blinding (23% overestimation), followed by inadequate allocation concealment (18% overestimation).
These kinds of findings always bother me for two reasons:
- We conclude that interventions are better than they really are, and we offer them to patients with the promise of more benefit than they are likely to deliver.
- Why do these flawed studies get published? Why don't reviewers and editors reject them, or at least attach a black box warning that the results are biased? I still can't understand why we publish flawed research without labelling it as such. Why can't researchers just design the study properly in the first place? It's not like the elements of good study design are a secret.
What should doctors do to avoid using biased information?
- Read the pre-appraised literature like ACP Journal Club. The articles published in ACPJC are structured summaries of critically appraised articles. To be published in ACPJC a study has to be methodologically sound and clinically important. Articles with important methodological weaknesses will not be published.
- Find answers to questions in evidence-based textbooks, like Dynamed (https://dynamed.ebscohost.com/)
- If you have to read primary studies, CRITICALLY APPRAISE THEM! It's not hard. Each study design has its own set of questions against which you should judge the quality of the study (http://ktclearinghouse.ca/cebm/practise/ca). If you find the study is flawed, either throw it away and find another one, or recognize that biases almost always result in overestimation of treatment benefits and adjust your expectations accordingly.