A very interesting study was published earlier this month in the Journal of Clinical Epidemiology assessing how publication bias is reported in systematic reviews published in high-impact-factor journals. Publication bias refers to the phenomenon that statistically significant positive results are more likely to be published than negative results; they also tend to be published more quickly and in more prominent journals. Publication bias matters because the goal of a systematic review is to systematically search for and find all studies on a topic (both published and unpublished) so that an unbiased estimate of effect can be determined from the full set of studies (both positive and negative). If only positive studies, or a preponderance of positive studies, are published and only these are included in the review, then a biased estimate of effect will result, as the toy simulation below demonstrates.
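To see the mechanics, here is a minimal simulation (mine, not from the paper) of what happens when only statistically significant positive trials get published. The true effect, trial sizes, and significance cutoff are all illustrative assumptions, but the point generalizes: the average of the "published" trials lands well above the truth.

```python
# Toy simulation of publication bias (illustrative assumptions only):
# if only trials with statistically significant positive results reach
# print, the mean published effect exceeds the true effect.
import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.2          # assumed true standardized mean difference
n_per_arm = 50             # assumed participants per arm in each trial
n_trials = 2000

se = np.sqrt(2.0 / n_per_arm)                     # approx. SE of each trial's estimate
observed = rng.normal(true_effect, se, n_trials)  # each trial's observed effect
z = observed / se
published = z > 1.96                              # only "significant positive" trials get published

print(f"True effect:              {true_effect:.3f}")
print(f"Mean of all trials:       {observed.mean():.3f}")
print(f"Mean of published trials: {observed[published].mean():.3f}")
```

The mean of the published subset comes out far above the true effect of 0.2, which is exactly the inflation a systematic review inherits when it can only locate the published studies.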
Onishi and Furukawa’s study is the first to examine the frequency of significant publication bias in systematic reviews published in high-impact-factor general medical journals. They identified 116 systematic reviews published in the top 10 general medical journals in 2011 and 2012: NEJM, Lancet, JAMA, Annals of Internal Medicine, PLOS Medicine, BMJ, Archives of Internal Medicine, CMAJ, BMC Medicine, and Mayo Clinic Proceedings. For each systematic review that did not report its own assessment of publication bias, they tested for it using the Egger test of funnel plot asymmetry (a simple version is sketched after this paragraph), contour-enhanced funnel plots, and trim-and-fill analysis. RESULTS: The included systematic reviews were of moderate quality, as shown in the graph below. About a third of “systematic reviews” didn’t even perform a comprehensive literature search, while 20% didn’t assess study quality. Finally, 31% of systematic reviews didn’t assess for publication bias. How can you call your review a systematic review when you don’t perform a comprehensive literature search and you don’t determine whether you missed studies?
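For readers unfamiliar with the Egger test, here is a rough sketch of the idea in Python: regress each study's standardized effect (effect divided by its standard error) on its precision (one over the standard error); an intercept far from zero signals funnel plot asymmetry. This is a simplified illustration, not the authors' analysis code, and the function and example values are my own.

```python
# Simplified sketch of the Egger regression test for funnel plot
# asymmetry. Effects are assumed to be on an additive scale
# (e.g. log odds ratios). Not the authors' implementation.
import numpy as np
import statsmodels.api as sm

def egger_test(effects, std_errors):
    """Return the Egger intercept and its two-sided p-value."""
    effects = np.asarray(effects, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    standardized = effects / se     # standard normal deviate per study
    precision = 1.0 / se            # large studies have high precision
    X = sm.add_constant(precision)  # model: standardized ~ intercept + precision
    fit = sm.OLS(standardized, X).fit()
    return fit.params[0], fit.pvalues[0]

# Hypothetical example: five small positive trials and one large neutral one
effects = [0.9, 0.8, 0.7, 0.6, 0.5, 0.1]
std_errors = [0.40, 0.35, 0.30, 0.25, 0.20, 0.05]
intercept, p = egger_test(effects, std_errors)
print(f"Egger intercept = {intercept:.2f}, p = {p:.3f}")
```

In an unbiased literature, small and large studies scatter symmetrically around the same effect and the intercept stays near zero; when small positive studies dominate, as in the hypothetical example above, the intercept is pulled upward.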
Of the 36 reviews that did not report an assessment of publication bias, 7 (19.4%) had significant publication bias. Put another way, if a systematic review didn’t report an assessment of publication bias, there was about a 20% chance publication bias was present. The authors then assessed what impact publication bias had on the pooled results and found that the pooled estimate was OVERESTIMATED by a median of 50.9% because of publication bias (the sketch below shows how such a figure can be computed). This makes sense: mostly positive studies are published and negative studies aren’t, so you would expect the pooled estimates to be overly optimistic.
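As a concrete illustration of what a 50.9% overestimate means, one can compare the pooled estimate from the published studies against a bias-adjusted estimate. The definition and numbers below are my assumptions for illustration, not taken from the paper, whose exact adjustment method may differ.

```python
# Illustrative only: how a percent-overestimation figure can be derived
# by comparing a reported pooled estimate against a bias-adjusted one.
# Values are made up; the paper's exact definition may differ.
def percent_overestimation(reported, adjusted):
    return 100.0 * (reported - adjusted) / adjusted

# e.g. a reported pooled log odds ratio of 0.60 vs an adjusted 0.40
print(f"Overestimated by {percent_overestimation(0.60, 0.40):.1f}%")  # 50.0%
```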
The figure below reports the results for individual journals. JAMA had significant publication bias in 50% of the reviews that didn’t assess it, while Annals of Internal Medicine had 25% and BMJ 10%. It is concerning that these high-impact journals publish “systematic reviews” of only moderate quality, and that a significant number of those reviews don’t report any assessment of publication bias.
Bottom Line: Always critically appraise systematic reviews, even those published in high-impact journals. Don’t trust that an editor, even of a prestigious journal, did their job… they likely didn’t.