What can we do to reduce the amount of research that goes unpublished?

An excellent piece was published by Paul Glasziou and Iain Chalmers about the large percentage of research that goes unpublished. As they note, the estimate that 50% goes unpublished is likely an underestimate. Unfortunately, they didn’t offer any significant solutions beyond calling for a “better understanding of the causes of, and cures for, non-publication.”

A “simple” solution is for the drug/device approval process to require that all studies related to the product, and/or conducted or funded by the requesting company, be registered and published. This would miss studies done after approval or by independent parties, but a large share of unpublished studies are conducted or funded by the companies that market the drug/device. It would also miss studies not directly related to drugs/devices (e.g., epidemiological studies).

Another significant challenge is where to publish this information. The web makes the most sense, as it is the cheapest route of publication. Perhaps the FDA (or some international commission) could maintain a page for each drug/device with full-text access to all studies done on it. Would these need peer and editorial review? Yes, but that would be a daunting task, as we already struggle to find willing and competent peer reviewers. The FDA’s budget keeps shrinking, and this would add a significant financial burden.

What I really wanted to do in this post was to give my thoughts on a question raised by Jon Brassey (Director of the TRIP Database):

  • Which is better: a large RCT or an SR based on a “biased subsample”?

Is a large RCT more desirable than a systematic review (SR) based on a biased subsample of studies? This has been a conundrum for some time, and you can argue both sides. The reason he says “biased subsample” is that we know positive studies get published more often than negative ones, and large effects more often than small ones. Is the answer “it depends”? It depends on your goals: a more precise estimate of the (biased) effect favors the SR, greater generalizability favors the SR, and a potentially more methodologically sound result favors the RCT. It is also worth remembering that the same study repeated over and over will produce a distribution of results, which is why it shouldn’t surprise us that seemingly identical studies don’t give identical results. Should we repeat studies? And when should we stop repeating them (i.e., when have we adequately characterized the distribution of results)?
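
A quick simulation can make both points concrete: repeated small trials of the same intervention scatter around the true effect, and a pooled estimate built only from the “published” (significant) ones is precise but inflated, while a single large RCT lands near the truth. This is only a minimal sketch; the true effect size, sample sizes, and the crude “publish only if significant” rule are my own illustrative assumptions, not anything from the studies discussed above.

```python
# Minimal sketch: publication bias in a pooled estimate vs. one large RCT.
# Assumptions (illustrative only): true effect = 0.2 SD, 40 small trials of
# 50 per arm, one large RCT of 2000 per arm, and a crude rule that only
# nominally significant positive trials get "published".
import numpy as np

rng = np.random.default_rng(42)
true_effect = 0.2           # true standardized mean difference
n_small, n_trials = 50, 40  # 40 small trials, 50 participants per arm
n_large = 2000              # one large RCT, 2000 participants per arm

def run_trial(n):
    """Return the estimated effect and its standard error for one two-arm trial."""
    treat = rng.normal(true_effect, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    est = treat.mean() - control.mean()
    se = np.sqrt(treat.var(ddof=1) / n + control.var(ddof=1) / n)
    return est, se

# Repeating the "same" small trial gives a spread of results...
small = np.array([run_trial(n_small) for _ in range(n_trials)])
est, se = small[:, 0], small[:, 1]

# ...and a crude publication filter keeps only significant, positive trials.
published = est / se > 1.96

# Fixed-effect (inverse-variance) pooled estimate from the published subset only.
w = 1.0 / se[published] ** 2
pooled_biased = np.sum(w * est[published]) / np.sum(w)

large_est, _ = run_trial(n_large)

print(f"True effect:                     {true_effect:.2f}")
print(f"Spread across all small trials:  {est.min():.2f} to {est.max():.2f}")
print(f"Pooled estimate, published only: {pooled_biased:.2f} "
      f"({published.sum()}/{n_trials} trials published)")
print(f"Single large RCT estimate:       {large_est:.2f}")
```

Under these assumptions, the pooled estimate from the published subset typically overshoots the true effect by a factor of two or more, which is the sense in which the SR gives a “more precise estimate of the biased effect,” while the large RCT’s estimate is both precise and close to the truth.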

I don’t think we can really answer this question, as both study types have limitations. But if I had to pick one, I would rather have a well-done large RCT than an SR based on a limited subset of the data, especially since we don’t know what is missing or what effects those missing studies found.