Scientific research, according to a recent book on experimental design, is the process of determining some property Y about some thing X, to a degree of accuracy sufficient for another person to confirm this property Y.
In other words: If it needs to be accurate enough for someone else to confirm it, it needs to be reproducible. Therefore, what distinguishes research from playful observation is reproducibility.
But reproducibility seems to be in a crisis. The oncology research team at Amgen failed to reproduce 47 of 53 published preclinical cancer studies. None of the effects of more than 100 compounds initially reported to lengthen life span in a mouse model of Amyotrophic Lateral Sclerosis (ALS) could be reproduced, and none were successful in human trials. And the number of retracted studies has been increasing.
While most studies were retracted because of misconduct, 21% of the retractions were related to sloppy science—contamination of reagents, mistakes in statistical analyses, or the authors' inability to reproduce their own data, according to a 2014 study.
And these retractions are likely only the tip of the iceberg. Case in point: Only 1.4% of retractions are attributed to contaminated cell lines, even though studies have shown that about 15% of cell lines are contaminated with other cells, and 10-15% of cell cultures are contaminated with mycoplasma, a tiny bacterium.
This suggests that erroneous studies are often not retracted. One reason is that researchers who can’t reproduce someone else’s data have almost no place to publish their negative findings, because most journals have little interest in publishing them.
Meanwhile, biological experiments are becoming ever more complex. A single high-throughput experiment can involve processing thousands or millions of data points. This means that experimental design and statistical knowledge are more important than ever, yet training in these areas is often inadequate.
The Burroughs Wellcome Fund has published a handbook to try to address some of these gaps. The handbook, Experimental Quality, discusses the major traps researchers can fall into and how to avoid them: confirmation bias; unreliable reagents; small sample sizes; lack of blinding and randomization; the importance of standards; multiple testing and false positives; and recording and reporting experimental procedures and results.
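The multiple-testing trap is easy to see in a short simulation (a sketch for illustration, not taken from the handbook): when many hypotheses are tested at once and every null hypothesis is actually true, p-values are uniformly distributed, so a fixed significance threshold of 0.05 will flag roughly 5% of tests as "significant" purely by chance. A correction such as Bonferroni (dividing the threshold by the number of tests) suppresses these false positives.

```python
import random

random.seed(0)

m = 1000      # number of independent tests, all with a true null hypothesis
alpha = 0.05  # conventional significance threshold

# Under a true null hypothesis (with a continuous test statistic),
# p-values are uniformly distributed on [0, 1], so we can simulate
# them directly instead of simulating the underlying experiments.
p_values = [random.random() for _ in range(m)]

# Naive thresholding: each test has a 5% chance of a false positive,
# so we expect about alpha * m = 50 spurious "discoveries".
false_positives = sum(p < alpha for p in p_values)

# Bonferroni correction: test each p-value against alpha / m instead.
bonferroni_hits = sum(p < alpha / m for p in p_values)

print(f"Tests run:                {m}")
print(f"Expected by chance alone: {alpha * m:.0f}")
print(f"'Significant' at 0.05:    {false_positives}")
print(f"After Bonferroni:         {bonferroni_hits}")
```

Running the simulation shows on the order of 50 spurious hits at the 0.05 threshold and essentially none after correction, which is why screening thousands or millions of data points without accounting for multiplicity is so prone to false discoveries.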
Download your free copy here: