
Scientific Reproducibility: Does This Pose a Problem for 21st Century Toxicology?

By David Faulkner posted 03-15-2016 16:31


In a word: Yes! Efforts by pharmaceutical companies to reproduce published findings have yielded grim statistics: Amgen could reproduce the results of only six of 53 papers, and Bayer only 14 of 67. The first speaker of the "Scientific Reproducibility: Does This Pose a Problem for 21st Century Toxicology" session, Glenn Begley of TetraLogic Pharmaceuticals, described the scope of the problem and detailed a number of ways that “scientific sleight-of-hand” is used to make papers appear more robust than they actually are. Begley guided the audience through several papers, picking apart methodological mistakes, misleading graphs, and questionable statistical techniques that are sadly all too common in the published literature, even in reputable journals. The thread introduced by Begley, and carried through the workshop by the other speakers, is that these sleights of hand represent a systemic problem with substantial consequences for researchers, funding agencies, and the public at large.

As Alan Boobis explained in his presentation, under constant pressure to publish and chase funding, it can be tempting to let slip some of the best practices we were taught as students, and this temptation becomes a systemic crisis when published papers aren’t held to the same standard as student coursework. When these studies are used to inform policy decisions and regulatory practices, the potential for harm is significant. Acceptable Daily Intake values, for example, are based on “all the known facts” at the time of evaluation. We can’t wait for perfect knowledge, since we’ll never have it, but we should be able to rely on the science that informs those decisions.

Speakers Martin Stephens and Judy LaKind provided insight into how we might improve reliability and reproducibility in toxicological and epidemiological studies. They agreed with Boobis that transparency, consistency, and rigorous objectivity must inform the peer review process, and that plenty of guidelines and best-practice documents already exist but are ignored, in whole or in part, in many publications. Stephens and LaKind also emphasized the importance of reducing bias before and during the publication process through sound experimental design and statistical methodology.

The number of empty seats at the session was disheartening, because these speakers addressed aspects of scientific publishing that researchers prefer not to think about but that are central to the scientific enterprise. Accurate and complete data reporting may not make it easier to publish, given that journals prefer positive results, but it is vital for the scientific community to know what has been tried and what hasn’t.

We all think of ourselves as rational people, and it can be uncomfortable to think that perhaps we aren’t entirely objective when publishing our work. But people are inherently biased emotional creatures, and we have to work hard to overcome these internal biases when conducting and publishing research. The constant battles for funding and publications create an environment where it’s easy to let these biases get the better of us, and as a result, we may tweak a graph to make it look more convincing or omit data that doesn’t support the story that we want to tell. The key, according to this session’s speakers, is to be aware of these temptations and biases; take steps to prevent them from entering our work; and call them out when we’re reviewing the work of others.

The scientific community has a problem with reproducibility. However, as evidenced by this session, we are well equipped to tackle it—but we first need to acknowledge that we’ve got a problem.
