From: ILPI Support <info**At_Symbol_Here**>
Subject: Re: [DCHAS-L] The Economist: Quantitative Research is Often Wrong
Date: Mon, 28 Oct 2013 15:27:41 -0400
Reply-To: DCHAS-L <DCHAS-L**At_Symbol_Here**MED.CORNELL.EDU>
Message-ID: 71D1D93B-5514-4938-9E54-B597363F4148**At_Symbol_Here**
In-Reply-To: <785F30F4-7277-4045-9CD3-8013A52835DF**At_Symbol_Here**>

This article meshes nicely with a paper currently being discussed in the DivCHED CCCE Newsletter (an online conference in which papers pertaining to chemical education are discussed):

If you scroll down to the first comment on that paper, you can read this story:

"More and more, the crucial experimental procedures are buried in dense paragraphs available to the reader in supplementary documents. These supplementary files are often largely ignored by peer reviewers, meaning these important details are often published without critical review. This point is best demonstrated with a recently published Organometallics paper, wherein the supplementary material describing the synthesis of an organometallic compound is published with the following statements: 'please insert NMR data here! where are they? and for this compound, just make up an elemental analysis…' These sentences passed through both the peer review process and an internal review by the editors handling this particular paper without correction."


Still, the irony of economists and other "soft" scientists calling out physical scientists with respect to the reproducibility of results is priceless.

Rob Toreki

Safety Emporium - Lab & Safety Supplies featuring brand names
you know and trust.  Visit us at
esales**At_Symbol_Here**  or toll-free: (866) 326-5412
Fax: (856) 553-6154, PO Box 1003, Blackwood, NJ 08012

On Oct 28, 2013, at 10:49 AM, Ralph B. Stuart <rstuart**At_Symbol_Here**CORNELL.EDU> wrote:

Quantitative Research is Often Wrong

The Economist has a nice analysis of the high probability of wrong quantitative results being published in academic journals. In one example, cancer researchers tried to replicate 53 published studies but could confirm the findings of only 6. In another example, pharmaceutical researchers got the same result only a quarter of the time when repeating 67 so-called "seminal" studies.

The article has a helpful visualization of the statistical outcome of 1,000 research studies under fairly reasonable assumptions: 125 of the studies would be published, containing 80 correct results and 45 wrong results. The remaining 875 studies would have a much higher accuracy rate of about 97%, but because they didn't find anything interesting, they would not be published. Because of this publication bias, only 64% of published results in this example would be true, even though the research protocols produced good accuracy across all studies, published and unpublished alike.
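The arithmetic behind those figures can be reproduced with a short sketch. The prior, power, and false-positive rate below are not stated in this message; they are assumed values (10% of hypotheses true, 80% statistical power, 5% significance threshold) chosen because they are consistent with the 125/80/45 numbers quoted above:

```python
# Sketch of the publication-bias arithmetic described above.
# Assumed inputs (not stated in the original post, but consistent
# with the quoted figures): 1,000 hypotheses, 10% actually true,
# statistical power 0.8, false-positive rate (alpha) 0.05.
total = 1000
true_hypotheses = 100   # 10% of hypotheses are actually true
power = 0.8             # chance a real effect is detected
alpha = 0.05            # chance a null effect looks positive

true_positives = true_hypotheses * power             # correct "interesting" results
false_positives = (total - true_hypotheses) * alpha  # wrong "interesting" results
published = true_positives + false_positives         # only these get published

unpublished = total - published
false_negatives = true_hypotheses * (1 - power)      # real effects that were missed
unpublished_accuracy = (unpublished - false_negatives) / unpublished

print(published)                   # 125.0 studies published (80 right, 45 wrong)
print(true_positives / published)  # 0.64 -> only 64% of published results are true
print(unpublished_accuracy)        # ~0.977 -> unpublished work is ~97% accurate
```

The point the sketch makes concrete: the filter "publish only the interesting results" concentrates the false positives into the published literature, so published accuracy (64%) is far worse than the overall accuracy of the research process.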

Ralph Stuart, CIH
Chemical Hygiene Officer


The content of this page reflects the personal opinion(s) of the author(s) only, not the American Chemical Society, ILPI, Safety Emporium, or any other party. Use of any information on this page is at the reader's own risk. Unauthorized reproduction of these materials is prohibited. Send questions/comments about the archive to
The maintenance and hosting of the DCHAS-L archive is provided through the generous support of Safety Emporium.