Wednesday, October 15, 2008

Are Nature articles most likely to be wrong?

As someone who will never, ever, be published in such a highly regarded journal... I'd probably like to think that only flashy garbage gets published there... as an epidemiological study described in The Economist asserts.
It starts with the nuts and bolts of scientific publishing. Hundreds of thousands of scientific researchers are hired, promoted and funded according not only to how much work they produce, but also to where it gets published. For many, the ultimate accolade is to appear in a journal like Nature or Science. Such publications boast that they are very selective, turning down the vast majority of papers that are submitted to them.

The assumption is that, as a result, such journals publish only the best scientific work. But Dr Ioannidis and his colleagues argue that the reputations of the journals are pumped up by an artificial scarcity of the kind that keeps diamonds expensive. And such a scarcity, they suggest, can make it more likely that the leading journals will publish dramatic, but what may ultimately turn out to be incorrect, research.

Dr Ioannidis based his earlier argument about incorrect research partly on a study of 49 papers in leading journals that had been cited by more than 1,000 other scientists. They were, in other words, well-regarded research. But he found that, within only a few years, almost a third of the papers had been refuted by other studies. For the idea of the winner’s curse to hold, papers published in less-well-known journals should be more reliable; but that has not yet been established.
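The "artificial scarcity" argument above is essentially a selection-bias story: when a journal publishes only the most dramatic results, the published effect sizes will systematically overshoot the truth, and follow-up studies will tend to find smaller effects. A minimal sketch of that logic, with entirely made-up numbers (a hypothetical true effect of 0.2 and noisy per-study estimates, not anything from the Ioannidis paper):

```python
import random
import statistics

random.seed(42)

# Hypothetical setup: every study measures the same modest true effect,
# but each study's estimate is noisy.
TRUE_EFFECT = 0.2
NOISE_SD = 0.3
N_STUDIES = 10_000

estimates = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_STUDIES)]

# A "selective journal" publishes only the top 5% most dramatic results.
cutoff = sorted(estimates)[int(0.95 * N_STUDIES)]
published = [e for e in estimates if e >= cutoff]

print(f"true effect:              {TRUE_EFFECT}")
print(f"mean of all estimates:    {statistics.mean(estimates):.3f}")
print(f"mean of published subset: {statistics.mean(published):.3f}")

# The published subset overestimates the true effect by construction,
# so replication attempts will tend to come in lower -- which can look
# like "refutation" even when nobody did anything wrong.
```

Note that in this toy model the underlying data are fine; it's the filtering that creates the regression-to-the-mean effect. Whether that counts as papers being "wrong" is exactly the definitional question raised below.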

Now, I haven't read the article in question, but I see a lot of problems from what's described here. First, what exactly does "refuted" mean? I certainly think there is a lot of garbage out there and ridiculous conclusions drawn from meagre results... but 1/3 of big-deal studies "refuted"? I'm not sure I buy that unless the definition is very specific and narrow. I may indeed have to check this article out to see their definition, but I have a hard time believing that some super-hot article comes out, everybody cites it a million times, and then a couple of years later everyone decides it's bogus. I find it more likely that high-profile fields that can get into Science/Nature are more likely to be contentious, with researchers involved in back-and-forths that go on for years. It certainly strikes me as plausible that in a hot area people will argue about the interpretation of data constantly (we certainly do), but I don't imagine the data themselves are often called into question... which is what I would consider "refutation". I've always been taught that you read the Methods and Results sections and draw your own conclusions, since the Discussion is often, yes, complete BS.

The second problem I have is that work published in "less-well-known journals" might never get "refuted" simply because nobody cares enough to bother. I suppose you could get around that by restricting the comparison to papers that were heavily cited despite appearing in lower-rep journals... though that raises other selection-bias problems, I think.

Though, I will say that I've long believed that scientific publishing is an area with tons of systemic flaws, so it's nice to see that somebody is really concerned with examining it.