Thanks to Khadimir for asking my thoughts on John Ioannidis' finding that a majority of scientific claims about sex-based genetic differences between men and women are poorly supported. Here's the news report in Science.
The paper, published in JAMA, reviewed published claims about the genetic basis for sex differences in ailments such as hypertension, schizophrenia and heart attacks. Ioannidis found that the claims were, for the most part, overstated or the support for them was undocumented.
The Wall Street Journal quotes Dr. Ioannidis:
"People are messing around with the data to find anything that seems significant, to show they have found something that is new and unusual."
In defense of the peer-reviewed research:
1. Sometimes it does take "messing around with data" to uncover leads for future research.
2. Sometimes the data that would support a claim is not all contained within the published report, but this doesn't mean that the claim is false.
3. The process of peer review and responding to editors' suggestions can itself stretch claims beyond what the data strictly support.
4. That few of the results have been replicated is not surprising. Researchers do not get credit for replicating someone else's findings. The proof comes in using the results as the basis for more research.
What Ioannidis is right to point out:
1. The pressure to publish can lead to stretched claims even when there is no outright fraud, and it can lead to trivial results being reported as though they are on a stronger footing than they actually are.
2. In medical research, perhaps replication is more important than in some other areas of science.
3. Philip Kitcher has argued that in areas of science where the results most immediately impact human well-being (such as in medicine), scientists should pay more attention to how "well-ordered" their projects are. Are they pursuing a project only for the sake of getting the next grant? Or does it contribute to what we want and need to know? I take it that part of Ioannidis' concern is that researchers comb their results to find anything that passes the test of statistical significance, regardless of whether or not it tells us something worth knowing. This, then, could explain why so few results are replicated and so few false results are retracted. In this light, what Ioannidis raises is a concern not of truth and falsity but of efficiency.
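The worry about combing results for statistical significance can be made concrete with a toy simulation (my own illustration, not anything from Ioannidis' paper): if a researcher runs many comparisons on data where no real effect exists, roughly 5% of them will still clear the conventional p < 0.05 threshold by chance alone.

```python
import math
import random

def two_sample_z_pvalue(a, b):
    """Two-sided z-test p-value for equal means, assuming known unit variance."""
    n, m = len(a), len(b)
    diff = sum(a) / n - sum(b) / m
    se = math.sqrt(1 / n + 1 / m)
    z = diff / se
    # Two-sided tail probability of the standard normal: 2 * (1 - Phi(|z|))
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(0)
trials = 1000
alpha = 0.05
false_positives = 0
for _ in range(trials):
    # Both groups are drawn from the SAME distribution: there is no real effect.
    a = [random.gauss(0, 1) for _ in range(30)]
    b = [random.gauss(0, 1) for _ in range(30)]
    if two_sample_z_pvalue(a, b) < alpha:
        false_positives += 1

# Expect on the order of alpha * trials, i.e. dozens of "significant" findings
# despite the complete absence of any true difference.
print(false_positives)
```

None of these chance "discoveries" would replicate, which is exactly the pattern the efficiency concern predicts.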
Finally, I don't think it is particularly suspicious that it is sex-linked claims that were studied--even though sex-linked claims have been the target of feminist critique, e.g. by Anne Fausto-Sterling. The same sort of dynamic is true of other genetic research (and I believe Ioannidis has done similar surveys in epidemiology). However, it is worth thinking about the amount of money that is poured into this sort of research. For some of the diseases studied, such as hypertension, we have quite a lot of understanding about treatment and prevention but still don't treat and prevent for other reasons, such as the shameful lack of access to health insurance in the U.S.
1 comment:
To nitpick a bit, the article discussed sloppy analysis in general and in gender/sex studies in particular as something of immediate importance.
If I have to read another newsmedia article that fudges the difference between "cause" and "correlation" ....