Sci/tech MPs want peer review, not pal review
UK science publishing not exactly scientific
The House of Commons Science and Technology Select Committee has called for greater integrity and data disclosure in peer-reviewed literature. It recommends that all UK research institutions should have "a specific member of staff leading on research integrity".
MPs also warn that research quangos should be wary of the journal "Impact Factor" when commissioning new work, and they call for open peer review and pre-publication, saying social network tools can be a boon.
The inquiry into peer review came after the release of the Climategate files, which revealed academics at the University of East Anglia selectively disclosing data needed to replicate their results, hiding from Freedom of Information Act requests, recommending destruction of email trails, and vowing to "redefine" the peer-review process to keep papers they disagreed with out of the publication system.
Andrew Miller MP, Chair of the Committee, said:
"Although it is not the role of peer review to police research integrity and identify fraud or misconduct, it does, on occasion, identify suspicious cases. While there is guidance in place for journal editors when ethical misconduct is suspected, we found the general oversight of research integrity in the UK to be unsatisfactory and complacent."
MPs were evidently unimpressed by the testimony of Philip Campbell, Editor-in-Chief of Nature, who complained about the expense involved in researchers making their data available for replication.
Making data available
To carry on from Arthur C. Clarke's observation that any sufficiently advanced technology is indistinguishable from magic: those so-called scientists who don't make their data available are indistinguishable from witch-doctors or wizards.
Peer review = friend review. Not where I publish...
The general consensus among scientists I mix with is that science is so specialised that the only people who can properly review your work are your direct competitors. That generally means a rough ride in peer review - and in a rapidly moving field like computer science, it sometimes means your ideas appearing somewhere else before you get them published.
Peer review is blind, of course, but since you know who your competitors are, the style of prose gives you pretty strong suspicions about who your reviewers were.
While a personal tragedy for some researchers, none of this reduces the quality of the science itself - you need a clear advance in thinking or a really important result to convince your competitors and avoid being on the receiving end of an 'Oxford Sandwich': non-committal praise, followed by a single incisive but possibly trivial observation calling your whole paper into question, followed by more faint praise and a weak reject.
Another problem is that doing a proper review costs a huge amount of time for which you don't get much credit. When you are up against a deliverable deadline (yes, we have them in academia) then guess what suffers?
Of course, peer review can be gamed for short periods, but the truth outs before too long (see the Sokal affair: http://en.wikipedia.org/wiki/Sokal_affair). The problem with climate science is that the normal, haphazard way science progresses is simply not fit for purpose when you want directly science-driven policy. For all their faults, academic-led clinical trials show how this should work, so it is possible to get it right.
So peer review remains the least-worst alternative for quality control in science.
A problem of scale
Peer review was designed for a time when there were few enough papers published that other researchers could look at a paper and actually reproduce the results for themselves. With so much research and so much data available now, how often does this actually happen and what are the incentives for it to be done? Especially when most research scientists seem to be judged by the papers they have published rather than those they have rigorously reviewed.