BEE-L Archives

Informed Discussion of Beekeeping Issues and Bee Biology

BEE-L@COMMUNITY.LSOFT.COM

Reply To: Informed Discussion of Beekeeping Issues and Bee Biology <[log in to unmask]>
Date: Sun, 9 Jan 2000 23:04:48 +1300
The following editorial in the British Medical Journal might be of
interest to some in light of recent discussion about the objectivity of
science.


BMJ 1997;315:759-760 (27 September)

Editorials

Peer review: reform or revolution?

Time to open up the black box of peer review

As recently as 10 years ago we had almost no evidence on peer review, a
process at the heart of science. Then a small group of editors and
researchers began to urge that peer review could itself be examined
using scientific methods. The result is a rapidly growing body of work,
much of it presented at the third international congress on peer review
held in Prague last week. The central message from the conference was
that there is something rotten in the state of scientific publishing and
that we need radical reform.

The problem with peer review is that we have good evidence on its
deficiencies and poor evidence on its benefits. We know that it is
expensive, slow, prone to bias, open to abuse, possibly anti-innovatory,
and unable to detect fraud. We also know that the published papers that
emerge from the process are often grossly deficient. Research presented
at the conference showed, for instance, that reports of randomised
controlled trials often fail to mention previous trials and do not place
their work in the context of what has gone before; that routine reviews
rarely have adequate methods and are hugely biased by specialty and
geography in the references they quote (p 766); and that systematic
reviews rarely define a primary outcome measure.

Perhaps because scientific publishing without peer review seems
unimaginable, nobody has ever done what might be called a placebo
controlled trial of peer review. It has not been tested against, for
instance, editors publishing what they want with revision, and letting
the correspondence columns sort out the good from the bad and point out
the strengths and weaknesses of studies. Most studies have compared one
method of peer review with another and used the quality of the review as
an outcome measure rather than the quality of the paper. One piece of
evidence we did have from earlier research was that blinding reviewers
to the identity of authors improved the quality of reviews,1 but three
larger studies presented at the congress found that it did not. The new
studies also found that blinding was successful in only about half to
two thirds of cases. One of those studies—by Fiona Godlee from the BMJ
and two colleagues—might also be interpreted as showing that peer review
"does not work." The researchers took a paper about to be published in
the BMJ, inserted eight deliberate errors, and sent the paper to 420
potential reviewers: 221 (53%) responded. The median number of errors
spotted was two, nobody spotted more than five, and 16% did not spot any.

How should editors—and those deciding on grant applications—respond to
the growing body of evidence on peer review and the publishing of
scientific research? The most extreme sometimes argue that peer review,
journals, and their editors should be thrown into the dustbin of history
and authors allowed to communicate directly with readers through the
internet. Readers might use intelligent electronic agents ("knowbots" is
one name) to help them find valid research that meets their needs. This
position is being heard less often, and at the conference Ron LaPorte—an
American professor of epidemiology who has predicted the death of
biomedical journals2 —took a milder position on peer review. He sees a
future for it. Readers seem to fear the firehose of the internet: they
want somebody to select, filter, and purify research material and
present them with a cool glass of clean water.

Peer review is unlikely to be abandoned, but it may well be opened up.
At the moment most scientific journals, including the BMJ, operate a
system whereby reviewers know the name of authors but authors don't know
who has reviewed their paper. Nor do authors know much about what
happens in the "black box" of peer review. They submit a paper, wait,
and then receive a message either rejecting or accepting it: what
happens in the meantime is largely obscure. Drummond Rennie—deputy
editor (West) of JAMA and organiser of the congress—argued that the
future would bring open review, whereby authors know who has reviewed
their paper. Such a proposal was floated several years ago in
Cardiovascular Research, and several of the editors who were asked to
respond (including Dr Rennie; Stephen Lock, my predecessor; and me) said
that open review would have to happen.3 Indeed, several journals already
use it. The argument for open review is
ultimately ethical—for putting authors and reviewers in equal positions
and for increasing accountability.

Electronic publishing can allow peer review to be open not only to
authors but also to readers. Most readers don't care much about peer
review and simply want some assurance that papers published are valid,
but some readers, particularly researchers, will want to follow the
scientific debate that goes on in the peer review process. It could also
have great educational value. With electronic publishing we may put
shorter, crisper versions in the paper edition of the journal and
longer, more scientific versions on our website backed up by a
structured account of the peer review process.

The Medical Journal of Australia and the Cochrane Collaboration have
already made progress with using the internet to open up peer review.
The Australians have been conducting a trial of putting some of their
accepted papers on to their website together with the reviewers'
comments some two months before they appear in print. They invite people
to comment and give authors a chance to revise their paper before final
publication. Contributors, editors, reviewers, and readers have all
appreciated the process, although few changes have been made to papers.
The Medical Journal of Australia now plans to extend its experiment and
begin to use the web for peer review of submitted manuscripts. The
Cochrane Collaboration puts the protocols of systematic reviews on the
web together with software that allows anybody to comment in a
structured way—so long as they give their names. Protocols have been
changed as a result.

The collaboration also invites
structured responses to published reviews. These are particularly
important because those who have contributed reviews are committed to
keeping them up to date in response to important criticisms and new
evidence. Dr Rennie predicted a future in which such a commitment to
the "aftercare" of papers would apply also to those publishing in paper
journals. At the moment papers are frozen at publication, even when
destroyed by criticism in letters columns.

I believe that this conference will prove to have been an important
moment in the history of peer review. The BMJ now intends to begin
opening up peer review to contributors and readers and invite views on
how we should do this. Soon closed peer review will look as
anachronistic as unsigned editorials.

Richard Smith, Editor, BMJ
