Given last week's three papers, all of the authors and journals
involved should pay attention to these guidelines, especially with respect to
overlooking or omitting relevant studies (all three papers) and actually
reading the studies one cites (e.g., Ref 14 in the Henry et al. paper):
From: http://interfaces.journal.informs.org/content/38/2/125.abstract

The Ombudsman: Verification of Citations: Fawlty Towers of Knowledge?

Malcolm Wright (Ehrenberg-Bass Institute, University of South Australia,
Adelaide, South Australia) and J. Scott Armstrong (The Wharton School,
University of Pennsylvania, Philadelphia, Pennsylvania 19104)
Abstract
The prevalence of faulty citations impedes the growth of scientific
knowledge. Faulty citations include omissions of relevant papers, incorrect
references, and quotation errors that misreport findings. We discuss key studies
in these areas. We then examine citations to “Estimating nonresponse bias
in mail surveys,” one of the most frequently cited papers from the Journal
of Marketing Research, to illustrate these issues. This paper is especially
useful in testing for quotation errors because it provides specific
operational recommendations on adjusting for nonresponse bias; therefore, it
allows us to determine whether the citing papers properly used the findings. By
any number of measures, those doing survey research fail to cite this paper
and, presumably, make inadequate adjustments for nonresponse bias.
Furthermore, even when the paper was cited, 49 of the 50 citing studies that
we examined reported its findings improperly. Inappropriate use of
statistical-significance testing led researchers to conclude that nonresponse
bias was not present in 76 percent of the studies in our sample. Only one of
the studies in the sample made any adjustment for it. Judging from the original paper,
we estimate that the study researchers should have predicted nonresponse
bias and adjusted for 148 variables. In this case, the faulty citations seem
to have arisen either because the authors did not read the original paper
or because they did not fully understand its implications. To address the
problem of omissions, we recommend that journals include a section on their
websites to list all relevant papers that have been overlooked and show how
the omitted paper relates to the published paper. In general, authors
should routinely verify the accuracy of their sources by reading the cited
papers. For substantive findings, they should attempt to contact the authors
for confirmation or clarification of the results and methods. This would also
provide them with the opportunity to enquire about other relevant
references. Journal editors should require that authors sign statements that they
have read the cited papers and, when appropriate, have attempted to verify
the citations.
***********************************************
The BEE-L mailing list is powered by L-Soft's renowned
LISTSERV(R) list management software. For more information, go to:
http://www.lsoft.com/LISTSERV-powered.html
Guidelines for posting to BEE-L can be found at:
http://honeybeeworld.com/bee-l/guidelines.htm