Date: Sat, 30 Oct 2021 17:07:02 -0400
Content-Type: text/plain
In the "dissection" of papers here, I often find myself defending the
papers, striving to keep the discussion informed.
The argument presented in the cited paper "Why Most Published Research
Findings Are False" https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1182327/
is itself provably false.
See https://doi.org/10.1371/journal.pmed.0040168 for the refutation.
To summarize the refutation:
(a) Using p = 0.05 for all "statistically significant" results, rather
than the actual p-value found (e.g., p = 0.001), lowers the overall
average credibility, of course.
(b) Introducing a "bias" term into a Bayesian model, even at a level
described as "minimal" (10%), dramatically diminishes the evidential
impact of a finding. This is, in essence, assuming bias to be ubiquitous
and significant when in most methodologies it is not. For example,
neutrino detectors have no room for "bias", the weight of a bee colony
has no inherent bias, and so on.
(c) The mathematical proof offered in support of the claim that a "hotter
area" of research produces more "false findings" was profoundly flawed.
What the proof actually showed was only that, with more studies published
on a subject, the absolute number of false-positive (and false-negative)
studies increases. It did not show any increase in the *percentage* of
false positives or negatives, so quality does not go down with more R&D
focus on an area.
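Points (a) and (b) can be made concrete with a short positive-predictive-value
calculation, derived from Bayes' rule in the spirit of the model under
discussion. The function name, parameter values, and prior odds below are my
own illustrative assumptions, not numbers taken from either paper:

```python
def ppv(prior_odds, alpha=0.05, beta=0.2, bias=0.0):
    """P(relationship is true | it is reported as significant).

    prior_odds -- odds that a tested relationship is real (assumed here)
    alpha      -- type I error rate (the p-value threshold credited)
    beta       -- type II error rate, i.e. power = 1 - beta
    bias       -- fraction of otherwise-non-significant results
                  reported as significant anyway
    """
    # Rate at which true relationships get reported significant:
    true_hit = (1 - beta) + bias * beta
    # Rate at which null relationships get reported significant:
    false_hit = alpha + bias * (1 - alpha)
    return prior_odds * true_hit / (prior_odds * true_hit + false_hit)

# Point (a): crediting every result with only alpha = 0.05 understates
# the evidence relative to an actually observed p = 0.001.
print(ppv(0.5, alpha=0.05))    # ~0.889
print(ppv(0.5, alpha=0.001))   # ~0.998

# Point (b): adding the "minimal" 10% bias term erodes the same evidence.
print(ppv(0.5, alpha=0.05, bias=0.10))  # ~0.739

# Point (c): n independent studies multiply the absolute count of expected
# false positives (n * alpha per null hypothesis tested), but the
# per-study rate alpha -- and hence the proportion -- is unchanged.
```

So the pessimistic headline conclusion is driven by the assumed bias term and
the blanket alpha, not by the arithmetic itself.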
So, Most Published Research Findings are NOT False. Some are, but the
process is still worthy of trust.
***********************************************
The BEE-L mailing list is powered by L-Soft's renowned
LISTSERV(R) list management software. For more information, go to:
http://www.lsoft.com/LISTSERV-powered.html