BEE-L Archives

Informed Discussion of Beekeeping Issues and Bee Biology

BEE-L@COMMUNITY.LSOFT.COM

Subject:
From: James Fischer <[log in to unmask]>
Reply-To: Informed Discussion of Beekeeping Issues and Bee Biology <[log in to unmask]>
Date: Mon, 12 Apr 2021 10:41:39 -0400
> the p value... was never intended for this purpose

The only "problem" is that the general public and, apparently, many journal
editors, do not quite grasp what "Null hypothesis significance tests" can
and cannot do.

Despite being the predominant inferential method both for "finding
effects" and for declaring that there are none, NHST just doesn't work
like that.  The entire procedure assumes that the null hypothesis is true:
"p" is the probability of observing data at least as extreme as ours under
the ASSUMPTION that there is no effect.  It should be obvious that a
statistical test cannot confirm the very thing it assumes.  Nothing
confirms that which it assumes!
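
A quick illustration of what "assuming the null" means in practice (a
minimal sketch in Python using numpy and scipy; the sample sizes and the
seed are arbitrary): when the null is true by construction, p-values land
uniformly between 0 and 1, so a large p says nothing more than "the data
are consistent with the assumption we started from":

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Two groups drawn from the SAME distribution, so the null
    # hypothesis (no difference in means) is true by construction.
    pvals = []
    for _ in range(10_000):
        a = rng.normal(loc=0.0, scale=1.0, size=30)
        b = rng.normal(loc=0.0, scale=1.0, size=30)
        pvals.append(stats.ttest_ind(a, b).pvalue)

    pvals = np.array(pvals)
    # Under a true null, p is uniform on [0, 1]: about 5% of runs
    # fall below 0.05, and p = 0.9 is no more "confirmatory" of the
    # null than p = 0.1.
    print(f"fraction with p < 0.05: {(pvals < 0.05).mean():.3f}")
    print(f"fraction with p > 0.50: {(pvals > 0.50).mean():.3f}")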

Statisticians have been saying for 80 years that "p" should not be used
this way, and that the term "statistically significant" should be dropped
from our vocabulary.  The latest cry in the wilderness is this:

Wasserstein, R. L., Schirm, A. L., & Lazar, N. A. (2019). "Moving to a
world beyond 'p < 0.05'." The American Statistician, 73(sup1), 1-19.
https://www.pitt.edu/~bertsch/Moving%20to%20a%20World%20Beyond%20p%200%2005.pdf
https://tinyurl.com/34x46kaw

It includes a desperate plea of a kind not usually found in scholarly
publications - "Don't. Don't. Just don't."

What is consistently suggested instead is to report observed effect sizes
and confidence intervals, or, perhaps even better in biological work, to
test equivalence via two one-sided tests (TOST); a sketch of the latter
follows below.
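
A minimal TOST sketch in Python (scipy only; the data and the +/- 1.0
equivalence margin are invented for illustration - a real margin has to
come from domain knowledge, not from the statistics):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    # Hypothetical data: some measured response under two treatments.
    a = rng.normal(loc=10.0, scale=2.0, size=40)
    b = rng.normal(loc=10.1, scale=2.0, size=40)

    # Equivalence margin: the largest true difference we would still
    # call "no meaningful effect".  Illustrative only.
    margin = 1.0

    # Two one-sided tests:
    #   1) reject H0: mean(a) - mean(b) <= -margin
    #   2) reject H0: mean(a) - mean(b) >= +margin
    # Shifting one sample by the margin turns each into an ordinary
    # one-sided two-sample t-test.
    p_lower = stats.ttest_ind(a + margin, b, alternative='greater').pvalue
    p_upper = stats.ttest_ind(a - margin, b, alternative='less').pvalue

    # Equivalence is claimed only if BOTH nulls are rejected, so the
    # overall TOST p-value is the larger of the two.
    print(f"TOST p-value: {max(p_lower, p_upper):.4f}")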

The issue of "uncertainty" is somehow perfectly acceptable in opinion
polling, where a "margin of error" is routinely quoted and well understood
by the public, but this has not carried over into other fields.
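
That polling margin of error is just a confidence interval stated in plain
words; a hypothetical worked example (the poll numbers are invented):

    import math

    # A poll of n = 1000 people finds 52% support.  The familiar
    # "margin of error" is a 95% confidence half-width for that
    # proportion: 1.96 * sqrt(p * (1 - p) / n).
    n, p_hat = 1000, 0.52
    moe = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
    print(f"{p_hat:.0%} +/- {moe:.1%}")   # prints: 52% +/- 3.1%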

I've submitted this kind of observation multiple times before, but the new
moderation team seems to have a firmer grasp of the math, and is thus
willing to permit plain speaking about just how mistaken/wrong/bogus so
much of what passes for "analysis" actually is when viewed with any rigor
at all.

I'm not saying that anyone's results are "invalid"; I'm saying the
opposite - that there is a lot of work out there of great value, with good
precision and very narrow uncertainty, which failed a single
known-inappropriate test and was thereby not considered "worth publishing"
by whatever "powers that be".  That sucks for the careers of lots of
people who honestly want to help us beekeeper types.

This is something that can only be fixed "from the bottom up", as journals
will not change unless they are faced with a consistent change from those
submitting papers.  But those submitting papers don't want to fight the
system, they want to be accepted by the system.  It literally is "the cult
of statistical significance".



