BEE-L Archives

Informed Discussion of Beekeeping Issues and Bee Biology

BEE-L@COMMUNITY.LSOFT.COM

Subject:
From:     Jose Villa <[log in to unmask]>
Reply To: Informed Discussion of Beekeeping Issues and Bee Biology <[log in to unmask]>
Date:     Tue, 13 Apr 2021 10:23:31 -0600

The testing of a null hypothesis is central to most statistical tests.  
The "P-value" associated with a test is clearly not well understood by 
many, nor well explained by those who should explain it.  Many think, 
simplistically, that it is the probability that the observed difference 
or effect is an incorrect conclusion.  As explained well in earlier 
posts here, it is the probability of getting a result of that magnitude 
or larger when sampling from a common population in which there are no 
differences.  We may be stuck with this approach because it is common 
to most tests: one calculates a test statistic from the data and "looks 
up" or generates the probability that results as extreme as those found 
would arise if they came from a common, no-effect population.  It works 
for comparing an observed average against some expected value, 
comparing two averages, testing whether the slope of a line is 
different from 0, or assessing the goodness of fit of an observed 
relationship to a theoretical curve.
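
A minimal sketch of that idea, assuming Python with numpy and scipy is 
available (the honey-yield numbers are invented for illustration): it 
compares two groups with a standard two-sample t-test, and then 
approximates the same P-value by brute force, repeatedly reshuffling a 
pooled "no difference" population and counting how often a difference 
at least as extreme as the observed one shows up.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Hypothetical honey yields (lb) for two groups of colonies.
    group_a = np.array([41.0, 47.5, 52.3, 44.8, 49.1, 46.2, 50.7, 43.9])
    group_b = np.array([48.2, 55.1, 51.6, 58.3, 49.9, 53.4, 57.0, 52.8])

    # Classical two-sample t-test: P-value looked up from the t distribution.
    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print(f"t = {t_stat:.2f}, P = {p_value:.4f}")

    # Same question by simulation: pool everything into one "common
    # population with no differences", reshuffle it many times, and count
    # how often a difference in means as extreme as the observed one appears.
    observed = abs(group_a.mean() - group_b.mean())
    pooled = np.concatenate([group_a, group_b])
    n_iter = 20_000
    hits = 0
    for _ in range(n_iter):
        shuffled = rng.permutation(pooled)
        if abs(shuffled[:8].mean() - shuffled[8:].mean()) >= observed:
            hits += 1
    print(f"permutation P ~ {hits / n_iter:.4f}")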

Whatever the test statistic, it is typically derived from formulas 
that have (1) in the numerator an observed average, a difference 
between averages, or the slope of a relationship derived from the 
collected data, which in turn is divided by (2) the variation of the 
collected data points, itself scaled down by (3) the number of samples 
or data points.  The larger the first and third items, the more likely 
a low "P-value" for the test statistic.  The larger the second (the 
variation in the observations), the lower the statistic and the higher 
the "P-value".

In biological research we may also be stuck with the problem of high 
variation: clusters of observations around means or lines are pretty 
spread out.  No matter how well designed, controlled, and executed an 
experiment is, there is inherent variability and randomness in our 
systems.  It is no surprise that "P-values" in biological research are 
much higher than in chemistry, engineering, and physics.  Some of that 
comes from the high variation, some from the difficulty and cost of 
collecting large samples.  By the same token, in biological research 
the differences that we expect to see as not just statistically 
significant but biologically significant are quite large.  A student 
presentation stating that a group of colonies produced 46.20 pounds of 
honey, which was statistically different from another group producing 
46.25 pounds, would get a chuckle from the audience and no awards.  
A comparison of 45 pounds to 55 pounds, on the other hand, might catch 
people's attention.
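
A hedged illustration of that distinction, with invented numbers: given 
an absurdly large number of colonies and unusually low colony-to-colony 
variation, even a 0.05-pound difference in means will come out 
"statistically significant", which says nothing about whether it 
matters biologically.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)

    # Imaginary yields for two groups whose true means differ by only 0.05 lb.
    n, sd = 5000, 0.5
    group_1 = rng.normal(46.20, sd, n)
    group_2 = rng.normal(46.25, sd, n)

    t, p = stats.ttest_ind(group_1, group_2)
    print(f"means: {group_1.mean():.2f} vs {group_2.mean():.2f}")
    print(f"P = {p:.2g}")  # very likely far below 0.05, yet the difference
                           # itself is biologically trivial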

With the ease of producing graphics, it has become more common to 
display the data in figures that show the original observations much 
better: not just the means but the variation around them.  Also, a lot 
of journals now require supplemental information that includes some 
form of the original data.  Calls to "show me your data" appear to be 
less needed.  It is still revealing to see, in figures, the actual data 
behind the significant P-values reported in the analyses described in 
the text.
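
For what it is worth, a minimal sketch of that kind of figure, assuming 
Python with matplotlib (group names and yields are invented): the raw 
yield of every colony is plotted as a point, with the group mean 
overlaid, so the spread is visible alongside the means.

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(3)

    # Invented honey yields (lb) for two groups of 20 colonies each.
    groups = {"Group A": rng.normal(45, 8, 20),
              "Group B": rng.normal(55, 8, 20)}

    fig, ax = plt.subplots()
    for i, yields in enumerate(groups.values()):
        # Jitter the x positions slightly so overlapping points stay visible.
        x = i + rng.uniform(-0.08, 0.08, yields.size)
        ax.scatter(x, yields, alpha=0.6)
        # Overlay the group mean as a short horizontal bar.
        ax.hlines(yields.mean(), i - 0.2, i + 0.2, colors="black")

    ax.set_xticks(range(len(groups)))
    ax.set_xticklabels(groups.keys())
    ax.set_ylabel("Honey yield (lb)")
    plt.show()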
