BEE-L Archives

Informed Discussion of Beekeeping Issues and Bee Biology

BEE-L@COMMUNITY.LSOFT.COM

From: Allen Dick <[log in to unmask]>
Date: Wed, 5 Feb 1997 08:06:57 -0700
One of the big criticisms of the Internet and its effect on bee
research and information propagation is the suggestion that the
information disseminated is often less than scientific.
 
This brings up the question of what *is* scientific.  Most of us
have had some exposure to science in high school (and perhaps
university), but for many of us, that was a while back.
 
One of the factors that appears to me to have changed in the last
few decades is an increasing understanding of statistics -- and of
the importance of sample size and internal consistency -- in both
the scientific and lay communities, along with the routine exposure
of experimental results to statistical analysis.  (I must add,
though, that I have seen what appeared to be sophisticated
statistical analysis attached to published research that was
obviously flawed at its base -- GIGO.)
 
The result of applying statistical thinking has been to show that
what at first blush appear to be significant differences found in
(limited) experiments are often only *normal variation* naturally
occurring within a sample.
 
If you have a small enough sample, it seems you can prove anything
you want to, especially if you discard results that don't fit your
hypothesis and uncritically accept those that do.
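
A quick way to see this is to simulate it.  Here is a little sketch
in Python (the population numbers are invented: identical colonies
averaging 110 pounds with an assumed hive-to-hive spread of 30
pounds) that runs twenty 5-vs-5 "experiments" where there is *no
real difference* at all, then reports only the trial that best fits
a pet hypothesis:

    import random, statistics

    random.seed(2)  # fixed seed so the run is repeatable

    def trial(n=5, mean=110.0, sd=30.0):
        """One 5-vs-5 comparison; both groups drawn from the SAME
        population, so any difference is pure chance."""
        a = [random.gauss(mean, sd) for _ in range(n)]
        b = [random.gauss(mean, sd) for _ in range(n)]
        return statistics.mean(b) - statistics.mean(a)

    diffs = [trial() for _ in range(20)]
    print(f"honest average of all twenty trials: {statistics.mean(diffs):+6.1f} lb")
    print(f"the one trial I choose to report:    {max(diffs):+6.1f} lb")

The honest average hovers near zero, while the cherry-picked "best"
trial routinely shows a 25- to 40-pound advantage that is nothing
but noise.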
 
A typical illustration of the problem: a difference of 25 or 30
pounds in average yield between two groups of five hives --
averaging 100 and 125 pounds respectively -- appears intuitively
important, but on examination it often turns out to be meaningless.
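
To put a number on that, here is a rough sketch (Python again, with
invented yield figures chosen to average exactly 100 and 125 pounds
and a plausible hive-to-hive spread; it assumes the SciPy library
is available) running a standard two-sample t-test on two such
groups:

    from scipy import stats

    # Hypothetical yields (lb) for two groups of five hives:
    # group A averages 100, group B averages 125 -- a 25 lb gap.
    group_a = [68, 95, 102, 110, 125]
    group_b = [90, 105, 122, 140, 168]

    t, p = stats.ttest_ind(group_a, group_b)
    print(f"t = {t:.2f}, p = {p:.2f}")

The p-value comes out near 0.2 -- far short of the usual 0.05
threshold -- so a 25-pound gap between two groups of five hives
tells us essentially nothing by itself.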
 
This is puzzling to those of us who think such a difference (25
pounds, or 25%) should mean a lot, since it is intuitively a big
difference.  I think this is at the root of a lot of the bogus
information being circulated -- and not just on the Internet --
since small samples can give serious errors, and most of us can
only afford small samples.
 
Doubling the sample helps, and certainty grows rapidly with
increasing numbers; however, at least one *independent* replication
of the experiment is necessary before it proves *anything* -- IMO,
anyhow, since outside factors may have been responsible for the
results.  It *is* possible to flip 'heads' twenty times in a row --
once, anyhow.
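
For the record, the arithmetic on that coin: twenty straight heads
from a fair coin has probability (1/2)^20, about one chance in a
million -- rare, but it happens, which is exactly why a single
unreplicated result proves nothing.  And the "certainty grows with
numbers" part is just the standard error of an average shrinking
with the square root of the sample size, as this sketch shows
(again assuming a 30-pound hive-to-hive spread):

    import math

    print(f"P(20 heads in a row) = {0.5 ** 20:.7f}")  # ~1 in 1,048,576

    # Uncertainty (standard error) of a group's average yield,
    # assuming a hive-to-hive standard deviation of 30 lb:
    for n in (5, 10, 20, 40):
        print(f"n = {n:2d} hives: average good to +/- {30 / math.sqrt(n):.1f} lb")

Going from 5 hives to 20 cuts the uncertainty in half; quadrupling
the sample doubles the precision.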
 
Another problem with very small samples is that it is hard, if not
impossible, to examine groups within them for internal variation
and filter out those effects.
 
Added to this is the fact that <asbestos underwear on> I have seen
some well-known and very well-respected peer-reviewed research
lately offering only one year's results, over limited numbers of
hives, and in only one region.  I have seen no indication that
anyone has sought to duplicate the results, and yet everyone, high
and low, seems to believe the published findings rather than see
them as a challenge to look further.
 
I am particularly concerned where, in a small sample, some hives
are removed from an experiment due to what the experimenter judges
to be extraneous causes of death, etc.  This sometimes shrinks the
sample to a size too small for significant results, or introduces
obvious imponderables into the mix, or skews one group and not the
other -- yet the results seem to get published anyhow.
 
I realise that money for bee research is scarce, and that it is
expensive to run an experiment large enough to be sure it will not
be readily spoiled.  And it is hard to give up on a ruined
experiment.  But I am concerned about the quality of some (much) of
the research that is published.
 
I am also concerned that, when researchers do not set a good
example, it is understandable that the public falls for bad science
and anecdotal cures.  It also reduces the public's confidence in
research (we're not stupid) and makes funds harder to get (maybe a
good thing).
 
Nonetheless, it means that ***we do have to ensure that the
researchers who do good studies get adequate resources to do more
than token work.***
 
As you can probably see from the above, I don't really know what I
am talking about -- not for sure, anyhow -- but I think I have a
point or two.  I'm hoping some who *do* know will take this up and
point out where I am wrong, and where I am right (I hope I'm right
about something).
 
... And, in the interest of improving our understanding of how to
design meaningful studies and evaluate those we read, I wonder how
the layman can best get a grasp of statistics, short of taking a
university course?  (Not a bad idea, actually.)
 
Is there any easy-to-use statistical software around on the net
these days, and/or good sites that explain these concepts in simple
terms?
 
Regards
 
Allen
 
W. Allen Dick, Beekeeper                                         VE6CFK
RR#1, Swalwell, Alberta  Canada T0M 1Y0
Internet:[log in to unmask] & [log in to unmask]
Honey, Bees, & Art <http://www.internode.net/~allend/>
