BEE-L Archives

Informed Discussion of Beekeeping Issues and Bee Biology

BEE-L@COMMUNITY.LSOFT.COM

Subject:
From: "Adrian M. Wenner" <[log in to unmask]>
Reply-To: Informed Discussion of Beekeeping Issues and Bee Biology <[log in to unmask]>
Date: Tue, 18 Oct 2005 20:09:50 -0700

Wow, all those comments on amateur experiments have provided a really
interesting set of exchanges!

    Aren't you glad that extracting season is over and that you all have
a better opportunity to sound off?


Mike Rossander's assessment differed from the following two comments
posted by John Edwards (though Mike liked John's other points):


>> Second, somehow, insulate yourself from outside and/or vested
>> interests by walling yourself off (financially) in an independently
>> funded or gov't. organization.
>> Third, divest yourself of any financial links to industry.

   John's comments represent an ideal.  After a couple of episodes of
attempted arm-twisting when consulting for industry, I avoided such
entanglements thereafter and thus agree with John.


Bill Trusdell wrote:

"I think the greatest chance of a flaw is when the researcher is
responsible for creating a computer model and then uses it to find
results. It is nearly impossible not to inject bias into the model to
arrive at the desired result. Since we are flawed, the more we inject
ourselves into the experiment, the greater the potential error."


    Very good!

    Put another way:  perhaps one of the greatest problems with
experiments occurs when researchers try to gather evidence in support
of their belief system and then ignore existing evidence counter to
those beliefs.

    Consider what happens when a friend travels to a gambling casino.
If that friend wins big, you will surely hear about it!   However, if
that friend loses, you will likely hear nothing.  To hear winners talk,
one would think casinos are out there to give away money.

    An amateur researcher may conduct an experiment and gain some
"supportive" results.  You will surely learn of those results.  But
what if the results don't come out so well?  You will likely hear
nothing!

    All of this relates, of course, to exchanges on BEE-L about food
grade mineral oil, small cell size, oxalic and formic acids, etc.

    At the end of August, I was the keynote speaker ("Odor and honey bee
exploitation of food crops") at the Third European Congress on Social
Insects and also gave a talk in a symposium at that Congress ("Can
European honey bees coexist with varroa mites").  In that symposium
talk I provided an overview of the varroa mite problem and the various
attempts to survive that onslaught.  Others gave valuable input,
especially Ingemar Fries of Sweden.  Malcolm Sanford attended that
Congress and has chronicled what transpired there for publication in
upcoming American Bee Journal issues.

    In that varroa symposium, we had quite a spirited exchange about
varroa mite treatments.  For instance, a given experiment may have
yielded some "good" results but many more "bad" results that you will
hear nothing about.  (I can provide very specific examples of this
selection of results by "establishment" scientists, as they attempt to
prove their point.)

    To complement John Edwards' list, may I emphasize the following:

1)  John wrote:  "Any research which starts out to prove a pet
hypothesis has already prostituted itself, and is only anecdotal."

    My thought:  Experiments should be sincere attempts to disprove
prevailing hypotheses.  The use of blind and double-blind experimental
designs helps control for bias, but the use of those designs seems
sadly lacking in bee research experiments.

2)  No one should embrace conclusions until quite disinterested persons
have replicated said experiments, provided all the evidence they
obtained, and arrived at the same results.

    My thought:  We continue to see examples of scientists (maybe
especially scientists) who label experiments as "conclusive" or
"elegant" even before those experiments have been replicated by others.

    However, mere replication is not always sufficient.  If proper
controls have not been incorporated into the original experimental
designs, replication just confuses the issue and compounds error.  As
Carl Lindegren wrote in 1966, "The flaws of a theory never lead to its
rejection. ... Scientists tolerate theories that can easily be
demonstrated to be inadequate."

    A recent and highly publicized experiment dealt with "radar
tracking" of bees.  We can ask:  How many experiments were run compared
to how many results were published?  Did they run the experiment blind
(or double-blind)?  Were they eager for a particular set of results?
The experimenters made questionable assumptions, obtained correlations
with small sample sizes, and concluded that they had direct evidence
for the hypothesis they were trying to prove.  However, they did not
cite papers that contained much evidence counter to their conclusions.

    For a more comprehensive treatment of objections to the radar
tracking experiment, one can access:
www.beesource.com/pov/wenner/radar.htm

                                                                                Adrian

"For what a man more likes to be true, he more readily believes."
Francis Bacon (1561-1626)

-- Visit www.honeybeeworld.com/bee-l for rules, FAQ and other info ---
