BEE-L Archives

Informed Discussion of Beekeeping Issues and Bee Biology

BEE-L@COMMUNITY.LSOFT.COM

Subject:
From: Mike Rossander <[log in to unmask]>
Reply-To: Informed Discussion of Beekeeping Issues and Bee Biology <[log in to unmask]>
Date: Wed, 13 Mar 2013 13:40:39 -0700
> I wouldn't play Russian Roulette with a 20 round chamber?  You?

It depends on what's on the other side of the equation.  Statistically, there is a 1 in 38 chance that I will die of an infection picked up in a hospital.  (Source: US National Safety Council, 2009.)  For any two people reading this, there is just over a 1 in 20 chance that at least one of the two will eventually die from visiting a hospital.  Does that mean we should all stop going to the hospital because we're afraid of catching something?
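To check that arithmetic: if each person independently carries the 1-in-38 risk cited above, the chance that at least one of two people is affected is 1 minus the chance that both are spared.  A quick sketch (the 1-in-38 figure is from the post; the independence assumption is mine):

```python
# Probability that at least one of two independent events occurs.
# p = 1/38 is the per-person risk cited in the post; independence is assumed.
p = 1 / 38
p_neither = (1 - p) ** 2          # both people escape the risk
p_at_least_one = 1 - p_neither    # at least one does not

print(round(p_at_least_one, 4))   # ~0.0519, i.e. just over 1 in 20
```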
 
Some other examples of decision-based risks:  A 50% confidence rate means that it's as likely as not that what you think you're seeing is the truth.  You need only one extra percentage point to win a civil lawsuit.  We put people in jail (and even to death) at something like a 90% confidence rate.  (Blackstone's ratio: 'better that 10 guilty men go free than one innocent be falsely imprisoned.')
 
Regardless, that's confidence RATE, not confidence INTERVAL.  Confidence interval starts from the assumption that we can never know the Truth and that all measurements are flawed.  It is a way to express our estimate of how flawed the measurements are.  If I've calculated something with a value of 12, the confidence interval would be to say that I am 95% sure that the unknowable True Value is somewhere between 11 and 13.
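That "value of 12, interval 11 to 13" picture can be made concrete.  A minimal sketch of a 95% confidence interval for a mean, using the normal approximation; the sample data here are invented purely for illustration:

```python
import math

# Hypothetical measurements: the True Value is unknowable,
# these are our flawed attempts to measure it.
samples = [11.8, 12.4, 11.6, 12.2, 12.1, 11.9, 12.3, 11.7]

n = len(samples)
mean = sum(samples) / n
# Sample standard deviation (n - 1 in the denominator).
sd = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
# 95% interval via the normal approximation: mean +/- 1.96 standard errors.
half_width = 1.96 * sd / math.sqrt(n)

# Reads as: "I am 95% sure the True Value is between these bounds."
print(f"{mean - half_width:.2f} to {mean + half_width:.2f}")  # ~11.80 to 12.20
```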
 
The other thing to remember is that science isn't about proving things, it's about disproving things.  That's the whole idea of the null hypothesis.  If measurements are fallible, then no one can ever prove zero-effect.  What we can try to do, however, is disprove that hypothesis by showing some effect.  When the null hypothesis is zero, the confidence interval means "I measured something and my answer is so small that the True Value might still be zero.  I can't (yet) ignore the chance that what I think I'm seeing is really just random error."  That answer, however, is only useful when balancing the costs of a decision.
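That logic can be sketched mechanically: compute the 95% interval around a small measured effect and check whether zero falls inside it.  The effect sizes below are invented; only the mechanics matter:

```python
import math

# Invented measurements of some effect; the question is whether
# the True Value could still be zero.
effects = [0.3, -0.1, 0.2, 0.0, 0.4, -0.2, 0.1, 0.3]

n = len(effects)
mean = sum(effects) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in effects) / (n - 1))
half_width = 1.96 * sd / math.sqrt(n)
low, high = mean - half_width, mean + half_width

if low <= 0.0 <= high:
    print("interval includes zero: can't rule out that it's just random error")
else:
    print("interval excludes zero: the null hypothesis is disproved at this standard")
```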
 
Let's say you use HopGuard to control mites.  (Not picking on them, just the first that came to mind.)  One of your customers finds out and says that it caused his hair to change color.  You dutifully run a study showing that, to the limits of your ability to measure, HopGuard has no effect on hair color.  That is, your study came back unable to prove at a 95% confidence level that HopGuard has a non-zero effect on hair color.  The customer thinks you should lower the standard.  You push back that at the standard he wants, it would be as likely as not that the result is an expression of random error, not of a real effect.  Without evidence that it's dangerous (and so far, the customer doesn't have any), how long are you going to wait, and how much are you going to pay (and keep paying) while waiting for "better data"?  Are you going to revert to the older, far more toxic miticides in the meantime?  They're definitely not safer, but there is no accusation that coumaphos changes hair color!

Okay, the hair color example is a bit silly.  But the same principle applies when balancing the value to growers of Pesticide A against a risk to beekeepers that might or might not have anything to do with A.  If the study was well-conducted and yet still didn't find a provable effect, why should growers give up the potential value?  How long should they bear the burden of proving a negative before it becomes your burden to prove a non-negative?
 
My apologies in advance if I have oversimplified or come across as patronizing in this set of examples.  I am by no means an expert in statistics.  But it bothers me when people get even the most basic concepts wrong and then use those errors to make critical life decisions.
 
Mike Rossander
 
And, yes, we did put a man on the Moon with much less than a 95% confidence rate.  As of the last data I could find, 439 people have been to space.  22 died in their spacecraft and more died in training.  In a 2012 interview, Neil Armstrong admitted that, at the time, he thought the Apollo 11 mission "had a 90% chance of getting back safely to Earth" and "only a 50-50 chance of making a landing".  'Failure is not an option' was an aspiration, not a statistical statement.
