LACTNET Archives

Lactation Information and Discussion

LACTNET@COMMUNITY.LSOFT.COM

Subject:
From: Rachel Myr <[log in to unmask]>
Reply To: Lactation Information and Discussion <[log in to unmask]>
Date: Tue, 23 Aug 2005 10:15:25 +0200
Content-Type: text/plain
Parts/Attachments: text/plain (77 lines)

Jennifer Stevens asks why the babies who didn't receive *only* the assigned
treatment should not simply be excluded from the analysis entirely.

When the original analysis by intent to treat has been carried out, you can
later go through and pick out only the babies who got the assigned treatment
and see how things stack up then.  But if you substituted that
kind of analysis for the one that includes all the babies enrolled and
randomized to each group in an article reporting on an RCT, you would be
guilty of something at least bordering on scientific fraud.  What happens
after randomization is what you are out to report, not just what happens to
a small subset of the babies after randomization.  

To illustrate this, what if only 2 of 100 babies in each group actually got
the assigned treatment as it was prescribed?  Far-fetched, I know, but if
all the other babies were excluded, you would have no basis on which to do
statistical analysis.  You'd also have lots of reasons to go back to the
drawing board when designing your next study, but that's another issue.
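
For anyone who likes to see the arithmetic, here is a minimal sketch in
Python of the difference between an intent-to-treat tally and one restricted
to the babies who actually got the assigned feeding.  The counts are invented
for illustration; they are not the study's data.

    # Hypothetical records: the group each baby was randomized to, whether the
    # assigned feeding was actually received, and whether NEC occurred.
    babies = (
        [{"assigned": "donor milk", "received": True,  "nec": False}] * 80 +
        [{"assigned": "donor milk", "received": False, "nec": True}]  * 5 +
        [{"assigned": "donor milk", "received": False, "nec": False}] * 15 +
        [{"assigned": "formula",    "received": True,  "nec": True}]  * 10 +
        [{"assigned": "formula",    "received": True,  "nec": False}] * 85 +
        [{"assigned": "formula",    "received": False, "nec": False}] * 5
    )

    def nec_rate(records):
        return sum(b["nec"] for b in records) / len(records)

    for group in ("donor milk", "formula"):
        assigned = [b for b in babies if b["assigned"] == group]   # intent to treat
        as_treated = [b for b in assigned if b["received"]]        # compliers only
        print(group,
              "intent to treat:", round(nec_rate(assigned), 3),
              "compliers only:", round(nec_rate(as_treated), 3))

The intent-to-treat rates keep every randomized baby in the arm it was
assigned to; the 'compliers only' rates quietly drop the others, which is
exactly the substitution I am warning against.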

A good thing to look for as a marker of a good epidemiologic research
article is a flow chart showing how many people were in the pool of possible
participants, and then following the entire process so that it is clear how
many people disappeared at each step: those who were never asked to
participate, those who were excluded for specified reasons, those who
declined to participate, how many were randomized to each group, how many in
each group dropped out along the way so that data on their outcomes were
simply not available, and so on, until you reach the final number, which
should be exactly the number of participants included in the statistical
analysis of the results.
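
If it helps to see that bookkeeping spelled out, here is a toy version of
such a flow in Python.  Every count is invented; the only point is that the
numbers should add up, step by step, to the N that appears in the analysis.

    # Invented counts for a CONSORT-style participant flow.
    assessed       = 500
    not_approached = 60
    excluded       = 120
    declined       = 70
    randomized     = assessed - not_approached - excluded - declined   # 250

    per_arm          = {"donor milk": 125, "formula": 125}
    lost_to_followup = {"donor milk": 8, "formula": 11}

    assert sum(per_arm.values()) == randomized
    for arm, n in per_arm.items():
        analysed = n - lost_to_followup[arm]
        print(arm, "- randomized:", n, "analysed:", analysed)
    # The 'analysed' numbers are the ones that should match the denominators
    # in the article's statistical tables; if they don't, start asking why.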

Also, as Susan Burger so aptly points out, there is a huge difference
between being statistically significant and being clinically significant as
well.  It seems this study was designed to have sufficient statistical power
to detect differences between groups if the incidence of the outcomes they
were examining had been higher.  The rarer the outcome, the more patients
you will need to enroll in each group in order to detect a STATISTICALLY
significant difference between them.  NEC (necrotizing enterocolitis) was
less prevalent than expected in all the groups, so even though the numerical
differences between groups were striking, they did not reach statistical
significance.  In plainer terms, the researchers could not say for sure that
the differences in NEC were due to the different treatments; they could have
been due to chance, by the accepted statistical criterion for significance.
As a clinician treating individual babies, it behooves one to take a look at
the actual numbers, and to reflect on the difference between clinical and
statistical significance.  Statistical significance can be entirely
uninteresting clinically too.  Research doesn't do the job of reflecting and
reasoning for us - it gives us information that may enable us to make
better clinical decisions when we reason and reflect on what to do with the
baby and mother in front of us.
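
To put rough numbers on 'the rarer the outcome, the more babies you need',
here is a sketch of a standard sample-size calculation for comparing two
proportions, done in Python with the statsmodels package.  The incidence
figures are made up for illustration and are not taken from the study.

    # Babies needed per group to detect a halving of the outcome rate with
    # 80% power at the usual two-sided alpha of 0.05, for several baseline rates.
    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    power_calc = NormalIndPower()
    for control_rate in (0.20, 0.10, 0.05, 0.02):    # hypothetical incidence in controls
        treated_rate = control_rate / 2               # assume treatment halves the risk
        effect = proportion_effectsize(control_rate, treated_rate)   # Cohen's h
        n_per_group = power_calc.solve_power(effect_size=effect, alpha=0.05,
                                             power=0.80, alternative="two-sided")
        print(f"baseline {control_rate:.0%}: about {n_per_group:.0f} babies per group")
    # The same relative benefit needs many more babies per group as the outcome
    # gets rarer - which is why a lower-than-expected NEC rate can leave a
    # striking difference short of statistical significance.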

Another way the press could have reported on this study was 'results
suggested a dose-dependent, beneficial effect of mother's milk and of donor
milk, but the number of babies enrolled was too small for this to reach
statistical significance'.  In this world of lamentably dumbed-down news
reports, that is probably too complex a message.  But if, as Marsha Walker
posted, there are already hospitals where clinicians are prepared to change
practice based on the mass media version of a single study, the problem
isn't limited to the media.  

For the record, it is highly unlikely that Schanler or any of the other
authors had any influence on the media spin on the story.

Rachel Myr
Kristiansand, Norway
And don't even ASK me about p-values!
