Dear all:
My background is awash in epidemiology: an emphasis in epidemiology for my Master of Health
Science from Johns Hopkins & a minor in epidemiology & program evaluation from Cornell. If I
babble on, it's because this background is kicking in right now. So, I'm sorry - you all may find
this a bit boring, but I can't help myself. Given this background, I find the extrapolation from one
study that shows no effect on NEC to wholesale policy decisions on hospital practices to be
premature, and it is spurring me to pick over everything about preemies and donor milk with a
fine-tooth comb. It will probably take me a while to plow through all this literature, even though it
is totally irrelevant to my private practice, which consists of seeing moms after they come out of
the hospital and finessing my relationships with the baby nurses who probably have more
influence over these mothers - but my curiosity has been aroused.
When there is a lack of an effect, one should look VERY carefully at the data. Thinking about the
criteria in Thomas Hale, I have to say that he does not apply criteria as strict as I probably
would, and relies more on plausibility than on probability. BOTH of these are necessary to
establish the chain of causality, which cannot and should not be established by one study alone.
My example of the one-study phenomenon is one of "positive" results and has a different
outcome, but I do believe it is relevant, because the reason the study wasn't believed was that it
found no effect on morbidity and a huge effect on mortality.
There was once a wonderful study done on vitamin A deficiency that showed a 30% reduction in
mortality. The study was excellently randomized. Sufficient sample size. Carefully done
statistics. People criticized the study for being poorly designed, which really was quite incorrect.
While this study was actually excellently designed, it was insufficient to provide a chain of
causality. Why? Because the plausible steps between the treatment and the reduction in mortality
were not established. The study showed NO EFFECT ON MORBIDITY. They did not pick up any
effect of the treatment on diarrhea and respiratory infections. People simply could not believe it
without the plausible steps in between.
Tens of millions of dollars later, it turns out that this 30% reduction in mortality was entirely right.
The reason "no effect" was shown for morbidity was that the wrong question was being asked
about diarrhea and respiratory infections. When studies looked at the severity and duration of
the diarrhea rather than the incidence alone, they showed a 25% reduction. The reduction in
severe measles was even greater - about 50%. Respiratory infections were a bit tricky, with
some contradictory results, most of which were due to symptoms that may have been part of the
healthy function of the immune system combating the respiratory infection.
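To make the incidence-versus-severity point concrete, here is a minimal simulation sketch (all
numbers invented for illustration, not taken from the vitamin A trials) in which a treatment leaves
incidence untouched but shortens each episode by about 25%. An incidence-only analysis reports
"no effect" while a duration analysis finds the benefit:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000                                  # children per arm (hypothetical)

# Both arms get diarrhea at the same rate: incidence is genuinely unchanged.
incidence = 0.40
sick_control = rng.random(n) < incidence
sick_treated = rng.random(n) < incidence

# But treatment shortens each episode by ~25% (a severity/duration effect).
dur_control = rng.gamma(shape=2.0, scale=3.0, size=sick_control.sum())
dur_treated = rng.gamma(shape=2.0, scale=3.0 * 0.75, size=sick_treated.sum())

print(f"incidence:    control {sick_control.mean():.2f} vs treated {sick_treated.mean():.2f}")
print(f"episode days: control {dur_control.mean():.1f} vs treated {dur_treated.mean():.1f}")
# Asking only "did the child get diarrhea?" shows no effect; asking
# "how long was the episode?" recovers the ~25% reduction.
```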
Furthermore, a meta-analysis of all the mortality studies showed a range of a 24-30% reduction
in childhood mortality when vitamin A was provided to children who were mildly to moderately
deficient. Some of those studies did show no effect, but the overall balance showed the effect,
and when you looked at issues of sample size, effectiveness of the treatment, etc., you could find
the reason why there was no effect. The originator of the initial study was awarded the Lasker
prize in medicine (he was the Dean of the Johns Hopkins School of Hygiene and Public Health).
It could just as easily have turned out the other way. Or, if the initial study had shown no effect
on mortality because of a too-small sample size or a contamination effect, and no one had
followed up with further studies --- we would never have been able to save as many lives in
developing countries.
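For anyone who has not seen one, this is roughly what the pooling step of such a meta-analysis
does - a sketch using the standard fixed-effect inverse-variance method, with five invented
(RR, 95% CI) pairs standing in for real trial results. Note how studies that alone show "no effect"
still contribute to a clear pooled effect:

```python
import math

# (relative risk, lower 95% CI, upper 95% CI) - five invented studies
studies = [
    (0.70, 0.55, 0.89),
    (0.74, 0.60, 0.92),
    (0.95, 0.70, 1.29),   # alone, a "no effect" study: its CI crosses 1.0
    (0.71, 0.52, 0.97),
    (0.81, 0.62, 1.06),   # alone, also inconclusive
]

num = den = 0.0
for rr, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from CI width
    w = 1.0 / se ** 2                                # inverse-variance weight
    num += w * math.log(rr)
    den += w

log_pooled = num / den
se_pooled = math.sqrt(1.0 / den)
print(f"pooled RR = {math.exp(log_pooled):.2f}, "
      f"95% CI {math.exp(log_pooled - 1.96 * se_pooled):.2f}"
      f"-{math.exp(log_pooled + 1.96 * se_pooled):.2f}")
```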
So, moving on:
While intention to treat may be one way to evaluate the "effectiveness" of an intervention, it is
not the gold standard for evaluating the "efficacy" of a treatment. Efficacy of a particular
treatment requires a much higher standard. On this point, my major dissertation advisor would
pound us over the head, and woe to the poor graduate student who forgot this lesson. To really
determine whether a study or (always better) a group of studies establishes causality that a
treatment is NOT efficacious, one of the first questions on the list of most of my former
professors is:
1) Did the treatment actually occur? which has several parts:
a) Did the subjects need the treatment to begin with? (this pertains to nutrition in that many
studies are done on those who are not deficient or may even have a surfeit of the nutrient in
question)
b) Was the dose adequate to cause a response?
c) Did the subjects actually receive the treatment?
d) Did the control subjects receive the treatment through a contamination effect?
A "no" to (a), (b), or (c), or a "yes" to (d), would negate the conclusion that one has clearly
established causality that the treatment was not efficacious (item (d), contamination, is sketched
in the simulation below).
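Here is a minimal sketch (all parameters hypothetical) of how item (d) plays out: a treatment that
genuinely halves risk looks progressively weaker in an intention-to-treat comparison as more of
the control arm receives the treatment anyway:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000                               # subjects per arm
base_risk, treated_risk = 0.10, 0.05    # the treatment truly halves risk

def observed_risk(n, p_actually_treated):
    """Event rate in an arm where some fraction of subjects end up treated."""
    treated = rng.random(n) < p_actually_treated
    risk = np.where(treated, treated_risk, base_risk)
    return (rng.random(n) < risk).mean()

for contamination in (0.0, 0.5, 0.9):
    rx = observed_risk(n, 1.0)              # assigned-to-treatment arm
    ctrl = observed_risk(n, contamination)  # controls partly treated anyway
    print(f"contamination {contamination:.0%}: treatment arm {rx:.3f}, "
          f"control arm {ctrl:.3f}")
# As contamination approaches 100%, the arms converge and the trial reports
# "no effect" for a treatment that works - exactly the iodine scenario below.
```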
For example, there was one weird study in Bolivia where iodine-deficient school children were
treated with iodine. Again, a superbly designed study. Randomized placebo trial. No difference
in performance on cognitive tests after treatment between the groups. The clincher in this case:
all of the children showed increases in their iodine levels, and there was virtually no difference
between the groups. So, both groups essentially had the same "treatment."
A dose-response relationship was shown, however, between urinary iodine levels and
performance on cognitive tests after treatment. This plausibly suggests that there was a response
to the improvement in iodine levels - BUT, because it was an association only, it did not
establish the "probability" that this was the case. More studies were clearly needed to look into
this, because the "probability" trial failed due to contamination and the "plausibility" of the
evidence in favor of the effect was insufficient to establish causality.
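The dose-response check described above amounts to a simple regression, pooling both arms
(since randomization no longer distinguishes them). A sketch with made-up data and an assumed
true slope, purely to show the shape of the analysis:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
iodine_change = rng.gamma(shape=3.0, scale=20.0, size=n)  # ug/L improvement
score = 80 + 0.05 * iodine_change + rng.normal(0, 5, n)   # assumed true slope

slope, intercept = np.polyfit(iodine_change, score, 1)    # ordinary least squares
print(f"score = {intercept:.1f} + {slope:.3f} * (change in urinary iodine)")
# A positive slope is consistent with a benefit ("plausibility"), but as an
# observational association it cannot by itself establish causality.
```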
A second important issue when no effect is found is whether the sample size was large enough to
detect a biologically important difference. The vitamin A studies were enormously expensive
because mortality is a rare event. You need a large sample size to pick up statistically and
biologically relevant differences.
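A back-of-the-envelope calculation with the standard two-proportion sample-size formula
(illustrative numbers only - I am not quoting the actual trial parameters) shows why rare
outcomes like mortality demand enormous trials:

```python
from math import sqrt

def n_per_arm(p1, p2):
    """Standard two-proportion sample-size formula (two-sided alpha 0.05, power 0.80)."""
    z_a, z_b = 1.96, 0.84
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return num / (p1 - p2) ** 2

# Rare outcome: 1% mortality, looking for a 30% reduction.
print(f"mortality (1% -> 0.7%): {n_per_arm(0.01, 0.007):,.0f} per arm")
# Common outcome: 40% diarrhea incidence, same relative reduction.
print(f"diarrhea (40% -> 28%):  {n_per_arm(0.40, 0.28):,.0f} per arm")
```

The rare outcome needs roughly fifty times the sample of the common one for the same relative
effect, which is exactly why the mortality trials cost tens of millions of dollars.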
I could go on and on. The study in Pediatrics shows that the particular "intervention" provided
was not effective. I don't find that it established the full chain of causality to determine that the
"treatment" was not efficacious. That still remains to be established with further studies.
Plus, as in the vitamin A trials, one must look at more than one outcome of a treatment. And that
can come down to the difference between definitions of disease such as "incidence" versus
"severity" versus "prevalence." And as you can see, vitamin A had multiple impacts on many
different types of morbidity, any one of which would have been worth the treatment even had the
mortality effect been negated by further studies.
Remember that NEC is not the only issue when it comes to breast milk. We do know that there
are plenty of risks of formula that have been well-established in an overwhelming body of
research for full-term babies. Much more research needs to be done before we dump human
donor milk as a potentially viable intervention. And certainly, as with any intervention we have to
define the parameters for its effective use. Populations vary. What happens in an environment
where most very low birth weight or very premature infants simply don't survive, but where
kangaroo care works beautifully among the survivors is very different from the environment where
50 year old women are able to have egg donors and IVF and have their triplets or quadruplets
survive even at very low birth weights.
I have Nancy Wight to thank for spurring me on to look with a much more critical eye at the IQ
studies, which may have reflected much more "mother's own milk" than "pooled human donor
milk" in their results. A detail that must be considered.
Best regards,
Susan E. Burger, MHS, PhD, IBCLC