The Nutrition Detective Part 2

Part II: Statistics

“There are three kinds of lies: lies, damned lies, and statistics.” – Benjamin Disraeli

Unfortunately, even when statements are backed up with “facts and figures”, you must still be skeptical. Remember, you’re a detective looking for the truth, trying to solve the “crime” of illness. And although it’s very much like a game, winning (enjoying health) is the ultimate and most valuable reward.

Do not be overly impressed by statistics. They must be evaluated just like any statement, because they are, after all, simply numerical claims, subject to misinterpretation and manipulation. Again, it’s not difficult once you know the rules of the game.

1. First, of course, there is the issue of unreliable references. Where did the numbers come from? Take the ever-popular “national taste test” showing that more people choose cola A than cola B. You know the survey was conducted by the manufacturer of cola A, and that is the most unreliable reference of all. In many cases, these so-called surveys are repeated over and over until the desired result is obtained, and only that trial is made public. This is called manipulation, not science. Besides, “more” could mean (and quite likely does) that out of 100 people, 35 were undecided, 33 chose cola A and 32 chose cola B. Is that meaningful?
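If you want to see just how little that 33-to-32 edge means, here is a quick sketch (in Python, using the hypothetical cola numbers above, so purely illustrative): if the 65 tasters who expressed a preference actually had none and chose at random, cola A would still come out “ahead” about half the time.

```python
# Illustrative simulation with the made-up cola numbers above:
# 65 tasters with no real preference each pick a cola at random.
# How often does cola A still end up with 33 votes or more?
import random

random.seed(0)
trials = 100_000
cola_a_ahead = 0
for _ in range(trials):
    votes_for_a = sum(random.random() < 0.5 for _ in range(65))
    if votes_for_a >= 33:
        cola_a_ahead += 1

print(f"Cola A 'wins' in {cola_a_ahead / trials:.0%} of trials")  # roughly 50%
```

In other words, a one-vote edge among 65 decided tasters is exactly what a coin flip would produce about half the time.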

Other times, no reference is given at all, such as when you read, “research has shown that…” Unless the subject is one on which there is universal agreement, you have a right to demand: what research, when, and where?

2. Beware of the word “average”. Remember the non-swimmer who drowned trying to wade across a stream with an average depth of two feet. Average can be a very misleading concept, and is used by scientists only when it can be carefully defined.

FUN QUIZ: You read about an experiment with two groups of athletes. Both groups were timed in a 10-kilometer run. Then group A was given Megamulti every day, while group B received a look-alike capsule filled with cornstarch (a placebo). After two months, each group was timed again. The study reports that the average improvement in group A was 6 minutes, compared to an average improvement of only 3 minutes in group B.

Would you run out and buy Megamulti? Unfortunately, many people would. It seems impressive, after all, that the group that received Megamulti improved twice as much as the placebo group. But did it? Let’s do some detective work. What if the study was conducted with only 10 athletes? The results might have looked like this:

Group A (Megamulti)                      Group B (placebo)

athlete    change in 10k time            athlete    change in 10k time
1          -26 minutes                   1          -4 minutes
2          -4 minutes                    2          -3 minutes
3          +1 minute                     3          -4 minutes
4          +2 minutes                    4          -3 minutes
5          -3 minutes                    5          -1 minute

Average improvement: 6.0 minutes         Average improvement: 3.0 minutes

Now are you impressed? As it turns out, athlete #1 in group A was sick on the first time trial, so he showed a tremendous improvement the second time. That one result distorts the average, making it appear as if the whole group improved dramatically. As you can see, group A’s athletes #3 and #4 actually did worse on the second time trial. In fact, looking at the data, there is no scientific evidence at all that Megamulti had a positive effect on the athletes.
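You can verify the arithmetic yourself. Here is a minimal sketch (in Python) using the hypothetical numbers from the table above:

```python
# Recomputing the averages from the hypothetical table above.
group_a = [-26, -4, +1, +2, -3]   # Megamulti: change in 10k time (minutes)
group_b = [-4, -3, -4, -3, -1]    # placebo

mean_a = sum(group_a) / len(group_a)   # -6.0 -> "6 minutes faster on average"
mean_b = sum(group_b) / len(group_b)   # -3.0 -> "3 minutes faster on average"
print(mean_a, mean_b)

# Drop athlete #1 (the runner who was sick on the first trial) and
# group A's apparent advantage all but disappears.
mean_a_without_outlier = sum(group_a[1:]) / len(group_a[1:])   # -1.0
print(mean_a_without_outlier)
```

Without the sick runner, group A averaged only a 1-minute improvement, which is worse than the placebo group’s 3 minutes.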

Is there anything that would make such a study valid?

3. Well, a larger study group would be a good start. “Average” becomes more meaningful as the data pool increases in size. If, for example, group A contained 100 athletes, #1’s unusual improvement would not weigh so heavily in the results. Among 1,000 athletes, it would be even less of a factor.
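To see how much one outlier can sway an average as the group grows, here is a small illustration (in Python; the assumption that every other athlete improves by exactly 1 minute is mine, purely for the sake of the arithmetic):

```python
# How much does one 26-minute outlier shift a group average?
# Assume (hypothetically) every other athlete improves by exactly 1 minute.
for n in (10, 100, 1000):
    ordinary = [-1.0] * (n - 1)      # n - 1 typical athletes
    group = ordinary + [-26.0]       # plus the single outlier
    with_outlier = sum(group) / n
    without_outlier = sum(ordinary) / (n - 1)
    print(f"n={n:4d}: average {with_outlier:+.3f} min "
          f"(vs {without_outlier:+.3f} without the outlier)")
```

With 10 athletes the outlier shifts the average by 2.5 minutes; with 1,000 it shifts it by a barely noticeable 0.025 minutes.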

Another approach often used by careful researchers is to remove the highest and lowest scores from each group in order to eliminate these aberrations from the data. Others keep all the values but report how widely they are scattered, using what is called the standard deviation. When the standard deviation (roughly the average variation within a group) approaches or exceeds the difference between the groups (as in our hypothetical example), the results are said to have no statistical significance.
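Applying that rule of thumb to our hypothetical data makes the point clear. This is only a rough sketch (in Python, using the sample standard deviation), not a substitute for a proper significance test:

```python
# Comparing the spread within each group to the difference between them,
# using the hypothetical athlete data from the table above.
import statistics

group_a = [-26, -4, +1, +2, -3]   # Megamulti
group_b = [-4, -3, -4, -3, -1]    # placebo

difference = abs(statistics.mean(group_a) - statistics.mean(group_b))  # 3.0 min
sd_a = statistics.stdev(group_a)  # about 11.5 min
sd_b = statistics.stdev(group_b)  # about 1.2 min

print(f"difference between group averages: {difference:.1f} minutes")
print(f"standard deviation within group A: {sd_a:.1f} minutes")
print(f"standard deviation within group B: {sd_b:.1f} minutes")
```

Group A’s spread (about 11.5 minutes) dwarfs the 3-minute difference between the groups, which is exactly the situation described above: no statistical significance.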

Summary

How can you tell if statistics are reflecting the truth?

1. Check sample size. Large numbers of subjects mean more reliable data.

2. Watch out for “average” results unless the standard deviation is also given.

3. Check scientists. Was the research conducted by competent individuals?

4. Check references. Research from the Tibetan Center for Intergalactic Studies (or other unknown sources) should be seriously questioned.

5. Are the reported results supported by previously published literature?
