
Simple Guide to Nutrition Research

Nutrition research could, for the most part, be described as an exercise in creative writing from which we gain little useful insight.


I’ve had several people ask me questions recently about nutrition research that appears to contradict positions that I hold about what to eat for optimum health.

Almost anyone who lives a lifestyle that contradicts mainstream government advice about what to eat for optimum health will probably have conversations in which they’re suddenly faced with “there is a study that showed…X, Y or Z”.

I’m on record, regularly, as saying that nutritional epidemiology is nothing more than an exercise in creative writing of which my favourite English teacher, Mr Stapleton, would have been proud. If I was able to write fiction as convincingly, I’d probably be a bestselling author.

When you take the time to dig into this stuff, you quickly come to realise that all of the following have NO basis in reliable evidence…

  • High LDL cholesterol causes heart disease
  • Saturated fat causes heart disease
  • Red meat causes cancer
  • Diabetics need to base their diets on carbohydrates
  • Excess protein causes kidney disease
  • Fibre is necessary for proper bowel function
  • I could go on…

The reality is that we really don’t know very much about human nutrition at all.

It's worth stressing that I am not anti-science. I am obsessively anti-junk-science. Sadly, there is a lot of junk science in the nutrition space and it can be hard to separate the wheat from the chaff.

We also need to be aware that there never have been, and never will be, any studies that prove the safety or efficacy of one way of eating over another with any degree of certainty. Why? If you read the "Randomised Controlled Trials" section below, hopefully it will be pretty clear.



The standard argument against, for example, low carb diets is that there are no studies and everybody who claims a result is “only an anecdote”.

We’re regularly told that the plural of anecdote is not data. Critics, who like to believe they’re scientists, will tell you this every time you relate your experience.

Actually, this is a misquote. The original Ray Wolfinger quote was the exact opposite: “The plural of anecdote is data.”

The reason I’m so scathing of such people is that they are not following the scientific method. If they were, they’d see a pattern in all these anecdotes and set about investigating what’s going on.

Instead, they simply appeal to the authority which they believe the education certificate on their wall gives them and dismiss the collective experience of a multitude of people as invalid. They’re not scientists in my view no matter what their qualifications say; they are that witless attendee at your dinner party who is the only person whose opinion matters.

Anecdote provides a great place to start when it comes to nutrition research, or any research for that matter. There is clearly something going on; why not try to find out what that something is?

Hierarchy of Evidence

Many people like to invoke the so-called hierarchy of evidence, so I’ll present a quick overview of one hierarchy here.

The truth is that there is even a debate about what falls where on this hierarchy or whether there really is such a hierarchy at all. Some people simply state that evidence is evidence.

However, as you’re likely to be faced with “the hierarchy of evidence” at some point, it’s worth knowing what’s being referred to.

Starting from what would be considered the lowest level of evidence and working up to the highest, here it is.


Anecdote

An anecdote is one person’s experience, often referred to nowadays as an n=1 experiment.

Essentially, an anecdote should be a starting point for a researcher; something interesting is going on here, what is it and what can we learn?

I also tend to argue that, on a practical individual level, anecdote is the highest form of evidence because if something is working for me, it’s making a positive difference to the quality of my life and that’s what ultimately matters.

If I share my anecdote with you, you try what I did and it works for you, even better.

Consensus Statements and Expert Opinion

In any discussion about veganism, you’ll almost certainly hear that “studies show that veganism is safe and effective for all stages of life” or words to that effect.

There are no such studies. What is being quoted to you is a consensus statement issued by the American Dietetic Association and the British Dietetic Association, both of which have a bias towards plant-based diets.

(The actual statement was that vegetarianism is suitable for all ages and stages of life, but let’s not let the truth get in the way of a good vegan quote.)

The problem with consensus statements, especially the way they are used in this context is that they stifle scientific debate because they rely on an appeal to authority and how dare you defy authority!

Expert opinion is the other part of this. The weakness of expert opinion is that it’s simply the opinion of one person or a few people. We all have opinions and, no matter how sincerely held, those opinions could be wrong.

Consensus and expert opinion can be useful to us in our search for knowledge, provided we don’t treat them as the gospel truth.

They weren’t nutrition experts, but even Newton and Einstein acknowledged they were wrong in some aspects of their work.


Nutritional Epidemiology

Nutritional epidemiology, which covers longitudinal studies, case-control studies, cross-sectional studies and cohort studies, makes up most of what gets used in nutrition research, and there are both good and bad reasons why this is the case.

Unfortunately, much of what emerges from nutritional epidemiology is almost worthless because there are far too many confounding factors when it comes to studying free-living humans.

I discuss the shortcomings of epidemiology later in this article.

Randomised Controlled Trials

Well-designed RCTs could be considered the highest level of evidence because they control for everything other than the intervention itself. If an effect shows up in a well-designed RCT, you know that your intervention caused the effect, not some other variable.

Done well, the trial will have these features among others…

  • The study has enough participants that no dodgy statistics need to be done to see an effect. Statistics have a place but if you can avoid having to manipulate data, all the better.
  • Everyone is treated exactly the same, including being given what might or might not be the intervention
  • Control group - these folks don’t receive the intervention
  • Intervention group - these folks do get the intervention
  • Double blinded - neither the researchers nor participants will know who is receiving the intervention
  • Crossover - after enough time for the intervention to work, the groups are swapped over so that every participant will have had the intervention by the end of the study
  • Washout period - before the crossover, there is a period in which no intervention is given, so that any positive or negative effects of the intervention have time to disperse.
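As a hypothetical sketch (the participant names and group sizes are invented, not from any real trial), the randomisation and crossover features above might look like this:

```python
import random

random.seed(0)  # fixed seed so the assignment is reproducible

# Hypothetical participants; a real trial would enrol far more people.
participants = [f"P{i:03d}" for i in range(1, 21)]

# Randomise: shuffle the list, then split it into two equal arms.
random.shuffle(participants)
half = len(participants) // 2
phase1 = {"intervention": participants[:half], "control": participants[half:]}

# Crossover: after the washout period the arms swap, so every
# participant receives the intervention by the end of the study.
phase2 = {"intervention": phase1["control"], "control": phase1["intervention"]}

# Every participant ends up in the intervention arm exactly once.
assert set(phase1["intervention"]) | set(phase2["intervention"]) == set(participants)
```

Blinding sits outside the allocation itself: in practice the assignment list is held by a third party so that neither researchers nor participants know who is in which arm.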

I insist on talking about well-designed RCTs because if you design your trial badly, the data is worthless. It becomes a case of blaming the burger patty for what the bun and fries did.

There is a problem with RCTs for nutrition research: they’re impossible to do, for two reasons.

  1. You cannot lock a large group of humans up for a period of two years and control everything they do and everything they eat over that time. That would be considered cruel and unusual punishment.
  2. Even if you could get an ethics committee to approve the study and find enough people willing to be imprisoned and tortured, it would be prohibitively expensive.

Meta Analyses of Randomised Controlled Trials

A meta analysis of RCTs would be the pinnacle of evidence for nutrition research if it were possible to do RCTs in the first place.

In the absence of RCTs, the only meta analyses that can be done are of epidemiological data, and because that data is so poor, a meta analysis of it provides evidence that is only a little stronger than the epidemiology itself.

People often jump up and down shouting about how their meta analysis of epidemiology is the highest level of evidence. They’re right, but only because higher levels of evidence (RCTs and meta analyses of RCTs) are impossible to obtain.

Conflicts of Interest

Before getting to the weaknesses of epidemiology for nutrition research, I feel I need to address conflicts of interest. These are often cited as a reason why a study is invalid. Both sides of any debate use conflicts of interest to discredit studies and I think we need to be careful when doing so.

First, we all have some sort of bias. It’s a normal human trait. Good researchers work hard to make sure their personal bias doesn’t get in the way of truthful reporting of their findings. Sadly, despite their protestations to the contrary, good researchers seem a little thin on the ground.

Second, every piece of research must be paid for somehow. While there is no question that some funders have a very clear agenda and will insist that published results reflect positively on their products or opinions, this is not always the case. Claiming conflict of interest only because of the funding might be a little short-sighted.

To avoid the conflict of interest trap, I think this way…

  • If the funders have a political agenda, I assume bias (e.g. PETA)
  • If the researchers are known to hold a strong opinion in either direction, I’m sceptical.
  • Consensus statements and expert opinion are very open to being affected by conflicts of interest.
  • Any research for which the raw data isn’t available makes me nervous, especially if the funders or researchers are known to have an agenda.

Why Nutritional Epidemiology Won’t Cut It as Good Evidence

Epidemiology started with John Snow investigating a London cholera outbreak in the mid-1800s and is an extremely effective tool for the study of infectious disease.

Its adoption for nutrition research has proved far less effective, although those who work in the nutrition research field will almost certainly deny that it’s ineffective to their dying day.

What follows are some of the reasons that nutritional epidemiology is little better than anecdote.

Free living humans

In order to be able to draw any conclusions from nutrition research, the researcher has to have a fair amount of control over what the people in their study do.

Consider how much control anybody else has over your life on a day to day basis.

Even if you were to provide an intervention to one group within your study, how do you KNOW they stuck to it? They might simply hate the taste of margarine (who doesn’t) and go back to real butter.

Food Frequency Questionnaires

Food frequency questionnaires are the tool used in nutrition research to gather data about what free-living people eat. From those questionnaires, researchers compile the data which they analyse and from which they draw their conclusions.

You’d expect that those FFQs would be filled in on a very regular basis throughout a study in order to get accurate results.

In fact, most studies only have FFQs completed a few times, often just at the beginning and end of the study. For epidemiological studies to have validity, they need to be conducted over periods of years, and I can’t remember what I ate for dinner last Thursday, let alone a year ago.

FFQs also have to use measures that researchers can then convert to useful data for the study. How many people know how many cups of broccoli they ate last year? How many grams of meat did they eat last month?

The whole FFQ approach is ludicrously impractical and yet we have healthy eating recommendations based on data gathered using them.

Confounding factors

Human health is a complex system, each part of which interacts with every other part.

How would any researcher know that the effect they saw wasn’t the result of an increase or decrease in smoking, a sudden uptake of exercise because the Olympics were on the TV or the effects of a bad flu season?

Specific groups

Much nutrition research is conducted in fairly homogeneous groups (gender, age, race, etc.). What this means is that the researchers cannot reasonably make recommendations to people outside of the groups they studied.

A glaring example is the recommendation that women lower their cholesterol in order to prevent heart disease mortality because research suggested (wrongly as it turns out) that men should do so. The studies had all been conducted on men. A subsequent study showed no heart health benefit for women who reduced their cholesterol but the advice still stands.

What’s more, there have been no studies on children and yet the same cholesterol-lowering, low fat advice is given.

Hard Endpoints vs Surrogate Markers

The diet-heart and lipid hypotheses are great examples of researchers valuing surrogate markers over hard endpoints. The reason is simple: surrogate markers are easier to measure, even if you don’t actually know what they mean.

Hard endpoint examples: heart attack, death

Surrogate marker examples: high total cholesterol, high LDL cholesterol

Many researchers simply accept that if you have these markers (high cholesterol or LDL cholesterol), you automatically have a higher risk of suffering or dying from a heart attack. The fact that there is no good evidence for this belief doesn’t deter them at all.

Every time you see a claim that something puts you at risk for heart attack, you can be pretty sure that nobody in the trial actually died from a heart attack but lots of people in the so-called risk group had high LDL cholesterol.

Correlation and Causation

Most people make the mistake, when they read nutrition research, of assuming that whatever intervention is being discussed must have caused the outcome.

Nothing could actually be further from the truth.

I often use this ludicrous example which I picked up somewhere…

Cases of drowning at the beach increase at the same time as ice cream sales increase. Is it reasonable, therefore, to assume that drownings are caused by ice cream consumption? Or even more ludicrous, that people drowning cause more people to buy ice cream?

In Mice

There is one group in which RCTs can be done: MICE.

While we may be able to learn some things from mouse studies, we are not mice and there are significant differences between the species. For example, mice do not tolerate ketosis well at all, whereas humans do.

If the trial was conducted in mice, a fair reply is something along the lines of: “That’s interesting, I’ll be sure to tell all my mouse friends.”

Basic Statistics

I’m not a statistician, so what follows is my understanding of what a friend who is a statistician explained to me. I hope it’s useful, albeit a bit simplistic.

Nutrition research studies contain statistics. And you know what Mark Twain said, “There are three kinds of lies: lies, damned lies and statistics.”

Often, statistics can be used to disguise the truth or make a small finding seem far bigger than it is. Hopefully, my simple explanation will help you to see through all this.

Relative versus Absolute Risk

In the media, research results are almost always reported using relative risk numbers. There is a really good reason for this: relative risk significantly magnifies the risk in the eyes of the reader.

I think this is best explained using an example:

In my imaginary study, I have 2 groups with 1000 people in each.

My intervention is to feed one group the nutrient I’m concerned will kill them (how I got this past an ethics committee is a separate issue).

After my intervention, 2 people in the control group died and 3 people in the intervention group died.

The relative risk in this example is the risk of dying if you’re in one group versus the risk if you’re in the other group.

We calculate relative risk by dividing the number who died in the intervention group by the number who died in the control group. On this basis, you have a relative risk of 1.5, usually reported as a 50% increased risk of dying if you eat the food of concern.

Absolute risk, however, tells a completely different story.

The absolute risk is simply the proportion of people in a group who suffered the outcome; the absolute risk increase is the difference between the two groups’ proportions.

We calculate it in my example by dividing 2 by 1,000 (=0.2%) for the control group, dividing 3 by 1,000 (=0.3%) for the intervention group and subtracting the first from the second (0.3% - 0.2% = 0.1%). So, the absolute increase in your risk of dying as a result of eating the food versus not eating it is 0.1%.

50% and 0.1% are very different numbers. Can you see why journalists and research organisation press departments like expressing results using relative risk?
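The arithmetic can be sketched in a few lines of Python. Note that each group's risk is calculated against its own 1,000 participants, not the combined total:

```python
# Imaginary study above: 1,000 people per group,
# 2 deaths in the control group, 3 in the intervention group.
control_deaths, intervention_deaths = 2, 3
group_size = 1000

control_risk = control_deaths / group_size            # 0.2%
intervention_risk = intervention_deaths / group_size  # 0.3%

# Relative risk: the ratio of the two risks.
relative_risk = intervention_risk / control_risk

# Absolute risk increase: the difference between the two risks.
absolute_risk_increase = intervention_risk - control_risk

print(f"relative risk: {relative_risk:.2f} (a {relative_risk - 1:.0%} increase)")
print(f"absolute risk increase: {absolute_risk_increase:.2%}")
```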

Any time you’re faced with such claims, grab your phone, open the calculator and work out the absolute risk. That’s the number you need to be interested in, not the headline grabbing relative risk.

As a quick validity check, it’s worth noting a rule of thumb often associated with the Bradford Hill criteria: there is little point looking further into an observational study that doesn’t report a relative risk of at least 2 - in other words, double the risk/incidence of whatever is being claimed.


Statistical Significance

In your life, if you refer to something as significant, it means it’s something big and has a noticeable impact on you or someone else. It’s not something you could safely ignore.

In a research paper, that’s not what it means.

A significant finding in nutrition research only means that the effect the researchers found was unlikely to be the result of pure chance.

I repeat: it might have no meaningful real-world effect at all; it was simply unlikely to be random.

Remember that next time you see a news report that scientists have found “a significant link between A and B.”

Confidence Intervals

If you read the actual data and the graphs, you’ll come across something referred to as confidence intervals (most of the time 95%CI). They’re usually represented on a graph by a little bar.

Confidence intervals are simply the researchers saying that, while they don’t know exactly where the true value for their intervention falls, an interval calculated this way will contain it 95% of the time.

The only thing that I really understood from my statistics friend was that if the confidence interval straddled 1, then you could be pretty sure that there was no difference between the control and intervention groups.
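Using the imaginary 2-versus-3-deaths study from earlier (the counts are invented), a standard 95% confidence interval for the relative risk, computed on the log scale as is conventional, shows what "straddling 1" looks like:

```python
import math

# Imaginary study: 2 deaths among 1,000 controls,
# 3 deaths among 1,000 in the intervention group.
control_deaths, intervention_deaths, n = 2, 3, 1000

rr = (intervention_deaths / n) / (control_deaths / n)

# Standard error of ln(RR) for a relative risk.
se = math.sqrt(1 / intervention_deaths - 1 / n + 1 / control_deaths - 1 / n)

# 95% CI: exponentiate ln(RR) +/- 1.96 standard errors.
lower = math.exp(math.log(rr) - 1.96 * se)
upper = math.exp(math.log(rr) + 1.96 * se)

print(f"RR = {rr:.2f}, 95% CI ({lower:.2f} to {upper:.2f})")
# The interval straddles 1, so this study cannot claim any real
# difference between the groups.
```

With so few deaths, the interval is enormous; the headline "50% increased risk" hides the fact that the data are compatible with anything from a large protective effect to a large harmful one.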


A Note About Risk

Just a quick note about risk.

The term “risk” is often used in studies. This might not always be the best term to use. It might be better to use the term “incidence” instead. Perhaps I’m nit-picking but “risk” is a rather loaded word.


Conclusion

There is a saying in the scientific world that observational data must be tested with interventional data.

The problem, when it comes to nutrition research, is that it’s all but impossible to gather real interventional data that isn’t confounded by the complexity of free-living human beings.

Whenever you’re confronted by someone claiming that a study shows something, you’d be wise to ask them whether they’ve read the study and whether they could supply you with a copy of the study (“So that I may learn something.”).

Nine times out of ten, you’ll discover that they’re just parroting something they’ve heard from people they trust.

Let’s all be honest, most of the time, we’re also just repeating things we’ve heard from experts we trust. Sending someone to watch a video of your favourite expert isn’t evidence, although it might get them thinking and that’s a good thing. Just be aware that they might ask you to watch the Game Changers pea protein commercial in return!

The bottom line is that there isn’t good evidence for anything in nutrition, other than what we can clearly see from the population-wide experiment launched with the USDA food guidelines in 1977: a grain-based diet that replaces animal fats with seed oils is a great way to make a lot of people metabolically sick.

I hope this is helpful. If you’re an expert on statistics and I’ve made any errors in my explanations, please get in touch; I’d love to explain these concepts more clearly.


Will Newton

In over twenty years of coaching, Will has coached everyone from absolute beginners to world champions. His interest in getting the best results for athletes who compete for the love of the sport, rather than as professionals, drives him to find the most effective ways to get results.
