"Where did you read that and where can I find a copy?"
I find myself asking those questions a lot lately. Whether someone is talking about Covid, a new nutrition "documentary" on Netflix, or any other health-related topic, they always seem to cite some type of "scientific evidence" to substantiate their claims. However, when people are challenged to produce the actual science behind what they are claiming, they generally fall short and revert to something like "well, I heard it on a podcast" or "I saw it on the news."
These days it is more important than ever for people to be able to read and interpret a study. Whether we are trying to sift through baseless and often absurd marketing claims from supplement companies, trying to make informed decisions about what we eat, or trying to find the best way to exercise, we must be able to do a basic analysis of what the studies actually say. While this post will not be exhaustive, I hope to lay out a basic framework for you.
First, a few ground rules before we get started. Number 1: a single study does not *PROVE* anything. The media are experts at announcing what a "new study shows...." Nothing perks up my skeptical ear more than hearing someone say that a study "shows" or a study "proves" something. It takes a body of evidence to begin to paint a picture (more on that later). Number 2: nothing is ever really "proven." Science is meant to give us evidence-based models and statistical probabilities of what happens if we do "x, y, or z." Last, the science is never "settled." Despite what you hear from marketers, advertisers, or the media, true scientists know that science is never settled on a given topic (especially something as dynamic and complex as human physiology).
Next, let's get a basic understanding of some different types of studies.
This image from www.foodinsight.org is a good starting point. You'll notice that at the bottom are expert opinion and case report studies. These are the beginning point of scientific research. For these examples I will use nutrition as a comparison point. All too often, people are swayed toward a particular diet based on the opinion of an "expert," or they will cite numerous "studies" that were actually case studies. Case studies are simply the experience of an individual or a small group of individuals who underwent a single intervention and reported some type of unexpected (positive or negative) outcome. Case studies are often the beginning point of research on a topic, but they should never be the end point. They can reveal very interesting responses to interventions, but they lack something essential to stronger study designs: a control group. The further down we travel in the above image, the more likely the study is to contain bias from the individual publishing it. Take case studies, for example. If a particular physician has seen 20,000 people in their practice but published case studies showing a particular outcome in 5 different patients, is that relevant grounds for making recommendations? 5 out of 20,000? What happened with the other 19,995?
Next up the ladder we have observational studies, also known as population studies. These studies track different groups of people over a period of time and compare the outcomes of the groups at the end. While observational studies are sometimes good for finding correlations, they were never meant to prove causation (despite how they are interpreted by the media). Let's say we have two groups of 50 participants each. One group is following diet "x" and the other is following diet "y." Let's also say that 30 of the 50 people in group "x" are smokers, and that 40 of the 50 people in group "y" regularly participate in vigorous exercise. I can already tell you that group "y" will have improved outcomes based on those two variables alone. If we follow these two groups for ten years, we will likely see that the exercisers outlive the smokers. Obvious, right? These are known as confounding variables. While statisticians will do their best to account for them before publishing the study, the lack of consistency between the two groups will create a lot of variation in the outcomes, and we will never be able to say for sure whether "x" or "y" is the healthier diet.
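To make the smoker/exerciser example concrete, here is a small simulation. All of the numbers (baseline score, smoking penalty, exercise benefit) are made up for illustration; the point is only that a confounder can make one diet look "healthier" even when the diet itself has zero effect.

```python
import random

random.seed(42)

def simulate_group(n, smokers, exercisers):
    """Average 10-year health score for one diet group.

    The diet contributes nothing here; outcomes are driven entirely
    by smoking (harmful) and exercise (helpful). Effect sizes are
    hypothetical, chosen only to illustrate confounding.
    """
    scores = []
    for i in range(n):
        score = 70.0                 # arbitrary baseline health score
        if i < smokers:
            score -= 15              # made-up smoking penalty
        if i < exercisers:
            score += 10              # made-up exercise benefit
        score += random.gauss(0, 5)  # individual variation
        scores.append(score)
    return sum(scores) / n

# Group "x": 30 of 50 smoke. Group "y": 40 of 50 exercise vigorously.
avg_x = simulate_group(50, smokers=30, exercisers=0)
avg_y = simulate_group(50, smokers=0, exercisers=40)

print(f"diet x average score: {avg_x:.1f}")
print(f"diet y average score: {avg_y:.1f}")
# Diet "y" comes out ahead even though diet had no effect at all.
```

Notice that the diet variable never appears in the scoring at all, yet a naive comparison of the two averages would "show" that diet "y" is better. That is confounding in a nutshell.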
Last up the ladder we get into trials and meta-analyses. Trials have something very important for studies: control and comparison groups. Instead of doing a long-term study comparing "x" vs. "y" as found in the wild, we recruit people and randomly assign them to groups. One group follows diet "x," one group follows diet "y," and one group follows control diet "z." After a period of watching people follow these diets, and after controlling for variables by selecting similar populations, we can observe the outcomes at the end and draw better conclusions. Once enough of these trials exist, we can do what is known as a systematic review or meta-analysis, combining the data from all of the studies to paint a better picture of which intervention is truly better. I'll let you in on a little secret in the nutrition world: when these trials are performed and the diets are compared to a control group (usually controlled for calorie intake), most of the diets perform about the same as the calorically restricted controls. This is why we usually start people off by simply controlling for calories when beginning lifestyle changes.
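The random-assignment step described above can be sketched in a few lines. The participant names and diet labels below are placeholders; the key idea is that shuffling before splitting balances hidden variables (like smoking or exercise habits) across the arms on average.

```python
import random

random.seed(7)

# Hypothetical pool of 30 recruits (names are placeholders).
participants = [f"participant_{i}" for i in range(30)]

# Randomization: shuffling first means no one chooses their own arm,
# so confounders spread roughly evenly across the three groups.
random.shuffle(participants)

# Split the shuffled pool into three equal arms.
arms = {
    "diet_x": participants[0:10],
    "diet_y": participants[10:20],
    "control": participants[20:30],
}

for arm, members in arms.items():
    print(arm, len(members))
```

This is the basic move that separates a randomized trial from an observational study: the groups are created by chance, not by the habits people already had.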
Now, how do we actually read a study? Don't start with the introduction or conclusion. The intro and conclusion tip you off to the author's opinion of the data before you have actually seen it. Instead, start with the materials and methods and the results. Look at the population (how many subjects, were they people or rats, and how heterogeneous was the group?). Then read what they actually did in the study. Did they test an intervention on 3 people with no control group? That probably isn't a very strong study. You can skim these sections in the abstract, but remember what I always say: reading the abstract is not the same as reading the study. You have to download the study (sometimes they're expensive) and actually read it. Once you have gone through materials and methods and results, NOW you can go back and read the intro and conclusion and ask yourself: does this line up with what the study actually showed? While it is nice to think that the peer review process weeds out opinion-based introductions and conclusions, that is not always the case. Everyone has bias, including researchers and peer reviewers.
The truth is, reading studies is time consuming, difficult, and often expensive, but it is incredibly important to be able to read and decipher things for yourself rather than trusting what you see in advertisements, media, government, and Netflix "documentaries" (I put quotes around documentary because very few of them actually are documentaries; they're usually movies made for entertainment). Reading studies will usually send you down a search-engine rabbit hole of dissecting terms, parsing statistical methods, and cross-referencing citations, but I can't overstate how important it is to learn how to do it.