
Which Health Studies Should You Believe?

Use these four questions to assess ever-changing scientific information

By Rita Rubin

Name your poison or your pleasure and you’re likely to find conflicting research findings about its health benefits or harms.

One day, for example, the top health story in the news might be about how chocolate and/or red wine is good for you. The next day it might be about how chocolate and/or red wine isn’t good for you.

How are you supposed to figure out what to believe? It’s difficult, considering that even scientists have a tendency to overstate the significance of their own findings.

(MORE: How to Interpret a Breaking Medical News Story)

“Everyone wants to think that what they have found has important implications,” says Dr. Barnett Kramer, director of the division of cancer prevention at the National Cancer Institute.

Kramer and Dr. Steven Woloshin, a Dartmouth professor of family and community medicine, talked with Next Avenue about how to separate the whole wheat from the chaff when it comes to medical research. (Woloshin and Dartmouth colleagues Dr. Lisa Schwartz — who’s also his wife — and Dr. H. Gilbert Welch co-authored Know Your Chances: Understanding Health Statistics, which can be downloaded from the PubMed Health site.)

Here are the four questions that Kramer and Woloshin say you need to ask when determining the significance of a new health study:

(MORE: Health Information on the Web: Can It Be Trusted?)

1. What kind of a study is it? Laboratory research involving cells or rats can help guide scientists, but humans are a lot more complicated. Unfortunately, just because a drug kills tumor cells in a laboratory doesn’t mean it will work in cancer patients.

To determine whether research findings in animals will hold up in humans, scientists might conduct a randomized controlled trial — considered the gold standard of medical research. In this type of study, people are randomly assigned to different treatment groups; a common randomized controlled trial compares a medication with a sugar pill, or placebo. The goal of randomization is to end up with similar groups of people whose only difference is the type of pill they’re asked to take.
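
To make the idea concrete, here is a minimal sketch of random assignment in Python. The participant names and group sizes are hypothetical, and real trials use far more careful allocation schemes; the point is simply that shuffling, rather than choice, decides who lands in each group.

```python
import random

# Hypothetical roster of 100 trial participants.
participants = [f"participant_{i}" for i in range(1, 101)]

# Shuffle so that assignment has nothing to do with age, health,
# lifestyle or anything else that might otherwise skew the groups.
random.shuffle(participants)

treatment_group = participants[:50]  # asked to take the medication
placebo_group = participants[50:]    # asked to take the sugar pill

print(len(treatment_group), len(placebo_group))  # 50 50
```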

But many of the headline-grabbing medical studies are so-called observational studies. They involve people who, on their own, decided whether they wanted to, say, take vitamin supplements or eat blueberries or drink diet soda.

While observational studies can lead to ideas for randomized controlled trials, they can’t prove that something causes something else. That’s because there could be other differences between people who choose to do something and those who don’t.

Hormone replacement therapy was a prime example of this. Observational studies suggested that the risk of heart disease in postmenopausal women who took estrogen or estrogen plus progestin was half that of postmenopausal women who didn’t take hormones. But the landmark, government-funded Women’s Health Initiative — a randomized trial of more than 16,000 women — found that neither estrogen alone nor estrogen plus progestin protected women from heart attacks or from dying of heart disease.

Of course, sometimes it’s unethical, if not impossible, to conduct a randomized controlled trial. And although a single observational study doesn’t prove anything, Kramer notes that consistent findings in a large body of well-designed observational studies can be convincing. For example, no one ever conducted a randomized trial in which half the participants were told to smoke tobacco cigarettes and the other half were told not to smoke, but a huge body of evidence from observational studies has consistently found smoking to be harmful.

2. What is the source of the information in the news story? More and more, it seems, news organizations are running stories about studies based solely on press releases from the scientists’ academic institution or the journal publishing the research.

“As news organizations have had increasing problems with budgets, there’s an increasing tendency to get out press releases as news,” Kramer says. “Some of the text is directly lifted from press releases, because it’s so much quicker to do that.”

You usually — but not always — can tell whether a news story is based on a press release. The article might list its sources at the end, or the body of the story might mention that a quote came from a statement, which is another term for a press release.

“It’s a great way to get the information out to a large number of writers quickly, but there are so many bad press releases,” Woloshin says.


For example, he says, “when you look at the press releases, the usual formula is to include a quote from the researcher. Those quotes are often terrible. They really spin things and exaggerate the implications of the finding. They really kind of undo whatever is objective about the research.”

You could look up the abstract for the study or in some cases even read the entire paper on PubMed. But, Kramer cautions, “abstracts are sometimes a distant derivative of the actual evidence.”

He suggests several sources of unbiased information about research. For information about cancer, check the National Cancer Institute’s PDQ, for which Kramer serves as editor-in-chief. He also recommends checking out the U.S. Preventive Services Task Force, an independent panel of experts in prevention and evidence-based medicine.

(MORE: Overdoing It on Fish Oil May Raise Men's Cancer Risk)

3. Where did the scientists report their findings? Big scientific meetings garner a lot of media coverage, but sometimes the preliminary research presented at them doesn’t pan out, Woloshin notes.

In other words, he says, don’t be surprised if early reports about “breakthroughs” — always be skeptical when you see that word — fade away or change considerably as scientists figure out what’s really going on.

4. How big is the reported benefit or the harm? Coverage of a study might tout that a drug cut the risk of a disease in half compared to another drug or a placebo. But that statistic, known as the “relative risk,” doesn’t really tell you much. What you need to know is the “absolute risk.”

Woloshin uses shopping to illustrate the difference. A 50 percent-off sale sounds like quite a deal, but if the item costs only a buck to begin with, you’re unlikely to drive far to buy it. On the other hand, a 25 percent-off sale could save you a lot of money if the item is, say, a $20,000 car.

So cutting the risk in half might sound wonderful, but if the risk is only one in a million to begin with, cutting it in half isn’t such a big deal, especially if the additional benefit comes with a hefty price tag, either in dollars or side effects.
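
A few lines of arithmetic make the distinction plain. This sketch uses made-up numbers, not figures from any study Kramer or Woloshin cites, to show how a headline-friendly relative risk reduction can correspond to a negligible absolute one.

```python
def risk_summary(risk_without, risk_with):
    """Compare two risks: relative reduction vs. absolute reduction."""
    relative_reduction = (risk_without - risk_with) / risk_without
    absolute_reduction = risk_without - risk_with
    return relative_reduction, absolute_reduction

# Hypothetical drug cuts a disease risk from 2 in a million
# to 1 in a million.
rel, abs_ = risk_summary(2 / 1_000_000, 1 / 1_000_000)

print(f"Relative risk reduction: {rel:.0%}")    # 50% -- sounds dramatic
print(f"Absolute risk reduction: {abs_:.7f}")   # about one in a million
```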

You can’t always depend on your doctor to understand the difference between relative and absolute risk, Kramer says. “There is evidence that physicians are as easily misled by relative risks as the public,” he adds.

Rita Rubin is a former USA Today medical writer who now writes about health and science for publications including Next Avenue, U.S. News, WebMD and NBCNews.com.