Interpreting academic studies

Bad intelligence kills good soldiers.
Andromeda

I think that science matters, and that good science reporting might matter even more. The scientific news cycle sucks, so I’m always glad to find resources that give journalists actual guidance on how to report on scientific research.

On Journalist’s Resource, Justin Feldman published “Interpreting academic studies: A primer for media”. It is very well written and covers a lot of issues in little space. It reminds me a little of my one-page guide to dealing with psychological research (unfortunately, it’s in German), but Feldman’s text covers more issues.

And Feldman’s “Interpreting academic studies: A primer for media” is well worth reading if you are a journalist (including bloggers) or a scientist. Or even just a person interested in science.

For scientists, this primer provides topics you can raise with journalists before an interview to gauge their knowledge. If you notice that they do not know how science works, or what results mean, pointing them to this guide might help, provided you can pull it off without appearing condescending. Then again, even in entertainment journalism mistakes are hard to forgive. Unfortunately, it is easier to notice that a reporter confused the names of two actors than that scientific findings were misreported. And given that the primer sits on a journalists’ resource page, it might not only be easy to understand but also easier to accept.

BTW, if you are interviewed by journalists, ask to record the interview yourself as well. It can serve as proof in case you are misquoted, but it can also prompt some self-reflection on why a report got distorted. Did you explain it well? Did you misspeak? Personally, watching a lecture recording of myself was eye-opening in this regard. Even though I (and the evaluator) thought I had done a very good job, there were things I could have explained better.

After all, science is too consequential to be misreported, and the public needs access to understandable and correct scientific information. And personally, I still think that my two-page guide on doing better surveys was an interesting project. Perhaps I should rewrite it into an instrument for evaluating studies, not only for planning them.

 

P.S.:

I wrote a comment on Feldman’s primer, but unfortunately it apparently did not make it through moderation. Well, that’s the nice thing about having a personal blog; here it is:

Yes! A useful, very well written, and, as far as I can see, correct overview to recommend.

As a researcher, I frequently groan when I see scientific research reported in the news (okay, Twitter is way worse, but people there are not supposed to be professionals).

While the text is very good and can stand on its own, these suggestions might be helpful:

Agenda of the researcher
Take the agenda of the researchers into account. Science is a human endeavor; it is done by human beings who also pursue their own goals. Sometimes researchers act more like lobbyists who try to promote a certain agenda, occasionally with “good” intentions, but this can still deliberately or accidentally bias their results. There is also the pressure to get funding, which is needed to conduct studies, which lead to publications, and thus to a continued career and further research. Funding agencies have agendas as well, or rather the scientists doing the reviews for them (again, a human endeavor). It is not only companies that try to produce or prevent certain results; ideologies prevalent in Western society also try to influence research. And especially when it comes to surveys, a researcher can do a lot of (mis)guiding. For a two-page overview of how researchers can severely bias study results, see http://www.organizingcreativity.com/2013/12/better-surveys/ (link to my private blog).

“systematic reviews” and “meta-analyses”
These have their flaws as well. Usually only statistically significant results get published, which leads to publication bias. There is also the influence of the field on what gets published, i.e., the scientists sitting on the editorial boards and doing the reviews.
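(A quick simulation sketch to illustrate this point, added here rather than in the original comment. The true effect, sample size, and significance threshold are made-up values for illustration only.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

true_effect = 0.2   # assumed small true effect (Cohen's d), illustration only
n_per_group = 30    # assumed sample size per group
n_studies = 1000    # number of simulated studies

all_effects, published_effects = [], []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    t, p = stats.ttest_ind(treatment, control)
    # observed standardized effect of this single study
    d = (treatment.mean() - control.mean()) / np.sqrt(
        (treatment.var(ddof=1) + control.var(ddof=1)) / 2)
    all_effects.append(d)
    if p < 0.05:                      # "publish" only significant results
        published_effects.append(d)

print(f"True effect:                 {true_effect:.2f}")
print(f"Mean effect, all studies:    {np.mean(all_effects):.2f}")
print(f"Mean effect, published only: {np.mean(published_effects):.2f}")
```

With these made-up numbers, the “published” studies overstate the true effect considerably, simply because only the studies that happened to observe a large difference cross the significance threshold.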

Study Results
Regarding the study results: getting a statistically significant p-value is not that difficult if you can simply get more people to participate. The more people, the more likely these kinds of tests become significant, even for trivially small differences (although you cannot control the direction of the difference). That is why effect sizes are so important. As for whether a result is meaningful, I think it is important to look at the alternatives. A good scientist should know what the gold standard in treatment is and use that as the comparison, not a placebo. It is no virtue to be better than nothing (or than thinking you got a treatment) if there are interventions that actually have a large effect.
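(Again a small sketch, not part of the original comment, with an assumed, trivially small difference between two groups; the numbers are chosen only to show how the p-value, but not the effect size, depends on sample size.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true_effect = 0.05  # assumed trivially small difference (Cohen's d)

for n in (50, 500, 5000, 50000):     # participants per group
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_effect, 1.0, n)
    t, p = stats.ttest_ind(b, a)
    # standardized effect size stays tiny regardless of n
    d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    print(f"n per group = {n:6d}   p = {p:.4f}   Cohen's d = {d:.3f}")
```

The same negligible difference goes from “not significant” to “highly significant” as the sample grows, while Cohen’s d stays close to zero, which is exactly why the effect size, not the p-value, tells you whether a finding matters.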

Sample/generalization
Even with representative demographic variables, it is important to look for selection bias, including self-selection. For example, some very competitive schools are not necessarily better at *providing* an education if their students are already at a much higher educational level to begin with (pre-/post comparisons matter; see the sketch below). It is also interesting to look at the sample, or at the parts of the sample that were neglected, for example, only looking for incidents of violence in one group of people and not in the other.
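(One more sketch, added here and not in the original comment, with entirely hypothetical numbers for two schools, just to show why gains matter more than post-test scores.)

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # hypothetical students per school

# Hypothetical scenario: the "competitive" school admits students who
# already score higher, but both schools add the same amount of learning.
pre_competitive = rng.normal(70, 5, n)   # higher starting level
pre_regular     = rng.normal(60, 5, n)
gain = 10                                # identical teaching effect
post_competitive = pre_competitive + gain + rng.normal(0, 2, n)
post_regular     = pre_regular     + gain + rng.normal(0, 2, n)

# Post-test only: the competitive school looks much "better" ...
print(f"Post-test means: {post_competitive.mean():.1f} vs {post_regular.mean():.1f}")
# ... but the gains (pre/post comparison) are practically identical.
print(f"Mean gains:      {(post_competitive - pre_competitive).mean():.1f} "
      f"vs {(post_regular - pre_regular).mean():.1f}")
```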

Perhaps it would be helpful to give a short example for each point where false interpretation and reporting have led to negative consequences. After all, science is our best guide, and it does influence behavior; it has consequences for how people live their lives. And there is a difference between drawing attention to important issues and misinforming the public. In this sense, isn’t one remaining issue how to deal with the ‘market forces’ that seem to demand attention-grabbing headlines and articles (“click bait”)?

But as written, a very good overview. Hopefully, resources like this will one day break the scientific news cycle: http://www.phdcomics.com/comics/archive.php?comicid=1174 (link to PhD Comics).