How to Quickly Spot Dodgy Science

January 4, 2017 | The Conversation

Photo credit: Pixabay

Michael J. I. Brown, Monash University

I haven’t got time for science, or at least not all of it. I cannot read 9,000 astrophysics papers every year. No way.

And I have little patience for bad science, which gets more media attention than it deserves. Even the bad science is overwhelming: roughly 700 papers are retracted annually, and that figure grossly underestimates how much bad science is in circulation.

I, like most scientists, filter what I read using a few tricks for quickly rejecting bad science. No single trick is foolproof, but in combination they're rather useful, and they can help identify bad science in minutes rather than hours.

Okay, this looks bad

Good science is often meticulous and somewhat anxious. You discover something new or find something unexpected, and frankly you worry a lot about screwing up. Identifying and addressing what could plausibly go wrong, and then writing that up succinctly, takes time. Lots of time. Months. Even years.

If you’re taking the time to do meticulous science, why not take the time to prepare a good manuscript? Make nice-looking figures, proofread it a couple of times, and the like. It seems obvious enough, which is why a sloppy manuscript or poor grammar can be a warning sign of bad science.

Recently, Ermanno Borra and Eric Trottier claimed to have detected “signals probably from extraterrestrial intelligence”. I thought this was far-fetched, but the paper preprint still seemed worth a look. Immediate red flags for me were blurry graphs and figures whose captions weren’t on the same page.

Was my caution justified? Well, as I dug deeper into the paper there were other warning signs. For example, the results relied on Fourier analysis, a mathematical method that can be powerful but is also notorious for picking up artefacts from scientific instruments and data processing.
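To see why Fourier analysis warrants such caution, here is a toy sketch of my own (an illustration, not taken from Borra and Trottier's actual analysis). Add a weak periodic instrumental pickup to pure noise, and the Fourier power spectrum shows a sharp, convincing-looking peak at the instrument's cadence:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
flux = rng.normal(0.0, 1.0, n)            # pure noise: no real signal here

# Hypothetical artefact: weak electronic pickup repeating every 64 samples,
# e.g. from detector readout or data processing.
t = np.arange(n)
flux += 0.3 * np.sin(2 * np.pi * t / 64)

power = np.abs(np.fft.rfft(flux)) ** 2    # Fourier power spectrum
freqs = np.fft.rfftfreq(n)                # frequencies in cycles per sample

peak = freqs[1:][np.argmax(power[1:])]    # skip the zero-frequency term
print(f"Strongest periodicity: one cycle every {1/peak:.0f} samples")
# Prints 64 -- the instrument's cadence, not extraterrestrials.
```

A spectral peak on its own cannot distinguish an instrument from aliens; that takes careful checks against known artefacts and, ideally, independent data.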

Furthermore, the surprising conclusions relied on a tiny subset of data, and there was no attempt to confirm the conclusions with additional observations. If they were being meticulous, wouldn’t they have taken the time to collect more data and properly format their manuscript? I’m very sceptical of Borra and Trottier’s aliens, as are many of my colleagues.

Of course, there are exceptions, and good science doesn’t always look good. The announcement of the Higgs boson, which featured fantastic science, included slide designs that did not impress Vincent Connare, the creator of Comic Sans.

To be honest, I’m with Connare on the slides. However, this is a reminder that tricks for quickly flagging bad science are imperfect shortcuts, not absolute rules.

Obviously

“That’s obvious, why didn’t someone think of that before?”

Well, perhaps someone did.

It has recently been claimed that the expansion of the universe may not be accelerating, which seems at odds with some Nobel Prize-winning research. That claim relies on a statistical analysis of supernova data. However, such analyses are nothing new.

Enter the relevant keywords into a search engine and you’ll find many previous studies, none of which reached the unexpected conclusion. That’s a red flag right there.

So what happened? Well, one could study up on supernovae and cosmology, but experts on Twitter provided succinct explanations and informed responses.

In plain English, you can only claim that the evidence for the accelerating expansion of the universe is marginal if you make incorrect assumptions about supernova properties and brush aside other key observations. And thus facepalm.

Cosmologist Tamara Davis has noted that such omissions, accompanied by emphasis on a contrarian conclusion, tend to be misleading spin. Unfortunately, such omissions and erroneous assumptions turn up elsewhere too.

Did they just brush aside two decades of cosmology? Paramount

Rank Journals

You may be aware that some scientific journals come with a certain prestige. There are journal rankings, which typically place the journals Nature and Science near the top, and university rankings often use papers in prestigious journals as a proxy for quality.

I don’t care much for journal rankings myself. Nature and Science chase blockbuster results, which leads them to publish a few too many wrong and even fraudulent results. The contrarian supernova paper, for example, was published in Scientific Reports, an online, open-access, multidisciplinary journal from the publishers of Nature.

How do you tell which of these journal articles can be trusted? Shutterstock

While I don’t care for journal rankings, I do care about rank journals. If you submit your research to a decent journal, you have to assume you will (or could) get a meticulous editor and referee. That should force you to take some care with your research. However, if you know your paper will be accepted without proper peer review, then anything goes.

University of Colorado librarian Jeffrey Beall maintains a list of “predatory publishers”, which includes vanity academic publishers that provide a veneer of peer review. Effectively, it is Beall’s list of rank journals, and I treat papers in those journals with suspicion.

So I wasn’t too surprised when a journal on Beall’s list published a paper promoting the chemtrails conspiracy theory. And I wasn’t surprised to find that the paper had serious failings. For better and worse, my cynicism is justified all too often.

Michael J. I. Brown, Associate Professor, Monash University

This article was originally published on The Conversation. Read the original article.
