Why some science is ignored (even when it's crucial)

March 25, 2016


This is an article from our predecessor, The Dirt Psychology.


I'll bet you can guess what kind of research gets more press. Research that gets RESULTS, am I right? No one wants to see research that doesn't 'prove' something (although you can never really 'prove' anything in science). Which is a shame, because for science to be truly self-correcting (and science is supposed to be self-correcting), it's estimated that about 90% of valid hypothesis tests should come up with null results (i.e. the experiment had no effect). Lucky us, then, because it turns out that in actual fact about 90% of published research reports positive results!

Uh oh...
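How bad is that mismatch? Here's a back-of-the-envelope sketch. Every number in it is an assumption for illustration (none come from the research above): say 1 in 10 tested hypotheses is actually true, a real effect gets detected 80% of the time, and we use the usual 5% significance cutoff.

```python
# Back-of-the-envelope: what share of honest tests *should* come up positive?
# All numbers are illustrative assumptions, not figures from the article.
base_rate = 0.10  # assume 1 in 10 tested hypotheses is actually true
power = 0.80      # chance a study detects a real effect when there is one
alpha = 0.05      # chance a null effect looks 'significant' anyway

true_positives = base_rate * power          # 0.10 * 0.80 = 0.080
false_positives = (1 - base_rate) * alpha   # 0.90 * 0.05 = 0.045
expected_positive = true_positives + false_positives

print(f"Expected positive-result rate: {expected_positive:.1%}")  # 12.5%
# Against the ~90% positive-result rate in published journals, that gap
# is what publication bias (and p-hacking, below) has to explain.
```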

This is a phenomenon known as publication bias, and while it's good for our entertainment, it's bad for the validity of our knowledge of the world. You see, the big bad scientific publishing companies (one of the largest evil conglomerates in the world, right next to Monsanto) tend only to accept papers with positive results, as those are the ones people buy. Now, scientists gotta get paid, and it ain't no thing for a particularly skilful statistician to work some jiggery-pokery on the results to get the 'right' answer: slice the data enough different ways, and something will cross the magic p < .05 line eventually. This kind of mendacious mathematics has become so prevalent it even gets its own name - p-hacking.
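And here's how little jiggery-pokery it takes. The sketch below is purely illustrative (invented variable names, no real data): both groups are drawn from the same distribution, so there is no real effect by construction, yet testing enough arbitrary outcomes still dredges up 'significant' findings.

```python
# A minimal p-hacking sketch: there is NO real effect in this data,
# but testing enough arbitrary outcomes will still "find" one.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)

n_participants = 30  # per group
n_outcomes = 20      # how many ways we slice/measure the data

significant = []
for outcome in range(n_outcomes):
    # Both groups drawn from the SAME distribution: the null is true.
    treatment = rng.normal(loc=0.0, scale=1.0, size=n_participants)
    control = rng.normal(loc=0.0, scale=1.0, size=n_participants)
    _, p = ttest_ind(treatment, control)
    if p < 0.05:
        significant.append((outcome, round(p, 3)))

# With 20 independent tests at alpha = .05, the chance of at least one
# false positive is 1 - 0.95**20, roughly 64%.
print(f"'Significant' findings from pure noise: {significant}")
```

Report only the outcomes that landed in `significant`, quietly forget the rest, and you've got yourself a publishable positive result.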

Humans can see the future

According to social psychologist Daryl Bem and the very well-regarded journal that published his paper, there's evidence that humans can see the future. Yep, precognition is a thing. Doesn't it seem odd, then, that you didn't already know that? Well, that might be because it's probably not true. Several academics have since tried to replicate Bem's work and found no evidence of the effect. That is, they ran the exact same study with different participants and found nothing. Unless Daryl has some kind of exclusive access to the precognitive community, that should tell you something. Yet the journal refused to publish those results. Dr Richard Wiseman, an abnormal psychologist (as in, he studies the psychology of weird stuff; he's not particularly peculiar himself), was one of the replicating academics, and he has since gathered a number of these replications to examine whether Bem's results hold up. He found nothing exciting. Publishing checks like that is the exact job you'd think a journal was designed for, no?

The good news

Many journals are taking a hardline approach to curb this sort of academic enthusiasm (read: perfidy). Looks like it's working, too: some meta-studies of meta-studies (if that's not bloody meta, I don't know what is) have found that more recent research is closer to that golden ratio of positive to negative results. But it's still a problem - particularly that p-hacking nonsense. So keep your eyes peeled. If a result looks too good to be true, it probably is.
"If it seems too good to be true, it probably is" - Winnie the Pooh (probably at some point).
Know what else is too good to be true? These seven simple conversation hacks. Well, actually, they aren't, because they are true (if not always that simple). While we're on the topic of other people's failings, I'd love to show you how other people failing can stop you from achieving (even if it has nothing to do with you). Turning scholarship into wisdom without the usual noise and clutter, we dig up the dirt on psychological theories you can use. Become an armchair psychologist with The Dirt Psychology.
