But is the damage already done?
Remember the study that blew up in the news a few months ago saying that less than half of psychology studies had reproducible results? Well, Harvard researchers decided to take a closer look, and they found that the study itself was filled to the brim with blatant errors and biased research methods.
The original research team, known as The Open Science Collaboration (OSC), consisted of 270 scientists who tried to replicate the results of 100 published psychology studies to determine the reliability of psychological science. When more than half of the studies failed to replicate, psychology took a major blow as headlines around the world bashed the science for being unreplicable.
But after an in-depth analysis of the data, Harvard researchers now reveal that the OSC made some blatant errors that make the study’s damage to psychology completely unwarranted.
First, the methods of many of the replication studies were remarkably different from the originals — so how could the results be expected to be the same?
The Harvard researchers did the math and found that the “low-fidelity studies” were four times more likely to fail than “high-fidelity” ones, so they argue that the OSC researchers caused their own studies to fail by straying from the original methods.
Further, the OSC used a “low powered” design, and when the Harvard researchers applied this design to a data set known to have a high replication rate, even the highly replicable data set appeared irreproducible.
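The low-power point is easy to see in a toy simulation. The sketch below is purely illustrative and is not the OSC's or the Harvard team's actual analysis: it assumes a genuine, medium-sized effect (Cohen's d = 0.5, a made-up choice) and uses a simple z-test approximation, then shows how often a "replication" at various sample sizes would detect that real effect. At small sample sizes, a large share of replications of a perfectly real effect come up empty.

```python
import random
import statistics

random.seed(42)

def detection_rate(true_effect, n, trials=2000):
    """Estimate how often a two-group study with n subjects per group
    detects a real effect of the given size (two-sided, alpha = 0.05),
    using a simple z approximation."""
    hits = 0
    for _ in range(trials):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(true_effect, 1) for _ in range(n)]
        se = (statistics.pvariance(a) / n + statistics.pvariance(b) / n) ** 0.5
        z = (statistics.mean(b) - statistics.mean(a)) / se
        if abs(z) > 1.96:
            hits += 1
    return hits / trials

# A real medium effect (d = 0.5): the "replication rate" is mostly
# a function of sample size, not of whether the effect exists.
for n in (20, 50, 200):
    print(f"n = {n:3d} per group -> detected in "
          f"{detection_rate(0.5, n):.0%} of simulated replications")
```

Under these assumptions, small-sample replications miss the real effect much of the time, so a batch of underpowered replications will look like a low replication rate even when every underlying effect is genuine.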
According to the press release, “The OSC's design was destined from the start to underestimate the replicability of psychological science.”
"If you want to estimate a parameter of a population," said Harvard researcher Gary King, "then you either have to randomly sample from that population or make statistical corrections for the fact that you didn't. The OSC did neither."
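King's sampling point can also be sketched in a few lines. The numbers below are hypothetical, not the OSC's data: imagine a population of studies, each with its own probability of replicating. A random sample estimates the population's replication rate well; a convenience sample that over-represents fragile studies, with no statistical correction, does not.

```python
import random

random.seed(0)

# Hypothetical population of 10,000 "studies", each with its own
# probability of replicating. The true replication rate is the mean.
population = [random.betavariate(5, 2) for _ in range(10_000)]
true_rate = sum(population) / len(population)

# A random sample of 100 studies estimates the rate well.
sample = random.sample(population, 100)
random_estimate = sum(sample) / len(sample)

# A convenience sample drawn only from the 2,000 most fragile
# studies badly underestimates it.
fragile = sorted(population)[:2000]
biased = random.sample(fragile, 100)
biased_estimate = sum(biased) / len(biased)

print(f"true rate:        {true_rate:.2f}")
print(f"random sample:    {random_estimate:.2f}")
print(f"biased sample:    {biased_estimate:.2f}")
```

The gap between the last two numbers is the point: without random sampling or a correction for non-random selection, the estimate reflects how the studies were chosen, not the population being described.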
The Harvard professors give a particularly awful example of how the OSC botched the study replication process:
The original study involved white students at Stanford University who watched a video of four other Stanford students (three white, one black) discussing admissions policies. During the discussion, one of the white students made offensive comments about affirmative action, and the researchers noted that the observers looked significantly longer at the black student when they believed he could hear the others’ comments.
"So how did they do the replication? With students at the University of Amsterdam!" researcher and Harvard psychology professor Daniel Gilbert said. "They had Dutch students watch a video of Stanford students, speaking in English, about affirmative action policies at a university more than 5000 miles away."
Think that’s bad? That’s not even the worst part, says Gilbert.
He notes that a deeper dive into the data reveals that the replicators themselves realized that performing this study in the Netherlands might be a problem, so they decided to run another version of it in the United States.
“And when they did, they basically replicated the original result. And yet, when the OSC estimated the reproducibility of psychological science, they excluded the successful replication and included only the one from the University of Amsterdam that failed,” says Gilbert.
“So the public hears that 'Yet another psychology study doesn't replicate' instead of 'Yet another psychology study replicates just fine if you do it right and not if you do it wrong,' which isn't a very exciting headline.”
The OSC paper had a major impact on the field of psychological science — it led to policy changes at many scientific journals and it was Science magazine’s number three “Breakthrough of the Year” across all fields of science. Plus, it seriously eroded the public perception of psychology.
“So it is not enough now, in the sober light of retrospect, to say that mistakes were made,” says Gilbert. “These mistakes had very serious repercussions. We hope the OSC will now work as hard to correct the public misperceptions of their findings as they did to produce the findings themselves.”