A recent study attempted to replicate the findings of one hundred journal-published, peer-reviewed psychology experiments. The results were overwhelmingly underwhelming:
All of the experiments the scientists repeated appeared in top-ranking journals in 2008 and fell into two broad categories, namely cognitive and social psychology. Cognitive psychology is concerned with basic operations of the mind, and studies tend to look at areas such as perception, attention and memory. Social psychology looks at more social issues, such as self-esteem, identity, prejudice and how people interact.
In the investigation, a whopping 75% of the social psychology experiments were not replicated, meaning that the originally reported findings vanished when other scientists repeated the experiments. Half of the cognitive psychology studies failed the same test. Details are published in the journal Science.
Oh my. Those are very bad numbers. What’s the explanation? In the words of one of the authors of the study, part of the problem lurks in the way we “do science” these days:
Munafo said that the problem of poor reproducibility is exacerbated by the way modern science works. “If I want to get promoted or get a grant, I need to be writing lots of papers. But writing lots of papers and doing lots of small experiments isn’t the way to get one really robust right answer,” he said. “What it takes to be a successful academic is not necessarily that well aligned with what it takes to be a good scientist.”
Indeed. One of the hallmarks of good science is repeatability, and as this study shows, repeatability is in short supply these days. Nor is this just a problem for the ivory tower, especially where psychology is concerned: psychology experiments can affect the way we see ourselves and each other. In one recent case, which may have been the spark for the 100 Experiments study, students at Stanford University attempted to replicate the results of a study claiming that merely having a conversation with gay or lesbian canvassers could change the minds of voters who opposed same-sex marriage. When the students couldn’t replicate the results and began to find irregularities in the data, they reached out to the original author, and everything unraveled from there:
The study reported that a 20-minute conversation with a gay or lesbian canvasser could sway someone’s views on same-sex marriage.
It drew from surveys collected by more than 1,000 volunteers over the course of five years, an initiative begun by the Los Angeles LGBT Center after Proposition 8 was approved by California voters.
But senior author Donald Green has withdrawn his support and retracted the study after learning that the data had been falsified by his collaborator Michael LaCour, a graduate student at the University of California, Los Angeles.
Is it possible that other research has been politically or ideologically motivated? At this point, it’s nearly a certainty.
So what’s the solution? As I’ve written many times before, science should not be aimed at confirming a consensus. Bureaucracies and committees, whether in the university or the civil government, rarely nurture good scientific work. Most good science has been and will continue to be the product of independent, skeptical minds who spend the time, lots of time, necessary to explore various hypotheses, some of them contradictory or politically incorrect.
The problem is, time is money. And other things cost money too. Most sociologists aren’t going to be able to fund experiments on their own, so they get grants from universities or the civil government. And, as they say, whoever pays the piper calls the tune.