Copyright 2015 NPR. To see more, visit http://www.npr.org/.

Transcript

STEVE INSKEEP, HOST:

People seeking answers to science questions face a constant reality. Many science experiments come up with fascinating results, but those results fail to replicate more often than you'd think. David Greene spoke with NPR's Shankar Vedantam.

DAVID GREENE, HOST:

So here's the deal. Researchers recently tried to replicate a hundred experiments in psychology that were published in three leading journals. And Shankar's here to talk about that. Shankar, what did they find?

SHANKAR VEDANTAM, BYLINE: They found something very disappointing, David. Nearly two-thirds of the experiments did not replicate, meaning that scientists repeated these studies but could not obtain the results that were found by the original research team.

GREENE: Two-thirds of these original studies, at least some of which presumably drew attention, actually turned out to be false when replication was tried.

VEDANTAM: Yeah, so calling them false is one explanation, David. In fact, there have been some really big scandals recently where researchers have been found to have fabricated evidence and data. So that's, you know, one possibility. But I was speaking with Brian Nosek. He's a psychologist at the University of Virginia. He organized this massive new replication effort. He offered a more nuanced way to think about the findings.

BRIAN NOSEK: Our best methodologies to try to figure out truth mostly reveal to us that figuring out truth is really hard. And we're going to get contradictions. One year, we're going to learn that coffee is good for us. The next year, we're going to learn that it's bad for us. The next year, we're going to learn we don't know.

VEDANTAM: When you fail to reproduce a result, David, you know, the first thing we think is, OK, this means the first study was wrong. But there are other explanations. It could be the second study was wrong. It could be that they're both wrong. Nosek said it's also possible that both studies are actually right. To use his example, maybe coffee has effects only under certain conditions. When you meet those conditions, you see an effect. When you don't meet those conditions, you don't see an effect. So Nosek says when we can't reproduce a study, it's a sign of uncertainty, not a sign of untrustworthiness. It's a signal there's something going on that we don't understand.

GREENE: Well, Shankar, how do scientists respond when their work is checked and, in some cases, disproven?

VEDANTAM: You know, they respond defensively, David. And perhaps it's not surprising. Nosek told me that one of his own studies was tested for replication, and the replication didn't work. I asked him how he felt about his earlier work being shot down.

NOSEK: We are invested in our findings because they feel like personal possessions, right? I discovered that. I'm proud of it. I have some motivation to even feel like I should defend it. But of course, all of those things are not the scientific ideal. There isn't really an easy way to not feel bad about those things because we're human. And these are the contributions that we as individual scientists make.

GREENE: You know, Shankar, if this is a healthy process for scientists to be constantly checking one another, I mean, one problem I see is that that doesn't happen very often, because most scientists want to sort of be doing original research, not spending a career looking at other research and trying to see if it was right or not.

VEDANTAM: That's exactly right, David. That's one of the big goals that Nosek is focusing on with this new initiative. Research journals also have a big incentive to publish new findings, not necessarily to publish reproductions of earlier findings. Many science organizations are trying to figure out ways to change the incentives so that both researchers and science journals publish more reproductions of earlier work, including results that are mixed or confusing.

GREENE: OK, I mean, this is all well and good if we're to understand that each time a new study comes out, maybe we should view it as sort of an ongoing search for the truth. But does that mean that when we see a big headline about some study, we should just ignore it?

VEDANTAM: Well, this is the problem, David. I think many of us look to science to provide us with answers and certainty when science really is in the business of producing questions and producing more uncertainty. You know, as I was listening to Nosek talk about science, David, I realized there are parallels between the practice of science and the practice of what we do as journalists. You know, we paint a picture of the world every day, whether that's a war zone or financial markets. But we're always doing it in the context of imperfect information. And especially when we're covering things we don't know much about - you know, a big breaking story, what we discover in the first few days is likely to get revised down the road. Now, you can throw up your hands and say, let's not waste time reading or listening to the first draft of history. Let me just wait a month or a year for the whole picture to emerge. But I think most people would say the best information is still valuable, even if it's going to get updated tomorrow. We need to think about scientific studies the same way.

GREENE: Shankar, thanks, as always.

INSKEEP: A new study finds that that is our social science correspondent, Shankar Vedantam, talking with David Greene on MORNING EDITION from NPR News. Transcript provided by NPR, Copyright NPR.
