Deception & Misdirection
Not letting ethics get in the way of a good science story or irreproducibility in the way of a good study
[Continuing our series on deception in politics and public policy.]
Due diligence in press coverage of scientific studies? Most of the time, reporters can’t be bothered.
Any journalist who writes a first-generation story based on a scientific study is attesting to the fact that he or she (a) has read the study and (b) has concluded independently that the study is, in all probability, valid—that the study was actually conducted as the study’s author claimed, that proper scientific procedures were followed as the study was carried out, and that the results were fairly analyzed.
(“First-generation” means that the story is the first one to report on the results of the study, or is among the first wave of such stories. Requiring the same of every journalist who later cites a study-based story would be an impossible standard to meet, but even second-generation and later stories should display appropriate skepticism.)
The journalistic standard is: “If your mother tells you she loves you, check it out.” Yet, when it comes to their stories based on supposed scientific studies, reporters almost never bother to read the studies.
Here’s an example. In September 2014, hundreds of newspapers published a version of this story (http://www.nytimes.com/2014/09/09/us/climate-change-will-disrupt-half-of-north-americas-bird-species-study-says.html ):
The Baltimore oriole will probably no longer live in Maryland, the common loon might leave Minnesota, and the trumpeter swan could be entirely gone.
Those are some of the grim prospects outlined in a report released on Monday by the National Audubon Society, which found that climate change is likely to so alter the bird population of North America that about half of the approximately 650 species will be driven to smaller spaces or forced to find new places to live, feed and breed over the next 65 years. If they do not — and for several dozen it will be very difficult — they could become extinct.
The four Audubon Society scientists who wrote the report projected in it that 21.4 percent of existing bird species studied will lose “more than half of the current climatic range by 2050 without the potential to make up losses by moving to other areas.” An additional 32 percent will be in the same predicament by 2080, they said.
Politico considered its version of the story so important that, in its print edition, it placed the story on the front page above the name of the publication.
The stories about the Audubon report shared one important characteristic: Not a single one of them was written by a reporter who had read the report. How do I know that? Because the report wasn’t finished. At the time of the press coverage, it was still being reviewed for publication. (The New York Times article had a link that appeared to lead to the study, but it actually led to an Audubon Society webpage containing a cartoon that explained the study.) A scientific study that hasn’t completed the peer-review process isn’t a real study that can be cited, as has been noted by Global Warming activist and Harvard professor Naomi Oreskes.
Only a few news stories about the Audubon study mentioned that it wasn’t actually finished yet. One exception: Michele Berger’s piece on the Weather Channel website (https://weather.com/science/news/climate-change-could-hurt-half-north-americas-birds-20140909 ), which noted: “Though the work hasn’t yet been published in a peer-reviewed journal, [the lead author] said two manuscripts are in the late stages of the process and a third is about to be submitted.”
The news stories about the Audubon report were based not on the study itself, but on the press release about the report, or on other news reports about the Audubon report, or, perhaps, on that cartoon on the Audubon website.
The next time you see an article based on a scientific study, try to find a copy of the study. Often, it’s nowhere to be found, or it’s behind a paywall, which keeps out any reporter who can’t or won’t pay $20 or $30 or whatever the fee is. (That’s pretty much all reporters.)
Even if a reporter can find a copy, that wouldn’t help much in most cases. As is clear from coverage of Global Warming, the percentage of reporters who are competent on matters of science—who can read a scientific study and make an informed judgment about its validity—is very small.
With any scientific study, there are questions about whether the study was well designed, and whether it was well conducted, and whether the resulting data were interpreted reasonably. The key, as it is in all science, is replicability (reproducibility). (A magazine of science-related satire is named The Journal of Irreproducible Results.)
The key question is: If someone else does the same research or conducts the same experiment, will he or she get the same result?
If it’s not replicable, it’s not science.
Often, it’s not replicable.
Consider a recent effort by The Reproducibility Project.
Smithsonian magazine (http://www.smithsonianmag.com/science-nature/scientists-replicated-100-psychology-studies-and-fewer-half-got-same-results-180956426/ ) reported on an effort to replicate studies in peer-reviewed psychology journals.
According to work presented today in Science, fewer than half of 100 studies published in 2008 in three top psychology journals could be replicated successfully [in fact, almost 60 percent of the replication attempts failed]. The international effort included 270 scientists who re-ran other people’s studies as part of The Reproducibility Project: Psychology, led by Brian Nosek of the University of Virginia.
The eye-opening results don’t necessarily mean that those original findings were incorrect or that the scientific process is flawed. When one study finds an effect that a second study can’t replicate, there are several possible reasons, says co-author Cody Christopherson of Southern Oregon University. Study A’s result may be false, or Study B’s results may be false—or there may be some subtle differences in the way the two studies were conducted that impacted the results.
“This project is not evidence that anything is broken. Rather, it’s an example of science doing what science does,” says Christopherson. “It’s impossible to be wrong in a final sense in science. You have to be temporarily wrong, perhaps many times, before you are ever right.”
Across the sciences, research is considered reproducible when an independent team can conduct a published experiment, following the original methods as closely as possible, and get the same results. It’s one key part of the process for building evidence to support theories. Even today, 100 years after Albert Einstein presented his general theory of relativity, scientists regularly repeat tests of its predictions and look for cases where his famous description of gravity does not apply.
“Scientific evidence does not rely on trusting the authority of the person who made the discovery,” team member Angela Attwood, a psychology professor at the University of Bristol, said in a statement. “Rather, credibility accumulates through independent replication and elaboration of the ideas and evidence.”
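The point Christopherson makes above, that a failed replication does not by itself prove the original finding false, can be illustrated with a small simulation. The sketch below (my own illustration, not the Reproducibility Project’s method; the effect size of 0.4, samples of 20 per group, and the critical t-value are assumed for the example) shows that even when an effect is perfectly real, small studies that detect it often cannot be replicated on a second try:

```python
import math
import random

random.seed(42)

def t_stat(a, b):
    # Pooled two-sample t statistic.
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / (sp * math.sqrt(1 / na + 1 / nb))

def experiment(effect=0.4, n=20, t_crit=2.024):
    # One study: a real but modest effect, 20 subjects per group.
    # t_crit ~ 2.024 is the two-sided 5% cutoff for df = 38.
    ctrl = [random.gauss(0.0, 1.0) for _ in range(n)]
    treat = [random.gauss(effect, 1.0) for _ in range(n)]
    return abs(t_stat(treat, ctrl)) > t_crit

trials = 2000
originals = [experiment() for _ in range(trials)]
# For each "published" (statistically significant) original,
# run one independent replication attempt of the same design.
replications = [experiment() for ok in originals if ok]
rate = sum(replications) / len(replications)
print(f"significant originals: {sum(originals)}/{trials}")
print(f"replication success rate: {rate:.0%}")
```

With these assumed numbers, only roughly a quarter of the “published” findings replicate, even though the underlying effect is genuinely there in every trial. The study’s statistical power, not its honesty, is doing the damage, which is exactly why a single failed replication settles nothing on its own.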
Digital Journal described the replication effort this way (http://www.digitaljournal.com/science/why-some-science-is-actually-bad-science/article/455538 ):
Good science needs to be repeatable. Sometimes claims made in journals cannot be replicated. One of the reasons for publishing science papers is so another qualified scientist can replicate the research. The experimental claims made don’t always stack up. One group from Stanford University recently attempted to reproduce the findings of 100 psychology papers. They only managed to achieve similar results for 39 of the studies, meaning that around 60 percent of the described studies were so poorly constructed they could not be proven.
It’s a growing scandal in science, that a lot of science—particularly with regard to controversial political issues—isn’t actually science. About which, more in a subsequent column.