— Kevin B. Korb
Bad science comes in a number of varieties, at least including the following:
- Sloppy science. This might include poor experimental design, poor measurements, slovenly reasoning, insufficient power in one's tests, failure to blind experimenters or subjects, etc. Presumably, the intentions are right, but the execution is wrong.
- Pseudo-science. This is fake science. The fakery may be intentional or unintentional. For example, cultists may intentionally generate some large-scale fantasy, while their followers unsuspectingly take it seriously. If the pseudo-scientific methods employed have the look and feel of science, this is due to simulation or accident, not to the proper employment of scientific methods. For Karl Popper, demarcating real from pseudo-science was a kind of mission. He proposed a "falsificationist" criterion: theories which were (or could be) protected from any possible contrary evidence were non-scientific. Unfortunately, this could never quite be made to work; there are no logical limits to what can be defended, or not, since, as Quine put it, all of our ideas are tied together in a "Web of Belief" (Quine and Ullian, 1978). Still, Popper was certainly on to something: those, such as climate change deniers, who spin excuses and rationalizations no matter what the evidence, may be good propagandists, but they are not good scientists.
- Cheats. This is also fake science, but most likely aimed not at promoting a false story about the world, but at promoting a false story about the researcher.
Ben Goldacre's book Bad Science (Fourth Estate, 2009) treats miscreants and violators of scientific method primarily in the first two categories. Being a journalist (and MD), he, perhaps naturally, focuses largely on the aberrations and violations perpetrated by journalists. On his account, they've done quite a lot of damage. For example, around 2005 there were repeated scandals in the UK concerning rampant MRSA in UK hospitals, but the findings were all traceable to a single lab, "the lab that always gives positive results". Apparently, journalists responded to that description with anticipatory salivation, rather than anxious palpitation. It's a ludicrous, and sad, story.
For newcomers to scientific or medical research, Goldacre's book is an entertaining, accessible introduction to a host of issues you will need to know about: experimental design, bias in statistics, cheating by pharmaceutical companies in research and in advertising, the silliness of homeopathy, how we fool ourselves into believing what we want to believe, and what measures can be taken to minimize our own foolishness.
For those well versed in these kinds of issues, the book, while a good source of anecdotes, is just a little disappointing. It's important to provide accessible accounts of science and method, but Goldacre goes just a bit far in dumbing things down, in my opinion. Popular science writers should not be assuming that their readers are idiots. He proposes as his motto: "Things are a little more complicated than that". Indeed, they are. Still, on the whole, this is a good and positive contribution to the public understanding of science.
(17 Nov 2012) I think perhaps I was a bit too negative at the end of the note above. Goldacre's book can be seen as an extended plea for a more evidence-oriented treatment of science journalism and, in particular, as a protest against the view that science is just too complicated for ordinary folk to understand — a view which he rightly condemns for making appeals to authority, rather than appeals to evidence, the arbiter of scientific disputes. The result is a serious dumbing down of public policy debates, including a tendency to portray all sides of a scientific dispute as having equal support, because all sides can call upon any number of "experts". This message certainly needs to be spread. The quality of public debate about topics that concern science is very poor indeed.