A few days ago, "Nature" published a paper titled "Twenty tips for interpreting scientific claims", by William Sutherland, David Spiegelhalter and Mark Burgman. The paper is presented as dedicated to "helping non-scientists" understand scientific claims, especially in relation to climate change. The authors claim that "the immediate priority is to improve policy-makers' understanding of the imperfect nature of science. The essential skills are to be able to intelligently interrogate experts and advisers, and to understand the quality, limitations and biases of evidence." (emphasis mine)
It is a commendable effort, but I think it misses the point entirely. The first problem is that the list is understandable (indeed, obvious) to scientists, but not at all to non-scientists. Let me quote one of the twenty points as an example:
> Significance is significant. Expressed as P, statistical significance is a measure of how likely a result is to occur by chance. Thus P = 0.01 means there is a 1-in-100 probability that what looks like an effect of the treatment could have occurred randomly, and in truth there was no effect at all. Typically, scientists report results as significant when the P-value of the test is less than 0.05 (1 in 20).
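The idea behind the quoted point can be checked with a short simulation. What follows is a minimal sketch in Python (the coin-flip experiment and all names are my own illustration, not from the paper): if the null hypothesis is true and we declare "significance" at P < 0.05, roughly one experiment in twenty will look like a real effect anyway.

```python
import math
import random

def binom_tail_p(heads: int, n: int) -> float:
    """Exact two-sided p-value for observing `heads` heads in `n` flips,
    under the null hypothesis that the coin is fair.
    Computed by doubling the smaller tail probability (capped at 1)."""
    def tail(k_from, k_to):
        return sum(math.comb(n, k) for k in range(k_from, k_to + 1)) / 2**n
    lower = tail(0, heads)   # P(X <= heads)
    upper = tail(heads, n)   # P(X >= heads)
    return min(1.0, 2 * min(lower, upper))

def false_positive_rate(trials=10_000, n=100, alpha=0.05, seed=0):
    """Simulate `trials` experiments in which the null is TRUE (fair coin)
    and count how often P < alpha, i.e. how often pure chance
    looks 'significant'."""
    rng = random.Random(seed)
    # Precompute the p-value for every possible head count 0..n.
    p = [binom_tail_p(h, n) for h in range(n + 1)]
    hits = sum(p[sum(rng.random() < 0.5 for _ in range(n))] < alpha
               for _ in range(trials))
    return hits / trials

rate = false_positive_rate()
print(f"fraction of 'significant' results under the null: {rate:.3f}")
```

The simulated rate comes out a little below 0.05 here, because the binomial distribution is discrete and the rejection region cannot hit the 5% threshold exactly; the qualitative point stands either way.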
I don't know what kind of policy-makers you normally deal with, but those I am acquainted with won't make it beyond this first point. Apart from this problem, though, I think the paper misses the essence of the debate on climate change. Many scientists still operate on the basis of the "information deficit model": they assume that people will make the right choices once they are correctly informed. Unfortunately, that is not how the real world works.
As Dan Kahan has amply shown, on issues such as climate change people first take a position based on their preconceived ideas, and then look for facts that support it. We all know that science, like every human endeavor, is subject to uncertainty; but, given the way people behave, it is simply suicidal to emphasize "the imperfect nature of science" in the debate, as the authors do. It means handing ammunition to those who have been playing the uncertainty card against science. It means confusing those who have been honestly trying to understand the problem of climate change. And it means forgetting that people don't just have a mind; they also have a heart, and if you want to move them to action, you must win their hearts as well as their minds!
Think about it: would you tell your sweetheart, "I love you with 95% certainty"? Not even a scientist would.
Here is the list of the 20 points from the paper by Sutherland et al. The full article on Nature is here.
- Differences and chance cause variation.
- No measurement is exact.
- Bias is rife.
- Bigger is usually better for sample size.
- Correlation does not imply causation.
- Regression to the mean can mislead.
- Extrapolating beyond the data is risky.
- Beware the base-rate fallacy.
- Controls are important.
- Randomization avoids bias.
- Seek replication, not pseudoreplication.
- Scientists are human.
- Significance is significant.
- Separate no effect from non-significance.
- Effect size matters.
- Study relevance limits generalizations.
- Feelings influence risk perception.
- Dependencies change the risks.
- Data can be dredged or cherry picked.
- Extreme measurements may mislead.