Going Against The Grain Is Hard
December 21, 2009 · Posted by gordonwatts in science.
I follow Cliff Mass' Seattle weather blog. It is an excellent resource for explaining the various weirdness that is Seattle/NW weather (we have mountains and water all around us). He has a couple of themes that come up repeatedly in his blog – getting a new coastal radar installation, and improving math in the schools (why he started the blog, apparently). Recently, he had an interesting posting about the climategate emails. He starts off saying, as you might expect, that nothing that has happened calls into question the science behind climate change and evolution. But that isn't what caught my eye. It was the following:
Are scientists human and do they sometimes do things based on human emotion or group think? Yes.
and then a bit later:
I am known as somewhat of a skeptic regarding global warming effects in the NW–although I do believe that greenhouse gases are a serious problem in the long-run. A group of us noted that the snowpack in the Cascades was NOT rapidly melting away, in contrast to some publications by some local climate scientists and publicized by Mayor Nickels.
We then wrote a paper on the subject (the main contributor being Mark Stoelinga) and submitted it to the Journal of Climate. I have published a lot of papers in my life (roughly 100) and I never had problems like we had with this paper. Very biased associate editor and some reviewers.
First, I don't think I need to do much to convince you the first statement is correct. We, as scientists, like to think we are totally driven by the data – we just go wherever it leads us. I'd like to think this is mostly true. But, let's face it, science is hard. We are always operating at the edge of our understanding – that is the very point of what we are doing. If the answers were easy, they would be boring (to us) and we would move on. In order to make any progress we have to make some assumptions and then test them. The tests are often not conclusive – they might say something like "well, there is a 35% chance that assumption is right." Let's say that is the best assumption we have. Then we make progress based on it until perhaps a better microscope or similar comes online, or we otherwise get a new data point that validates or contradicts the assumption.
Deciding what is a good assumption – especially when the data does not give a clear answer – is something of an art. A very human art. Take the current state of my field – it is all about the Standard Model. We know it is broken, but we don’t have data that provides a clear direction to fixing it. To make progress we have all decided it is the theory and we are madly testing every bit of it we can. So far the thing refuses to bend under any experiment we do. Sure, there are some internal self-consistency problems and the Standard Model does not contain anything that looks like dark matter – but neither of those things give us good guidance for a future model. We have been using the Standard Model for 35 some-odd years now. The stupid thing works.
Now, let's say you make a measurement that spot-on confirms the Standard Model. It is a new measurement, or perhaps an old one, but to a much higher degree of precision. Both of these are interesting results and deserve publication. Since you matched the predictions of the Standard Model, people will start out believing you. Internal review and peer review will look at your results carefully, certainly, but if you've done your job it won't be hard to get the paper published.
Now, instead, let's say you have a new measurement that is substantially different from the Standard Model. Something you measured in the lab that really breaks it. This is the big one. Internally, you bet you look at it more carefully. It gets reviewed – lots of people pore over it. It takes much longer to get released. And when it comes to peer review, everyone starts from "Really? They see something? I'm not sure – first time in 35 years? Are we sure they understand their errors?" As you might imagine, it starts from there and ends up taking a lot longer for a result like this to come out. Heck, if this result is wrong – well, you can imagine how embarrassing it will be for all involved (the people doing the analysis, the experiment, the lab, the journal, the peer reviewers, etc.).
As a result, it is a natural by-product of the way we do science that a paper that goes against the grain will take longer and have more trouble than one that goes with it.
That isn't to excuse some of the difficulty it sounds like Cliff's paper had – that sounds worse than the normal difficulty one should have. And the very public stakes are much higher in the case of climate change, obviously – which makes me think that one would have even more difficulty, as Cliff and his collaborators did.
Keep in mind those brakes are there for a reason. If we have 35 years behind us of one thing working, and suddenly someone proposes a different mechanism or a result that makes no sense in this 35-year-old framework, then the first thing you have to do is make sure that result makes sense – and it does get extra scrutiny.
BTW, you'll note this means that results that are different from expected get a lot more scrutiny than results that match expectations. This doesn't feel right – results should get the same scrutiny whether or not they are expected, right? Human nature, however, gets in the way. This is why Blind Analyses are performed (for example, the recent CDMS result that generated so much interest). The idea is you hide the final result from yourself until the last minute – and promise to report exactly what you saw when you unblind.
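The idea can be sketched in a few lines of code. This is a minimal, made-up illustration of one common blinding scheme (a hidden offset added to the measured value) – it is not CDMS's actual procedure, and all the names and numbers here are invented for the example:

```python
import random

def make_secret_offset(seed):
    """Generate a secret offset; in practice this stays locked away
    from the analyzers until the analysis is frozen."""
    rng = random.Random(seed)
    return rng.uniform(-10.0, 10.0)

def blind(true_value, offset):
    """What the analyzer is allowed to see while tuning cuts and fits."""
    return true_value + offset

def unblind(blinded_value, offset):
    """Applied exactly once, after the procedure is frozen."""
    return blinded_value - offset

secret_offset = make_secret_offset(seed=20091221)  # arbitrary seed for the sketch

measurement = 42.0  # pretend this is what the apparatus recorded
blinded = blind(measurement, secret_offset)

# ... all selection cuts, fits, and error estimates are developed
# using only `blinded`, so no one can (even unconsciously) tune the
# analysis toward the expected answer ...

# Only at the very end is the offset revealed and removed:
result = unblind(blinded, secret_offset)
```

The point is that every analysis choice is committed to before `result` is ever visible, so the final number – expected or not – is reported as-is.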