The most interesting thing about yesterday’s public release of the first results from the Berkeley Earth Surface Temperature (BEST) project is their timing. It is no surprise to me – nor to most people who have ever ‘got their hands dirty’ working with climate data – that the BEST curve closely follows the three other established global-land temperature curves. For me, the significance of Climategate was never a dodgy temperature curve.
Much more interesting is the timing of this release. BEST have gone public with their results at the same time as submitting their four scientific papers – containing the methods and results – for peer review to the established Journal of Geophysical Research (JGR). Simultaneously, they have posted these four manuscripts on their website, issued a media release and, for good measure, written an op-ed in the Wall Street Journal.
The worlds of climate science and climate politics certainly know about it! And many interested parties will, I am sure, now scrutinise those four manuscripts and offer their own views concerning their strengths and weaknesses. This is ‘extended peer review’ at work, to quote Jerry Ravetz (see also our Climategate essay from December 2009 ‘Show your working’).
So what does this do to the conventional journal peer-review process? Those asked to review these manuscripts for JGR will now conduct their personal reviews in full knowledge of the parallel public review that is ongoing. And unless they shut off all their communication platforms for the duration, they will hear and see what others’ judgements of the manuscripts are. For better or worse, it is difficult to see how this will not change the conventional peer-review process.
This is rather similar to the situation with juries in court cases. Jurors are sealed away from extraneous media-based interpretations, speculations and judgements while they decide the verdict of their case. Scientists have neither the same obligations nor the same possibilities.
The BEST team didn’t have to play it this way, but they did. What interests me, then, is how this illustrates the changing nature of peer review in ‘hot science’ (see also my article in Science as Culture on Kilimanjaro’s glaciers, Guy Callendar and Al Gore). What new name do we give this parallel form of peer review? It is certainly not conventional, and it seems to me something more interesting than Ravetz’s extended peer review. And whatever we call it, does it make for more (or less) legitimate public knowledge?
One interesting parallel I can think of is with Maarten Hajer’s recent idea of ‘authoritative governance’: governance which establishes its legitimacy and authority on the basis of how well it performs in pressurised, real-time media spaces.
What we are witnessing with BEST is scientific knowledge which is being judged on its ‘performative’ successes in an open society as much as it is being judged on conventional scientific norms of thoroughness, clarity and logic. Knowledge is here being made ‘in the open’ for all to see.
Where does conventional journal peer review for ‘hot science’ go after this?