This is the text of a talk given by Rob Johnson, Director, at the Westminster Higher Education Forum event Next steps for protecting research integrity in the UK (9 September 2019).
I’d like to tell you the story of two papers, published in perhaps the two most prestigious academic journals in the world, Nature and Science.
In May 2017, Nature published a modelling paper by Hamish Pritchard of the British Antarctic Survey which purported to show that during droughts, glaciers become the largest supplier of water to some of Asia’s major river basins, and provided insights into the implications of climate change for the region.
The author carefully referenced his sources and was transparent as to the methodology he had followed. This allowed other climate scientists to spot an error in the way a previously published estimate of glacial mass imbalance was incorporated into the calculations. This was highlighted in an Editorial Expression of Concern, and the paper was retracted by the author within 10 months of its initial publication.
My second paper has a different story. In October 2016, Science published a paper by S. N. Byrareddy of Emory University and colleagues on the simian immunodeficiency virus, SIV, an HIV-like virus that can affect monkeys and apes. The paper raised hopes of a potential treatment for HIV, and remained an accepted part of the scholarly record until March this year, when Science issued an Editorial Expression of Concern. Three studies that had attempted to replicate the work, and a clinical trial on HIV patients, had all failed to reproduce the results of the initial study. It then emerged that one of the co-authors had used a slightly different strain of the virus from that reported in the study, but did not communicate this to his colleagues, nor was it reflected in the paper.
Last Friday, almost three years after the paper’s initial publication, and six months after the expression of concern, Science’s editor announced that they were issuing an official correction to the study, but not retracting it.
What can we learn from these two examples? I would say at the outset, we have to be careful about drawing any generalisable conclusions. These are two different sets of issues, in different fields of science, with different outcomes. But I think they illustrate some important points.
Firstly, they show the importance of the publishing process as an arbiter of what is and isn’t acceptable scientific behaviour.
Secondly, the Science case in particular shows that those judgements are frequently difficult to make, and highly nuanced. As one commentator explained in relation to the correction of the Science article, “The fact that there was both misreporting and irreproducible results does not necessarily mean that there is an infraction of scientific integrity that warrants retraction”.
Thirdly, they raise important questions about the value of openness. Pritchard’s article was sufficiently transparent that an error could be swiftly identified, and the paper retracted. The article by Byrareddy et al., by all accounts inadvertently, obscured some critical aspects of the process, and the study’s validity remains unproven.
It’s too easy, however, to conclude that more openness and more transparency are the solution to these problems. We must be wary of what a recent editorial in Nature Physics termed ‘OA solutionism’: the idea that given the right technologies and institutional mandates, all of scientists’ problems can be solved by OA. This is equally true of the relationship between research integrity and open research.
One of my responsibilities last year was to support the work of the Open Research Data Task Force, whose role was to outline ambitions for open research data in the UK. If you read the Task Force’s final report, Realising the Potential, you will see that improving the rigour, validity and reproducibility of research is identified as a key benefit of the move to open research data.
Yet there was much discussion within the Task Force as to the implications of moving to open research data, and how best to balance openness with ethical, privacy, confidentiality and cost-benefit considerations. Ultimately, the Task Force chose not to recommend a policy of ‘open by default’ and concluded that the UK should be in step with the European Commission, and adopt the principle of ‘as open as possible, as closed as necessary’.
To conclude, the relationship between openness and integrity is complex, and one does not necessarily lead to the other. But I would also observe that changes to the way science is practised and communicated are necessary, are possible, and are already happening. To illustrate that last point, on 29 May this year, Nature republished a retracted paper for the first time in its history. The title of this republished paper was ‘Asia’s shrinking glaciers protect large populations from drought stress’ and the author was Hamish Pritchard of the British Antarctic Survey.