A simplified picture of the scientific research process is a cycle of events beginning with the identification of a new idea or hypothesis (a “known unknown”, after Donald Rumsfeld’s definition) derived from a gap analysis of the literature, followed by competitive peer review of the corresponding research proposal and the release of funding and other resources. The conclusions drawn from the research are then divulged to the scientific community after a further stage of peer review, directed at assessing the validity of the methodology and establishing the anticipated impact of the work. Eventually, the new data update the consensus view (the “known knowns”) and the cycle repeats.
When the process is described in this way, anyone familiar with how it currently works can identify several weaknesses and inefficiencies. For example, if a funding agency has commissioned a piece of research, why should a publisher be allowed to impede the communication of its findings if the authors’ work is methodologically sound? Isn’t it important for the community to be made aware of hypotheses that are wrong as well as those that are correct?
It is true that the hierarchical filtration of papers by likely impact leads to a relatively stable ecosystem of journals ranked by citation statistics, but this second round of peer review does little to help define the consensus view in a particular field. The citations of individual articles in Nature, for example, range over many orders of magnitude, and a significant proportion of results published in high-impact titles turn out to be biased or simply methodologically flawed. [See recent Blog post.] Identifying the really important “known unknowns” requires access to the relevant results generated by all research, so that conclusions can be double-checked and their importance assessed over time as a consensus view develops.
The purpose of publishing (and its underlying business model), the process of peer review, and the assessment of the value of individual pieces of research (and of the effective return on the funders’ investment) are all under pressure to evolve. Future articles on this blog will look at some of the drivers of these changes and try to answer the question: “Has scientific publishing really gone ‘Boink’, or has nothing fundamentally changed…?”