If Impact Factors are not a good proxy for assessing the performance of publishing authors, then what are they good for? If you plot the number of citations received by the articles in a given journal over a specific period of time, you end up with a distribution that is severely skewed to the right. It is easy to imagine that the very high values are outliers that can’t be captured by any more formal description. Furthermore, if you compare the citation distributions of different journals, more sources of variation begin to appear. Journals with high Impact Factors are clearly humped, whereas journals with low ratings look more like a simple exponential distribution, with a large proportion of articles never receiving any citations at all. A figure from Dwight J. Kravitz and Chris I. Baker’s paper in Front. Comput. Neurosci. 2011 shows what I am getting at here.

Citation plots
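To make that contrast concrete, here is a minimal sketch (using entirely made-up numbers, not real journal data) of the two shapes just described: a humped, right-skewed distribution for a high-Impact-Factor journal versus a roughly exponential one, with many uncited articles, for a low-Impact-Factor journal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical citation counts for two journals (invented for illustration).
# High-IF journal: humped, right-skewed (lognormal-ish), so the mode sits
# away from zero but a long right tail remains.
high_if_citations = rng.lognormal(mean=3.0, sigma=0.8, size=1000).astype(int)

# Low-IF journal: roughly exponential, with many articles never cited at all.
low_if_citations = rng.exponential(scale=3.0, size=1000).astype(int)

print("high-IF zero-citation articles:", np.sum(high_if_citations == 0))
print("low-IF zero-citation articles:", np.sum(low_if_citations == 0))
```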

A couple of months ago, while I was working in Brazil, I had some spare time between classes and access to a range of information resources, including Scopus, and I wondered whether there was a way of normalising these citation distributions. I tried binning and ranking the data as percentiles so that the extent of the y-axes would be comparable, but this just left me with a curve that looked like a negative exponential, so I logged it to see if I got a straight line. Well, almost! The curve curled up at the high end and down at the low end, but in between it looked fairly straight. Curiosity piqued, I decided to compare a group of life science journals with widely differing Impact Factors, publication volumes and peer-review standards. What I ended up with for this sample of journals is shown in the next graph, after the code sketch below. [Each curve represents the journal's annual output of research papers (not reviews), ranked and sampled 150 times.]
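For what it’s worth, here is a minimal sketch of the kind of normalisation described above, assuming a hypothetical vector of per-article citation counts for one journal; the variable names and the 150-point resampling are mine, chosen to match the bracketed note.

```python
import numpy as np

# Hypothetical per-article citation counts for one journal's annual output;
# real counts would come from a source such as Scopus.
citations = np.array([0, 0, 0, 1, 1, 2, 3, 3, 5, 8, 13, 21, 55, 144])

# Rank the articles (most cited first) and express each rank as a percentile,
# so journals publishing very different numbers of papers share the same x-axis.
ranked = np.sort(citations)[::-1]
percentiles = 100 * (np.arange(len(ranked)) + 0.5) / len(ranked)

# Resample the ranked curve at 150 evenly spaced percentile points,
# mirroring the "ranked and sampled 150 times" note above.
grid = np.linspace(percentiles[0], percentiles[-1], 150)
sampled = np.interp(grid, percentiles, ranked)

# Log the citation counts (adding 1 so zero-citation articles stay finite)
# to see whether the middle of the curve straightens out.
log_curve = np.log10(sampled + 1)
```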

Journals

To my amazement, each of the curves I got was broadly similar; the main difference was the position of each curve relative to the logged y-axis.

Why should I not have been surprised? Lotka’s Law, formulated in 1926, states that the number of authors making r contributions is about 1/r^a of the number making one contribution, where a nearly always equals two. In other words, the number of authors publishing a given number of articles is a fixed ratio of the number of authors publishing a single article, and authors producing many publications become progressively rarer as that number increases. There are 1/4 as many authors publishing two articles within a specified time period as there are single-publication authors, 1/9 as many publishing three articles, 1/16 as many publishing four articles, and so on. Though the law itself covers many disciplines, the actual ratios involved (as a function of a) are very discipline-specific. Subsequent work has shown that, empirically, the formula A(N + 1 - r)^b / r^a (where A is a scaling factor) is a better fit, as shown in the next and final graph. So all of these different citation distributions belong to a single family of curves differentiated by a single parameter: the average number of citations per percentile.

Lotka
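As a rough illustration of that “single family of curves” idea, here is a sketch that evaluates the modified Lotka form A(N + 1 - r)^b / r^a over 150 ranked points for two hypothetical journals; the parameter values are invented purely to show that, on a logged y-axis, the curves differ mainly by a vertical shift.

```python
import numpy as np

def modified_lotka(rank, A, a, b, N):
    """Evaluate A * (N + 1 - rank)**b / rank**a for each rank.

    rank: 1 = most-cited point, N: total number of ranked points,
    A: scaling factor, a and b: shape exponents (values below are invented).
    """
    return A * (N + 1 - rank) ** b / rank ** a

N = 150                          # 150 percentile samples per journal, as above
ranks = np.arange(1, N + 1)

# Two hypothetical journals: same shape parameters, different scaling factor.
high_if = np.log10(modified_lotka(ranks, A=500.0, a=0.8, b=0.3, N=N) + 1)
low_if  = np.log10(modified_lotka(ranks, A=50.0,  a=0.8, b=0.3, N=N) + 1)

# On the logged axis the two curves are roughly parallel, separated vertically.
print(np.round(high_if - low_if, 2)[:5])
```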

What does this all mean? Two things, I think. First, that journals select and publish a sample of articles that reflects a defined range of citation values. Second, that the actual citation numbers are generated by what looks like a random process after publication. In other words, the Impact Factor is a reflection of the selectivity of the peer-review process, but the actual number of citations generated per article has no close relation to quality.

It is also interesting that PLoS ONE has the same profile as other low-Impact-Factor journals such as Brain Research. Perhaps the new open-access mega-journal isn’t so different from the old subscription-style mega-journals such as Brain Research and BBA after all?