A league table of journals

The Australian government is revising its research assessment system and is in the process of setting up ERA, Excellence in Research for Australia. The new system was an early commitment of the Labor Government elected in November of last year. It replaces the Research Quality Framework (RQF), which the previous Government had begun developing in 2006 and which was intended to carve up AU$600 million in block-grant research funding; that system never reached fruition, despite (and partly because of) being very costly. ERA is designed to benchmark Australian research better within an international context and is, for the moment, not intended to lead to a ranking-based carve-up of the research funding pot, though that option has been left open for the future.

One of the first outputs of the new process is a single ranked list (which can be downloaded here) of over 21,000 journals, broken down into 157 subject groups classified according to the Australian and New Zealand Standard Research Classification. Each journal is graded on a four-tier system: A* (top 5%), A (next 15%), B (next 30%) and C (next 50%). The allocation of journals to these bands has been made by learned societies and disciplinary bodies. Academics in Australia have already expressed a great deal of concern, arguing that the journals list is not sufficiently widely representative, is too Anglophone, will skew publication practices among academics, and is invidious because a percentage banding system ignores intrinsic quality for the sake of convenience. In an open letter to Senator Kim Carr, an international group of philosophers states: ‘The problem is not that judgments of quality in research cannot currently be made, but rather that in disciplines like Philosophy, those standards cannot be given simple, mechanical, or quantitative expression. Publisher and journal rankings are no substitute for direct assessment of a scholar’s work by knowledgeable peers.’
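The percentage bands are easier to grasp with a toy illustration. The actual allocations were made by learned societies and disciplinary panels rather than by formula, so the following is only a minimal sketch of what the banding arithmetic implies for a list that is already ranked from best to worst; the function name and the example titles are invented for illustration, not taken from the ERA list.

```python
def assign_bands(ranked_journals):
    """Map an already-ranked list of journal titles onto ERA-style tiers:
    A* (top 5%), A (next 15%), B (next 30%), C (remaining 50%)."""
    n = len(ranked_journals)
    bands = {}
    for i, journal in enumerate(ranked_journals, start=1):
        fraction = i / n                # position as a fraction of the whole list
        if fraction <= 0.05:
            bands[journal] = "A*"
        elif fraction <= 0.20:          # 5% + 15%
            bands[journal] = "A"
        elif fraction <= 0.50:          # 5% + 15% + 30%
            bands[journal] = "B"
        else:
            bands[journal] = "C"
    return bands


# Hypothetical example: 20 journals ranked best to worst within one subject group.
example = [f"Journal {k}" for k in range(1, 21)]
print(assign_bands(example)["Journal 1"])   # 'A*' — the top 5% of 20 titles is a single journal
```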

Metrics used in the evaluation of research are often discounted by academics as too crude, while administrators favour them for being objective and thus capable of supporting resource decisions that advance policy objectives. The previous Australian government was influenced by the UK’s adoption of metrics of research excellence, which has been credited with improving that country’s economic performance. It would appear that governments are prepared to tolerate a degree of hand-wringing by academics and game-playing by their managers if it leads to increased global competitiveness. The nobler aims of the Academy are no compelling match for tables and rankings.

Journal rankings have been produced for many years, though rarely in such a visible and comprehensive way as in a list of over 21,000 journals, a total larger than that of either Thomson Reuters’ ISI Web of Knowledge or Elsevier’s Scopus, each of which covers around 15,000 journals. Indeed, Ulrich’s Periodicals Directory estimates that the total number of peer-reviewed journals is around 22,000, so this Australian list must be almost complete. What is perhaps more interesting, given that both the Australian and UK governments want to make extensive research assessment exercises as lightweight as possible, is how the Pareto Principle applies here. Thomson Reuters itself admits that ‘A core of 3,000 … journals accounts for about 75% of published articles and over 90% of cited articles’. In other words, roughly a seventh of the titles on the Australian list would account for three-quarters of the published output. So why consider 21,000?

It will be interesting to see whether the Australian list becomes a reference point for researchers and is used internationally, or whether the criticisms mount. Without the metric justification of the citation impact factor, which underpins the ISI ranking, there is a danger that a peer-reviewed list will be dismissed as out of date the moment it is published. Small publishers could well protest at the injustice it represents to their up-and-coming titles that have not been graded A or A*. Larger publishers will surely seek to consolidate their top-ranked journals and invest in those with the potential to join them, possibly cutting others loose as they do so.

Thus rankings based on metrics could once again lead the research community, for lack of sufficiently strong better judgement, to behave in the Jekyll and Hyde manner characterised by Jean-Claude Guédon in his 2001 essay In Oldenburg’s Long Shadow: Librarians, Research Scientists, Publishers, and the Control of Scientific Publishing, conforming to higher aspirations one moment and the next giving in to the grubbier demands of gamesmanship within a system largely beyond its control.

(And, just in case you were wondering, the A* Library & Information Studies journals you want to be published in are, according to this list: Journal of Documentation, Library Quarterly, Library Trends, Management Science, Annual Review of Information Science and Technology, Journal of Information Science, MIS Quarterly, Library & Information Science Research, Information Systems Research, Information Research-An International Electronic Journal, School Library Media Research and Journal of the American Society for Information Science and Technology.)
