My colleagues face a trade-off when evaluating a researcher: invest the time to actually read some of their papers, or rely to varying degrees on journal metrics and spend that time on other activities.
DORA, the Declaration on Research Assessment, has been signed by hundreds of universities and probably tens of thousands of researchers; it declares that researchers should not be evaluated on the basis of any kind of journal ranking.
My university hasn’t signed it, but I support DORA’s general principle that we have relied too much on certain metrics, creating perverse incentives. The Leiden Manifesto is a good alternative to DORA as it is in the same spirit but doesn’t go as far as completely banning any use of journal rank.
One reason I don’t like journal rank is that it perpetuates the reproducibility crisis: highly selective journals often select for surprisingness of the result as much as for rigor, and surprising results often don’t replicate. Indeed, fewer than 50% of psychology findings seem to replicate in a good-faith, highly-powered replication effort, so reinforcing the status quo (journal rank) is a big problem for the credibility of our field.
Encouraging researchers to aim for more prestigious journals is good in the short run for those seeking to get hired or promoted on the currently-dominant criteria, but I think this is bad for the social sciences in the long term.
I’ve been involved in efforts in mainstream prestige journals to raise the expectations for rigor, and the slow pace has frustrated me. Newer, less-prestigious journals have been much faster to adopt policies designed to improve the credibility of our field. This is one reason that a couple of weeks ago I moved on from Advances in Methods and Practices in Psychological Science, a journal whose broad policies are constrained by the Association for Psychological Science, to join Meta-psychology as an associate editor (I may write about Meta-psychology more in a future post).
Older journals enjoy enormous advantages simply by being old (the Matthew effect): inertia and accumulated resources, in the form of money, human capital, or both, not to mention the ability to fleece our universities with subscription fees we can no longer afford. I hate to see that hierarchy reinforced by the use of a journal ranking driven by the accumulated effects of the very practices that created the replication crisis.
By any measure of the quality of their actual processes, their contribution to the public good, and their value for money, conventional journal rankings under-rate newer zero-fee open access journals such as the Journal of Numerical Cognition and Meta-psychology, and very likely others I am not yet familiar with and so can’t specifically vouch for. (See the Free Journal Network for a list of zero-fee open access journals, and our Psychology in Open Access site for some resources for starting such a journal.)
I say “conventional” rankings because there are now, finally, actual rankings of the quality of journal processes, rather than rankings based on citations, which seem to be uncorrelated with outcomes like replicability. One such ranking is the Transparency and Openness Promotion (TOP) ranking.
But no one ranking will ever lead us to where we want the system of scholarship to be. Research is not tennis.
Research has many different purposes, so ordering journals on a single dimension is a mistake, even within a field or content area. A journal publishing research that is super-important to people in sub-Saharan Africa may be of only marginal interest to people in the UK or Australia. A study that basic researchers consider fundamentally flawed, because a confound prevents strong causal inference, may be just what policy-makers need when they must act now rather than wait for a more perfect study. To avoid repeating the academic cluster-fuck of the last few decades, we need to start acting like we believe this and support a more diverse universe of journals and other outlets.