In December 2016, Elsevier launched a new journal metric, CiteScore, that takes direct aim at the hegemony of the Impact Factor, a product of Clarivate Analytics (formerly part of Thomson Reuters). The two companies already have competing bibliographic citation databases in Scopus (Elsevier) and the Web of Science (Clarivate).

The Impact Factor has had a long reign in academe. Introduced in 1975 as a byproduct of the Science Citation Index, it provided a unique, objective means of rating journals based on their citations and quickly became a standard measure of journal quality. When someone says they want a journal's impact factor, they mean the Impact Factor from the Journal Citation Reports (JCR) and nothing else. Because of that dominance, the Impact Factor has come in for a lot of criticism over the years due to its inherent limitations, among them:

  • Since the Impact Factor is derived from journals indexed in the Web of Science, no other journals can have an Impact Factor.
  • Since the Impact Factor only counts citations in the current year to articles from the previous two years (see the formula after this list), it only works well for disciplines in which rapid citation is the norm.
  • It doesn’t take into account disciplinary differences in expected numbers of citations.
  • There is no JCR edition for the arts & humanities, so journals in those fields have no Impact Factor.
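For reference, the calculation behind the number is simple. A journal's Impact Factor for a given year Y is:

\[
\text{IF}_Y = \frac{\text{citations received in year } Y \text{ by items published in years } Y{-}1 \text{ and } Y{-}2}{\text{number of ``citable'' items published in years } Y{-}1 \text{ and } Y{-}2}
\]

So a 2016 Impact Factor of 3.0 means that, on average, the items the journal published in 2014 and 2015 were each cited three times during 2016.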

To be fair, the JCR also reports a 5-year impact factor, but it is an alternative, not the official Impact Factor, as are the several other measures the JCR reports, such as the Immediacy Index, the Cited and Citing Half-Lives, and the Article Influence Score.

Other attempts have been made to offer alternatives to the Impact Factor that differ in methodology, such as the Eigenfactor Score, which uses the same journal source list as the JCR, and the SCImago Journal Rank and SNIP (Source Normalized Impact per Paper), which use the Scopus journal list.

Now, CiteScore has arrived to compete with the Impact Factor, luring users in with these benefits:

  • It is free to access on the Scopus Journal Metrics website (the JCR requires a paid subscription).
  • It is calculated from the Scopus journal list, which is much larger than the Web of Science list and includes more social sciences and humanities journals.
  • It provides a 3-year citation window, rather than the 2-year window of the Impact Factor (see the formula after this list).
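The calculation mirrors the Impact Factor's, with the wider window and, as discussed below, a broader denominator:

\[
\text{CiteScore}_Y = \frac{\text{citations received in year } Y \text{ by documents published in years } Y{-}1 \text{ through } Y{-}3}{\text{number of documents published in years } Y{-}1 \text{ through } Y{-}3}
\]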

(There is also the CiteScore Tracker, where you can watch your journal’s score go up or down on a monthly basis. While this may interest some journal editors, I cannot recommend it for anyone else.)

As with the Impact Factor, CiteScore does not take into account disciplinary differences, though the website displays other metrics that do. Another feature that Elsevier touts as a benefit, but that has drawn criticism, is that CiteScore counts all document types in its calculation, while the Impact Factor's denominator counts only "citable" documents (e.g. articles, reviews, and meeting papers, but not editorials, letters, or abstracts). While counting everything seems reasonable, it means that journals publishing these other types of material get a lower CiteScore relative to journals that do not, because such items swell the denominator while attracting few citations. Elsevier's own journals have benefitted from this distinction, which is what prompted the criticism.
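A minimal sketch, with invented numbers, shows how much the denominator choice matters for a hypothetical journal that publishes both research articles and front matter:

```python
# Hypothetical journal output over a metric's publication window.
# All counts are invented for illustration only.
citable_items = 100          # articles, reviews, meeting papers
front_matter = 50            # editorials, letters, abstracts
citations_to_citable = 300   # citations received in the census year
citations_to_front = 10

total_citations = citations_to_citable + citations_to_front

# Impact-Factor-style: all citations counted, but the denominator
# includes only the "citable" items.
if_style = total_citations / citable_items

# CiteScore-style: every document counts in the denominator.
citescore_style = total_citations / (citable_items + front_matter)

print(f"IF-style score:        {if_style:.2f}")         # 3.10
print(f"CiteScore-style score: {citescore_style:.2f}")  # 2.07
```

A journal that publishes no editorials or letters would score the same under both denominators; the more front matter a journal carries, the further its CiteScore-style figure falls below its IF-style figure.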

So, where does this leave us when it comes to journal quality? We should remember that all these scores are just numbers. They’re interesting and tell us a little about citations. They don’t tell us anything about the journals that are not indexed in Scopus or Web of Science. A journal’s score doesn’t say anything about the quality of an individual article within the journal. The scores can be subject to gaming, though the producers eventually seem to catch on to that.

A lot of research is being produced and there is a lot of pressure on researchers to publish. Trying to force a large quantity of articles through the small funnel of Web of Science journals in pursuit of a high Impact Factor seems like it would just lead to a big delay in dissemination. Wouldn’t it be a better choice to expand the list of acceptable journals by entertaining some additional measures of impact? Or even take the journal itself out of the equation and just focus on the individual article and its impact?

For more information on journal metrics and links to sources, see the Journal Metrics section of the Research Impact guide at http://guides.osu.edu/c.php?g=608754&p=4233834 .