
Scholarly Communication Services - Research Impact: Essentials

Journal / Research Impact

Journal, author, and article impact is measured using bibliometrics: the application of quantitative analysis and statistics to measure specific qualities of a publication.  Bibliometrics are frequently used to identify influential scholars, works, and publications.  Scholars can use them to select influential journals to read and to decide where to publish their work.  Institutions may use them to evaluate researcher and scholar productivity, especially for hiring, performance reviews, or promotion and tenure.

Frequently used bibliometrics include h-index, Journal Impact Factor, SCImago Journal Rank (SJR), Source Normalized Impact per Paper, and more.

Despite criticisms (see 'Criticisms Against Bibliometrics' below), bibliometrics and other measures of impact are widely used in academia.  Although scholars and authors should be aware of these issues, they should also consider bibliometrics to identify publications of interest for the following reasons:

  • Identify seminal journals:  bibliometrics identify journals and other publications that are considered high-impact and influential within a field or discipline.
  • Evaluation purposes:  universities frequently consider bibliometrics when measuring researcher productivity for annual reviews and for promotion and tenure.
  • Grants and funding:  grantors and other funding agencies may use bibliometrics to evaluate researcher potential and productivity.
  • Institutional ranking:  some organizations rank universities and institutions by their output via bibliometrics (e.g., CWTS Leiden Ranking of universities for their scientific output). 

Altmetrics largely emerged as a complement and alternative to typical bibliometric indicators.  Instead of statistically analyzing citation counts and associations between publications, altmetrics measure captures, mentions, and other types of interactions on the web to demonstrate interest in various works.  See the Altmetrics LibGuide for more information and how to determine altmetrics.

Types of Metrics Explained

Journal-Level Metrics are bibliometrics that assign a value (such as a score or a rank) to a journal.  See more on these five frequently used journal-level metrics (a worked Journal Impact Factor example follows the list):

Journal Impact Factor (JIF)

Eigenfactor

CiteScore

SCImago Journal Rank

Source Normalized Impact per Paper (SNIP)
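
Of these, the Journal Impact Factor has the simplest arithmetic: citations received in a given year to a journal's content from the two preceding years, divided by the number of citable items the journal published in those two years.  A minimal sketch with hypothetical figures:

```python
def journal_impact_factor(citations: int, citable_items: int) -> float:
    """JIF for year Y = (citations in Y to items published in Y-1 and Y-2)
    / (citable items published in Y-1 and Y-2)."""
    return citations / citable_items

# Hypothetical journal: 1,200 citations in 2020 to its 2018-2019 content,
# which comprised 240 citable items.
print(journal_impact_factor(1200, 240))  # -> 5.0
```

The other metrics listed above weight citations differently (SJR and Eigenfactor, for example, weight citations by the influence of the citing journal), so their values are not directly comparable to a JIF.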

Authors

Author-level metrics are bibliometric measures that capture the productivity and cumulative impact of the output of individual authors, researchers, and scholars.

The h-index, proposed by Jorge E. Hirsch (2005), is a flexible, widely used citation metric applicable to any set of citable documents.  It is a composite measure of the number of articles an author has published (productivity) and the number of citations those publications have received (impact).

Various indexes (e.g., Web of Science) can calculate an h-index, but they are likely to produce different h values for the same scholar, since databases vary in content and coverage.
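
To make the definition concrete: an author's h-index is the largest h such that h of their papers have at least h citations each.  A minimal sketch (the citation counts are hypothetical; real values would come from a database such as Web of Science or Scopus):

```python
def h_index(citations: list[int]) -> int:
    """Return the largest h such that the author has h papers
    with at least h citations each (Hirsch, 2005)."""
    ranked = sorted(citations, reverse=True)  # highest counts first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers each have at least `rank` citations
        else:
            break
    return h

# Hypothetical citation counts for one author's publications:
print(h_index([25, 8, 5, 3, 3, 1, 0]))  # -> 3 (three papers with >= 3 citations)
```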

Limitations of the h-index

  • Every co-author of a publication receives full credit for its citations, regardless of author order
  • Can be inflated through self-citation
  • A score reflects only the works indexed in the database used to calculate it
  • A work's first citation can take years to materialize
  • Insensitive to highly cited papers: citations accumulated beyond the h threshold do not raise the score

Find h-index

Article-Level Metrics

Article-level metrics (ALMs) collect a variety of data points about an article that are used to measure its impact, describe how it has been integrated into a body of knowledge (socialization), and show how soon it was used or discussed after publication (immediacy).  ALMs typically capture the following information about an article (Tananbaum, 2013):

  • Usage: the number of times an article has been viewed, accessed, or downloaded, along with its supplemental data.
  • Captures: how often an article has been bookmarked in a citation tool (e.g., Mendeley), shared, or recommended.
  • Mentions:  the frequency with which an article is discussed in blogs, Wikipedia, or more public venues (e.g., news outlets).
  • Social Media: the number of likes, tweets, shares, or other types of activity on various social media platforms; this often shows an article's immediacy.
  • Citations: the number of times an article is cited in published literature.

Although the number of downloads and similar usage data are also known as alternative metrics ("altmetrics"), article-level metrics incorporate altmetrics rather than standing apart from them; in other words, altmetrics are a type of article-level metric.

Source: Tananbaum, G. (2013).  Article-level metrics: A SPARC primer.  SPARC.  https://sparcopen.org/wp-content/uploads/2016/01/SPARC-ALM-Primer.pdf
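
As a purely illustrative sketch (the class and field names below are invented for this example, not a real API), the five ALM categories can be modeled as a simple record, with the non-citation signals forming the altmetrics subset:

```python
from dataclasses import dataclass

@dataclass
class ArticleLevelMetrics:
    """Hypothetical container for the five ALM categories (Tananbaum, 2013)."""
    usage: int         # views, accesses, and downloads
    captures: int      # bookmarks, shares, and recommendations (e.g., in Mendeley)
    mentions: int      # blog posts, Wikipedia references, news coverage
    social_media: int  # likes, tweets, shares, and similar activity
    citations: int     # citations in the published literature

    def altmetric_signals(self) -> int:
        """Sum the non-citation signals: the subset usually called altmetrics."""
        return self.usage + self.captures + self.mentions + self.social_media

# Hypothetical data for one article:
alm = ArticleLevelMetrics(usage=5400, captures=120, mentions=8,
                          social_media=310, citations=42)
print(alm.altmetric_signals())  # -> 5838
```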

Altmetrics

Altmetrics are metrics and qualitative data that can be used in addition to traditional, citation-based measures to describe a work's impact.  See the Altmetrics LibGuide for specific types of altmetrics and what they capture.

What is a good impact number?

It depends.  One shortcoming of bibliometrics is that the numbers themselves are typically not normalized for field differences.  Disciplines measure their publications differently from one another, as the highest-ranking journals in two fields illustrate:

  • Library & Information Science:  International Journal of Information Management (established 1980).  JIF 5.063; 4,885 total cites.
  • Medicine, General & Internal:  New England Journal of Medicine (established 1828).  JIF 70.870; 344,591 total cites.

Also, the length of time a publication has been in existence and the number of its articles that have been cited influence its bibliometric indicators; long-established publications tend to be favored.  In summary, no standardized ranking or scale identifies "high impact" across all disciplines.

How can I determine impact factor or metric ranges?

1.  Keep in mind that not all scholarly journals are included in indexes like Scopus or Web of Science; those that are not included typically do not have journal metrics.

2.  Use tools such as Scopus Preview (CiteScore) and others like it to identify metrics; see the LibGuide, Find Impact Factors.

3.  Comparisons.  Comparing journals with others in a field or discipline is not an exact science; disciplines often overlap in what they study, and their boundaries can be unclear.  However, comparing journal metrics within a discipline can be useful for exploratory purposes.  For example, a geoscientist can get an idea of the CiteScore range for journals in the field (from a low of 3.71 to a high of 31.07 in 2018) and mention that range in a review or a promotion and tenure (P&T) bid; a sketch of this kind of range comparison follows below.
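
One exploratory (and unofficial) way to use such a range is to place a journal's CiteScore between its field's observed minimum and maximum.  The sketch below uses the 2018 geoscience range cited above and a hypothetical journal score:

```python
def position_in_field_range(score: float, field_min: float, field_max: float) -> float:
    """Min-max position of a journal's score within its field's range,
    from 0.0 (lowest in field) to 1.0 (highest).  An exploratory aid only."""
    return (score - field_min) / (field_max - field_min)

# 2018 geoscience CiteScores ranged from 3.71 (lowest) to 31.07 (highest).
# Hypothetical journal with a CiteScore of 10.2:
print(round(position_in_field_range(10.2, 3.71, 31.07), 2))  # -> 0.24
```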

Citation Tracking & Analysis

Citation counts are the number of times an article has been cited in other works.  Although citation counts typically measure the degree to which a particular article is useful to other researchers in support of their work, these metrics are not a measure of the quality of a cited work, since a work can be cited for negative reasons (e.g., refutations or retractions).

Additionally, citation counts are highly dependent upon particular disciplines and the number of researchers in them.  For example, more researchers work in neuroscience than in philosophy; as such, more papers are published in neuroscience than in philosophy.  Citation counts also depend on whether a discipline tends to cite many works in its publications (known as citation density).

Where can I find citation counts?  They can be found in the following ways:

  • Journal Websites (Article-Level).  Publishers may provide citation counts for an article on its web page.

Example:  The article, Poverty and Covid-19: Rates of Incidence and Deaths in the United States During the First 10 Weeks of the Pandemic, was published in Frontiers in Sociology.  Its article page provides a link to article impact data, which includes a citation count.

Citation tracking, or citation analysis, is an important tool used to trace scholarly research, measure impact, and inform tenure and funding decisions.  The impact of an article is evaluated by counting the number of times other authors cite it in their work (a minimal counting sketch follows this section).  Researchers perform citation analysis for several reasons:

  • find out how much impact a particular article has had by identifying which other authors have cited it in their own papers
  • find out how much impact a particular author has had by looking at the frequency and total number of his or her citations
  • discover more about the development of a field or topic (by reading the papers that cite a seminal work in that area)

The output from citation studies is often the only way that non-specialists in governments and funding agencies, or even those in different scientific disciplines, can judge the importance of a piece of scientific research (Johns Hopkins University Library Guide, 2018).
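
Conceptually, a citation count is an in-degree in a citation graph: each paper's count is the number of other works that cite it.  A minimal sketch with hypothetical records:

```python
from collections import Counter

# Hypothetical citation records as (citing_paper, cited_paper) pairs.
citation_records = [
    ("paper_B", "paper_A"),
    ("paper_C", "paper_A"),
    ("paper_C", "paper_B"),
    ("paper_D", "paper_A"),
]

# A paper's citation count is the number of records
# in which it appears as the cited work.
counts = Counter(cited for _, cited in citation_records)
print(counts["paper_A"])  # -> 3 (cited by papers B, C, and D)
print(counts["paper_B"])  # -> 1 (cited by paper C)
```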

Criticisms Against Bibliometrics

Caution

Experts in academia and publishing have the following criticisms against bibliometrics:

Change in its Intended Use.  Bibliometrics originated in the late 1950s with scientists who wanted to explore citation networks and shorten the time needed to find relevant articles; now, however, many organizations and institutions use bibliometrics to evaluate the productivity of researchers and scholars.

Its Use Among Disciplines.  Bibliometrics have been viewed as a 'one size fits all' method of evaluation applied to disciplines that do not assess their own work by the same means.  Some bibliometrics suit certain disciplines because of their publishing practices and norms, but not others.

Methodology.  The methods and formulas used in some bibliometrics are not transparent or available to the public.  Experts have presented evidence that some bibliometric companies omitted publications by rival publishers from their lists, and assert that some companies provided questionable explanations for changes made to their methodologies.

Various organizations and publications (e.g., PLoS) support professional statements such as the San Francisco Declaration on Research Assessment (DORA), which calls for institutions and organizations to revisit their use of bibliometrics for evaluative purposes.  See the links below to gain a comprehensive understanding of these points: