
Journal and Research Impact

About Journal and Research Impact

Journal, author, and article impact measures rely on bibliometrics, the application of quantitative analysis and statistics to measure specific qualities of a publication.  Bibliometrics are frequently used to identify influential scholars, works, and publications.  Scholars can use them to select influential journals to read and to decide where to publish their work.  Institutions may use them to evaluate researcher and scholar productivity, especially for hiring, performance reviews, or promotion and tenure.

Frequently used bibliometrics include h-index, Journal Impact Factor, SCImago Journal Rank (SJR), Source Normalized Impact per Paper, and more.

Despite criticisms (see 'Criticisms of Bibliometrics' below), bibliometrics and other measures of impact are widely used in academia.  Although scholars and authors should be aware of these issues, they should also consider bibliometrics to identify publications of interest for the following reasons:

  • Identify seminal journals:  bibliometrics indicate journals and other publications that are considered high-impact and influential within a field or discipline.
  • Evaluation purposes:  universities frequently consider bibliometrics in measuring researcher productivity for annual reviews and also in promotion and tenure.
  • Grants and funding:  grantors and other funding agencies may use bibliometrics to evaluate researcher potential and productivity.
  • Institutional ranking:  some organizations rank universities and institutions by their output via bibliometrics (e.g., CWTS Leiden Ranking of universities for their scientific output). 

Altmetrics largely emerged as a complement and alternative to typical bibliometric indicators.  Instead of statistically analyzing citation counts and associations between publications, altmetrics measure captures, mentions, and other types of interactions on the web to demonstrate interest in various works.  See the Altmetrics Research Guide for more information and how to determine altmetrics.


Levels of Bibliometrics

Journal-level metrics are bibliometrics that assign a value, such as a score or rank, to a journal.  Here are the most commonly known journal-level metrics:

  • Journal Impact Factor (JIF), by Clarivate Analytics
  • CiteScore, by Elsevier
  • Eigenfactor, by Eigenfactor.Org
  • SCImago Journal Rank (SJR)
  • Source Normalized Impact per Paper (SNIP)
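The two-year Journal Impact Factor in the list above follows a simple published formula: citations received in a given year to items the journal published in the previous two years, divided by the number of citable items published in those two years.  A minimal sketch in Python (the function name and the sample figures are illustrative, not from any real journal):

```python
def two_year_impact_factor(citations: int, citable_items: int) -> float:
    """Compute a two-year impact factor.

    citations: citations received this year to items the journal
        published in the previous two years.
    citable_items: articles and reviews the journal published in
        those same two years.
    """
    return citations / citable_items


# A journal whose last two years of output drew 600 citations
# across 200 citable items would have an impact factor of 3.0.
print(two_year_impact_factor(600, 200))  # 3.0
```

Note that the official JIF uses Clarivate's own counts of citations and citable items, so independently computed values may differ.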

Article-Level Metrics

Article-level metrics (ALMs) collect a variety of data points about an article that are used to measure its impact, describe how it has been integrated into a body of knowledge (socialization), and show how soon it was used or discussed after publication (immediacy).  ALMs typically capture the following information about an article (Tananbaum, 2013):

  • Usage: the number of times an article has been viewed, accessed, or downloaded, along with its supplemental data.
  • Captures: how often an article has been bookmarked in a citation tool (e.g., Mendeley), shared, or recommended.
  • Mentions:  the frequency with which an article is discussed in blogs, Wikipedia, or other public venues (e.g., news).
  • Social Media: the number of likes, tweets, shares, or other types of activity on various social media platforms; this often shows an article's immediacy.
  • Citations: the number of times an article is cited in published literature.

Although download counts and similar usage data are also known as alternative metrics ("altmetrics"), article-level metrics incorporate altmetrics rather than being synonymous with them; in other words, altmetrics are a type of article-level metric.

Source: Tananbaum, G. (2013).  Article-level metrics: A SPARC primer.  SPARC.


Author-Level Metrics

Author-level metrics are bibliometric measurements that capture the productivity and cumulative impact of the output of individual authors, researchers, and scholars.

The h-index, proposed by Jorge E. Hirsch (2005), is a flexible, widely used citation metric applicable to any set of citable documents.  It is a composite measure of the number of articles published by an author (productivity) and the number of citations to those publications (impact).

Various indexes (e.g., Web of Science) can calculate an h-index, but are likely to produce a different h for the same scholar, since databases vary in content and coverage. 
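Hirsch defined the h-index as the largest number h such that the author has h publications with at least h citations each, which is easy to compute from a list of citation counts.  A minimal sketch in Python (the sample citation counts are illustrative):

```python
def h_index(citations: list[int]) -> int:
    """Return the largest h such that h papers have >= h citations each."""
    h = 0
    # Rank papers from most to least cited; the h-index is the last
    # rank at which the paper's citation count still meets its rank.
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h


# Five papers cited 10, 8, 5, 4, and 3 times: four papers have at
# least 4 citations, but not five with at least 5, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Because each database computes h only from the citations it indexes, running this over citation counts from Web of Science, Scopus, or Google Scholar will typically yield different values for the same scholar.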

Limitations of the h-index

  • Every author of a publication receives full citation credit, regardless of position in the author list
  • Can be manipulated via self-citation
  • A score only takes into account the works indexed within a given database
  • A work's first citation can take years to materialize
  • Does not reflect the full influence of highly cited papers, since citations beyond the h threshold do not raise the score



Altmetrics are metrics and qualitative data that can be used alongside traditional citation-based measures to describe a work's impact. See the Altmetrics Research Guide for specific types of alternative metrics and what they capture.


Criticisms of Bibliometrics


Experts in academia and publishing have the following criticisms against bibliometrics:

Change in its Intended Use.  The discussion of bibliometrics originated in the late 1950s with scientists who wanted to explore citation networks and shorten the time needed to find relevant articles; now many organizations and institutions use bibliometrics to evaluate the productivity of researchers and scholars.

Its Use Among Disciplines.  Bibliometrics have been criticized as a 'one size fits all' method of evaluation applied to disciplines that do not assess their own work by the same means.  Some bibliometrics suit certain disciplines because of their publishing practices and norms, but not others.

Methodology.  The methods and formulas used in some bibliometrics are not transparent or available to the public.  Experts have shown evidence that some bibliometric companies omitted publications by rival publishers from their lists, and assert that some companies provided questionable explanations for changes made in their methodologies.

Various organizations and publications (e.g., PLoS) support professional statements such as the San Francisco Declaration on Research Assessment (DORA), which call for institutions and organizations to revisit their use of bibliometrics for evaluative purposes.  See the links below to gain a comprehensive understanding of these points:

Last updated on Feb 21, 2024 3:49 PM