Journal, author, and article impact is measured using bibliometrics, the application of quantitative analysis and statistics to measure specific qualities of a publication. Bibliometrics are frequently used to identify influential scholars, works, and publications. Scholars can use them to select influential journals to read and to decide where to publish their work. Institutions may use them to evaluate researcher/scholar productivity, especially for hiring, performance reviews, or promotion and tenure.
Frequently used bibliometrics include the h-index, Journal Impact Factor, SCImago Journal Rank (SJR), Source Normalized Impact per Paper (SNIP), and more.
Despite criticisms (see 'Criticisms Against Bibliometrics'), bibliometrics and other measures of impact are widely used in academia. Although scholars and authors should be aware of these issues, they should also consider bibliometrics to identify publications of interest for the following reasons:
Altmetrics largely emerged as a complement and alternative to typical bibliometric indicators. Instead of statistically analyzing citation counts and associations between publications, altmetrics measure captures, mentions, and other types of interactions on the web to demonstrate interest in various works. See the Altmetrics Research Guide for more information and how to determine altmetrics.
Journal-level metrics are bibliometrics that give a value such as a score or rank to a journal. Here are the most commonly known journal-level metrics:
Article-level metrics (ALMs) collect a variety of data points about an article that are used to measure its impact, describe the way it has been integrated into a body of knowledge (socialization), and show how soon it was used or discussed after its publication (immediacy). ALMs typically capture the following information on an article (Tannanbaum, 2013):
Some of these data points, such as the number of downloads, are also known as alternative metrics ("altmetrics") or usage data. Article-level metrics incorporate altmetrics rather than being synonymous with them; in other words, altmetrics are one type of article-level metric.
Source: Tannanbaum, G. (2013). Article-level metrics: A SPARC primer. SPARC. https://sparcopen.org/wp-content/uploads/2016/01/SPARC-ALM-Primer.pdf
Author-level metrics are bibliographic measurements that capture the productivity and cumulative impact of the output of individual authors, researchers, and scholars.
The h-index, proposed by Jorge E. Hirsch (2005), is a flexible, widely used citation metric applicable to any set of citable documents. It is a composite measure of the number of articles published by an author (productivity) and the number of citations to those publications (impact): an author has an h-index of h if h of their publications have been cited at least h times each.
Various indexes (e.g., Web of Science) can calculate an h-index, but each is likely to produce a different h for the same scholar, since databases vary in content and coverage.
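To illustrate how the h-index combines productivity and impact, the calculation can be sketched in a few lines of Python (the citation counts below are hypothetical):

```python
def h_index(citations):
    """Return the h-index for a list of per-publication citation counts."""
    # Rank publications from most to least cited, then find the largest
    # rank h such that the h-th ranked paper has at least h citations.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 4, and 3 times.
# Four papers have at least 4 citations each, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Because the calculation depends entirely on which publications and citations a database indexes, running it over Web of Science data versus another index will generally yield different values for the same author.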
Limitations of the h-index
Altmetrics are metrics and qualitative data, used in addition to traditional citation-based measures, that describe a work's impact. See the Altmetrics Research Guide for specific types of alternative metrics and what they capture.
Experts in academia and publishing have the following criticisms against bibliometrics:
Change in its Intended Use. The discussion of bibliometrics was begun in the late 1950s by scientists who wanted to explore citation networks and shorten the time needed to find relevant articles, but now many organizations and institutions use bibliometrics to evaluate the productivity of researchers and scholars.
Its Use Among Disciplines. Bibliometrics have been criticized as a 'one size fits all' method of evaluation applied to disciplines that do not rate their own work by the same means. Because publishing practices and norms vary, a bibliometric that suits one discipline may not suit another.
Methodology. The methods and formulas used in some bibliometrics are not transparent or available to the public. Experts have shown evidence that some bibliometric companies omitted publications by rival publishers from their lists, and assert some companies provided questionable explanations for changes made in their methodologies.
Various organizations and publications (e.g., PLoS) support professional statements such as the San Francisco Declaration on Research Assessment (DORA) that call for institutions and organizations to revisit their use of bibliometrics for evaluative purposes. See the links below to gain a comprehensive understanding of these points: