Journal / Research Impact |
Measuring journal, author, and article impact relies on bibliometrics, the application of quantitative analysis and statistics to measure specific qualities of a publication. Bibliometrics is frequently used to identify influential scholars, works, and publications. Scholars can use these measures to select influential journals to read and to decide where to publish their work. Institutions may use them to evaluate researcher and scholar productivity, especially for hiring, performance reviews, or promotion and tenure.
Frequently used bibliometrics include the h-index, Journal Impact Factor (JIF), SCImago Journal Rank (SJR), Source Normalized Impact per Paper (SNIP), and others.
Despite criticisms (see "Criticisms Against Bibliometrics" below), bibliometrics and other measures of impact are widely used in academia. Scholars and authors should be aware of these issues, but they can still use bibliometrics to identify publications of interest.
Altmetrics largely emerged as a complement and alternative to typical bibliometric indicators. Instead of statistically analyzing citation counts and associations between publications, altmetrics measure captures, mentions, and other types of interactions on the web to demonstrate interest in various works. See the Altmetrics LibGuide for more information and how to determine altmetrics.
Types of Metrics Explained |
Journal-level metrics are bibliometrics that assign a value (such as a score or a rank) to a journal. The five most common journal-level metrics are listed below; a sketch of the JIF calculation follows the list:
Journal Impact Factor (JIF)
Eigenfactor
CiteScore
SCImago Journal Rank
Source Normalized Impact per Paper (SNIP)
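Of these, the two-year JIF is the most widely cited: the number of citations a journal receives in a given year to items it published in the previous two years, divided by the number of citable items it published in those two years. A minimal sketch of that arithmetic, using hypothetical counts:

```python
# Two-year Journal Impact Factor (JIF), sketched with hypothetical counts.
# JIF for year Y = citations in Y to items published in Y-1 and Y-2,
# divided by the number of citable items published in Y-1 and Y-2.

def journal_impact_factor(citations_to_prior_two_years: int,
                          citable_items_prior_two_years: int) -> float:
    """Return the two-year JIF for a single journal."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 1,200 citations in 2018 to its 2016-2017 output
# of 240 citable articles and reviews.
print(journal_impact_factor(1200, 240))  # 5.0
```

The other metrics differ in window and weighting (CiteScore uses a longer citation window, and SJR weights citations by the citing source's prestige), so the same journal's scores are not directly comparable across metrics.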
Author-level metrics are bibliometric measures that capture the productivity and cumulative impact of the output of individual authors, researchers, and scholars.
The h-index, proposed by Jorge E. Hirsch (2005), is a flexible, widely used citation metric applicable to any set of citable documents. It is a composite measure of the number of articles an author has published (productivity) and the number of citations those publications have received (impact).
Various indexes (e.g., Web of Science) can calculate an h-index, but are likely to produce a different h for the same scholar, since databases vary in content and coverage.
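As an illustration of the calculation, the h-index can be found by sorting an author's citation counts in descending order and taking the largest rank h at which the paper in that position still has at least h citations. A minimal sketch, with hypothetical citation counts:

```python
# h-index: the largest h such that the author has h papers
# with at least h citations each (Hirsch, 2005).

def h_index(citation_counts: list[int]) -> int:
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical scholar with seven papers:
print(h_index([25, 18, 9, 6, 5, 2, 1]))  # 5: five papers have >= 5 citations
```

Running the same calculation over citation counts from different databases (e.g., Web of Science, Scopus, Google Scholar) will usually yield different values of h, for exactly the coverage reasons noted above.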
Limitations of the h-index
Find h-index
Article-level metrics (ALMs) collect a variety of data points about an article to measure its impact, describe the way it has been integrated into a body of knowledge (socialization), and show how soon it was used or discussed after its publication (immediacy) (Tananbaum, 2013).
Although the number of downloads is sometimes treated as an alternative metric ("altmetric," or usage data), article-level metrics incorporate altmetrics; in other words, altmetrics are a type of article-level metric.
Source: Tananbaum, G. (2013). Article-level metrics: A SPARC primer. SPARC. https://sparcopen.org/wp-content/uploads/2016/01/SPARC-ALM-Primer.pdf
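As a hypothetical illustration, and not any provider's actual schema, an ALM record for a single article might aggregate several data points and derive an immediacy figure from them:

```python
from datetime import date

# Hypothetical article-level metrics record; all field names and values
# are illustrative, not any vendor's actual schema.
alm_record = {
    "doi": "10.1234/example.5678",   # hypothetical DOI
    "published": date(2018, 3, 1),
    "views": 4200,                   # usage
    "downloads": 890,                # usage
    "bookmarks": 35,                 # captures
    "blog_and_news_mentions": 6,     # mentions
    "citations": 12,
    "first_discussed": date(2018, 3, 9),
}

# Immediacy: how soon the article was used or discussed after publication.
immediacy_days = (alm_record["first_discussed"] - alm_record["published"]).days
print(f"First discussed {immediacy_days} days after publication")  # 8 days
```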
Altmetrics are metrics and qualitative data that can be used alongside traditional impact measures to describe a work's impact. See the Altmetrics LibGuide for specific types of altmetrics and what they capture.
What is a good impact number?
It depends. One shortcoming of bibliometrics is that the numbers themselves are typically not normalized for field differences; disciplines vary widely in how heavily their publications are cited and scored. Compare the highest-ranking journals in two fields:
Library & Information Science: International Journal of Information Management (established 1980), JIF 5.063, 4,885 total cites.
Medicine, General & Internal: New England Journal of Medicine (established 1828), JIF 70.870, 344,591 total cites.
The length of time a publication has existed and the number of its articles that are cited also influence its bibliometric indicators; long-established publications tend to be favored. In short, no standardized ranking or scale identifies "high impact" across all disciplines.
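One remedy, and the idea behind field-normalized indicators such as SNIP, is to divide a raw citation figure by a baseline for its field. A rough sketch of that idea (the baselines below are hypothetical, and real indicators use far more refined methods):

```python
# Rough sketch of field normalization: divide an article's citation count
# by the average citations per article in its field. The baselines are
# hypothetical; indicators such as SNIP use more refined calculations.

FIELD_BASELINES = {
    "library & information science": 4.0,   # hypothetical mean citations
    "medicine, general & internal": 30.0,   # hypothetical mean citations
}

def normalized_impact(citations: int, field: str) -> float:
    return citations / FIELD_BASELINES[field]

# Twelve citations is well above average in LIS but below average in medicine:
print(normalized_impact(12, "library & information science"))  # 3.0
print(normalized_impact(12, "medicine, general & internal"))   # 0.4
```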
How can I determine impact factor or metric ranges?
1. Keep in mind that not all scholarly journals are included in indexes like Scopus or Web of Science; those that are not included typically do not have journal metrics.
2. Use tools such as Scopus Preview (for CiteScore) and similar resources to identify metrics; see the LibGuide, Find Impact Factors.
3. Comparisons. Comparing journals with others in a field or discipline is not an exact science; disciplines often overlap in what they study, and their boundaries can be unclear. Still, comparing journal metrics within a discipline is useful for exploratory purposes. For example, a geoscientist can get an idea of the CiteScore range for journals in the field (from a high of 31.07 to a low of 3.71 in 2018) and might mention this range in a review or in a promotion and tenure (P&T) bid, as sketched below.
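A sketch of that kind of within-field comparison, using the 2018 high and low quoted above and otherwise hypothetical CiteScores:

```python
# Where does a journal sit within its field's CiteScore range?
# 31.07 and 3.71 come from the 2018 geoscience example above;
# the other values are hypothetical.

geoscience_citescores = [31.07, 18.4, 12.2, 9.8, 7.5, 5.9, 4.6, 3.71]

def percentile_rank(score: float, field_scores: list[float]) -> float:
    """Percentage of journals in the field scoring at or below `score`."""
    at_or_below = sum(1 for s in field_scores if s <= score)
    return 100 * at_or_below / len(field_scores)

print(percentile_rank(9.8, geoscience_citescores))  # 62.5
```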
Citation Tracking & Analysis |
Citation counts are the number of times an article has been cited in other works. Although citation counts typically measure the degree to which a particular article is useful to other researchers in support of their work, these metrics are not a measure of the quality of a cited work, since a work can be cited for negative reasons (e.g., refutations or retractions).
Additionally, citation counts are highly dependent upon particular disciplines and the number of researchers in them. For example, more researchers work in neuroscience than in philosophy; as such, more papers are published in neuroscience than in philosophy, and neuroscience papers receive more citation counts than do philosophy papers.
Citation counts are found in discipline- and subject-specific indexes and databases. Note that citation counts are limited by their data source: because database coverage varies by content and discipline, a particular database counts only the citations it contains, even if additional citations exist outside that database.
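A toy illustration of that limitation, with hypothetical citing papers and database coverage sets:

```python
# A database can only count the citations it actually indexes.
# Paper IDs and coverage sets below are hypothetical.

citing_papers = {"p1", "p2", "p3", "p4", "p5"}  # everything citing our article

database_coverage = {
    "Database A": {"p1", "p2", "p3", "p9"},
    "Database B": {"p2", "p3", "p4", "p5", "p8"},
}

for name, indexed in database_coverage.items():
    count = len(citing_papers & indexed)
    print(f"{name}: {count} citations")  # A: 3, B: 4; neither sees all 5
```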
Further reading:
Bartoli, A., & Medvet, E. (2014). Bibliometric evaluation of researchers in the Internet age. The Information Society, 30(5), 349-354.
Garfield, E., et al. (1978). Citation data as science indicators. In Y. Elkana et al. (Eds.), Toward a metric of science: The advent of science indicators (pp. 179-207). New York: John Wiley.
Citation tracking, or citation analysis, is an important tool used to trace scholarly research, measure impact, and inform tenure and funding decisions. The impact of an article is evaluated by counting the number of times other authors cite it in their work. Researchers conduct citation analysis for several reasons; as one library guide puts it:
The output from citation studies is often the only way that non-specialists in governments and funding agencies, or even those in different scientific disciplines, can judge the importance of a piece of scientific research (Johns Hopkins University Library Guide, 2018).
Criticisms Against Bibliometrics |
Experts in academia and publishing have raised the following criticisms of bibliometrics:
Misuse. Bibliometrics originated in the late 1950s with scientists who wanted to explore citation networks and shorten the time needed to find relevant articles; today, however, many organizations and institutions use bibliometrics to evaluate the productivity of researchers and scholars.
Use Among Disciplines. Bibliometrics have been criticized as a 'one size fits all' method of evaluation applied to disciplines that do not rate their own work by the same means. Some bibliometrics apply to certain disciplines but not to others.
Methodology. The methods and formulas used in some bibliometrics are not transparent or available to the public. Experts have shown evidence that some bibliometric companies omitted publications by rival publishers from their lists, and assert that some companies provided questionable explanations for changes made in their methodologies.
Various organizations and publications (e.g., PLoS) support professional statements such as the San Francisco Declaration on Research Assessment (DORA), which call for institutions and organizations to revisit their use of bibliometrics for evaluative purposes. See the links below to gain a comprehensive understanding of these points: