Assessment of research impact
"Research impact" typically is defined as the extent to which scholarly research is read, discussed, and used, both inside and outside academe. Measuring impact is important for promotion and tenure, determining research quality, and to assess potential for grant funding.
Journal-Level Metrics give a value (a score or a rank) to a journal. There are a number of bibliometric indicators focusing on measuring the impact of scholarly journals, such as the Journal Impact Factor (JIF), Eigenfactor, CiteScore, SCImago Journal Rank (SJR), and Source Normalized Impact per Paper (SNIP). Depending on the discipline, other journal evaluation criteria can include publishing information, such as rate of acceptance, circulation, and where the journal is indexed; see the "Other Indicators" tab.
The Journal Impact Factor (JIF) is a measure of the frequency with which the "average article" published in a given scholarly journal has been cited in a particular year or period and is often used to measure or describe the importance of a particular journal to its field.
See Journal Citation Reports: A Primer on the JCR and Journal Impact Factor
What is the calculation for the Journal Impact Factor?
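The JIF for year X is the number of citations received in X to items the journal published in X-1 and X-2, divided by the number of citable items (articles and reviews) the journal published in X-1 and X-2. A minimal Python sketch, with invented numbers for a hypothetical journal:

```python
def journal_impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """JIF for year X: citations received in X to items published in X-1 and X-2,
    divided by the citable items (articles and reviews) published in X-1 and X-2."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 600 citations in 2023 to its 2021-2022 content,
# which comprised 200 citable items.
print(journal_impact_factor(600, 200))  # 3.0
```

Note that only articles and reviews count in the denominator, while citations to any item type count in the numerator, which is one of the frequent criticisms of the metric.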
How can I find the Journal Impact Factor for individual journals?
Journal Impact Factors can be found independently of JCR. For major publishers, Journal Impact Factors can usually be found on the journal's homepage or publisher's site (see example). Often, this information can be found in the section "About the Journal." The Library's subscription to Web of Science DOES NOT include access to JCR.
What is the underlying data?
Only journals that are selected for the Web of Science Science Citation Index Expanded (SCIE) and the Social Sciences Citation Index (SSCI) will be listed in JCR with Journal Impact Factors. You can also check Ulrichsweb to see if a journal has been assigned a Journal Impact Factor; "Journal Citation Reports" will be listed under "Key Features."
Warning: Disreputable journals may falsely display an Impact Factor on their websites. If you are unsure about a publisher, confirm with one of the resources above to see if the journal is included in Journal Citation Reports.
What are the limitations of the Journal Impact Factor?
Clarivate Analytics states under the section “Misuse of the Journal Impact Factor”:
The JIF was originally conceived as an aid for libraries in deciding which journals to purchase. JIF is a journal-level metric, so it’s not appropriate to use as a proxy measure for any other entity. The fact of a journal being highly cited really tells us little or nothing about the specific authors who have published in that journal. It is more appropriate to use Web of Science or InCites to measure the output and influence of authors, institutions, regions, or documents.
(Journal Citation Reports: A Primer on the JCR and Journal Impact Factor, p.6)
The Eigenfactor, developed by Jevin West and Carl Bergstrom at the University of Washington, is intended to reflect the influence and prestige of journals. Citations from highly ranked journals are weighted to make a larger contribution to the score (i.e., the value of a single publication in a major journal vs. many publications in minor journals).
Where can I find the Eigenfactor?
You can find the Eigenfactor score at the website http://www.eigenfactor.org.
What is the calculation for the Eigenfactor?
The calculation is based on citations made in a given year to papers published in the prior five years: The Eigenfactor of journal J in year X is defined as the percentage of weighted citations received by J in X to any item published in (X-1), (X-2), (X-3), (X-4), or (X-5), out of the total citations received by all journals in the dataset.
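The weighting itself comes from an eigenvector (PageRank-style) computation over the journal citation network. Below is a toy sketch with an invented three-journal citation matrix; the real algorithm also excludes journal self-citations, handles journals that cite nothing, and mixes in a teleportation term, all omitted here:

```python
import numpy as np

# Toy citation matrix: C[i][j] = citations from journal j's articles in year X
# to journal i's articles published in the prior five years. Invented numbers.
C = np.array([[0., 4., 2.],
              [3., 0., 1.],
              [1., 2., 0.]])

# Column-normalize so each journal's outgoing citations sum to 1.
H = C / C.sum(axis=0)

# Power iteration: the leading eigenvector gives each journal's influence,
# so a citation from an influential journal is worth more.
v = np.ones(3) / 3
for _ in range(100):
    v = H @ v
    v /= v.sum()

print(np.round(v, 3))  # influence weights, summing to 1
```

The final Eigenfactor scores are then scaled so that they sum to 100 across all journals in the dataset.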
What is the underlying data?
Like the Journal Impact Factor, the Eigenfactor is based on citation data held in Clarivate's Journal Citation Reports.
CiteScore is an Elsevier product that calculates the average number of citations received in a calendar year by all items published in that journal in the preceding three years. CiteScore metrics are available for all serial titles indexed in the Scopus title list that have enough data available to calculate the metric. See About CiteScore and its related metrics.
Where can I find the CiteScore?
Elsevier's Journal Metrics site tracks CiteScore, SNIP, and SJR. You can refine by subject, title, and year.
What is the calculation for CiteScore?
The number of citations received by a journal in one year to documents published in the three previous years, divided by the number of documents indexed in Scopus published in those same three years.
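Under the three-year definition described here (Elsevier has since revised the window), the arithmetic is a simple ratio. A sketch with invented numbers:

```python
def citescore(citations_in_year, docs_prior_three_years):
    """CiteScore for year X (three-year version): citations received in X
    to documents published in X-1, X-2, and X-3, divided by the number
    of documents indexed in Scopus in those same three years."""
    return citations_in_year / docs_prior_three_years

# Hypothetical journal: 900 citations in 2023 to its 2020-2022 documents,
# of which Scopus indexed 300. Unlike the JIF denominator, all document
# types count here, not just articles and reviews.
print(citescore(900, 300))  # 3.0
```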
Difference from Journal Impact Factor:
[Image: side-by-side comparison of the CiteScore and Journal Impact Factor calculations. Source: http://libguides.lb.polyu.edu.hk/journalimpact]
What is the underlying data?
Scopus indexes nearly 22,000 peer-reviewed journals, trade publications, book series, conference papers, and patents across the scientific, technical, medical, and social sciences (including arts and humanities).
Commentary on CiteScore
The Measure of All Things: Some Notes on CiteScore (The Scholarly Kitchen, 1/11/17). An evaluation posted by independent management consultant Joseph Esposito.
SCImago Journal & Country Rank. The SCImago Journal Rank (SJR) expresses the average number of weighted citations received in the selected year by the documents published in the selected journal in the three previous years. Much like the Eigenfactor, a citation from an important journal counts more than one coming from a less important journal.
Where can I find the SJR?
What is the calculation for the SJR?
The SJR of journal J in year X is the number of weighted citations received by J in X to any item published in J in (X-1), (X-2) or (X-3), divided by the total number of articles and reviews published in (X-1), (X-2) or (X-3). For more details see Understanding indicators, tables and charts.
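Because each citation carries a prestige weight (derived, in the real indicator, from an iterative PageRank-like computation over the whole Scopus network), the ratio uses weighted rather than raw counts. A toy example with invented journals and weights:

```python
# (source journal, citations to journal J, prestige weight of source) -- all invented
citations_to_J = [("Journal A", 10, 1.5),
                  ("Journal B", 20, 1.0),
                  ("Journal C", 30, 0.4)]

# Weighted citations received by J in year X: 10*1.5 + 20*1.0 + 30*0.4
weighted = sum(count * weight for _, count, weight in citations_to_J)

# Articles and reviews J published in (X-1), (X-2), and (X-3)
items = 25

print(weighted / items)  # 1.88
```

Note how the 30 citations from the low-prestige Journal C contribute less to the score than the 20 citations from Journal B.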
What is the underlying data?
SJR is based on citation data of the more than 20,000 peer-reviewed journals indexed by Scopus from 1996 onwards.
SNIP (Source Normalized Impact per Paper)
SNIP measures a source’s contextual citation impact by weighting citations based on the total number of citations in a subject field. It helps you make a direct comparison of sources in different subject fields.
Calculation for the SNIP Indicator
SNIP is the ratio of a source's average citation count per paper and the citation potential of its subject field. See the Methodology.
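In other words, SNIP divides a journal's raw impact per paper (its average citations per paper) by the citation potential of its field, roughly, how densely papers in that field cite. A sketch with invented numbers, showing how the same raw impact normalizes differently across fields:

```python
def snip(raw_impact_per_paper, citation_potential):
    """SNIP: a source's average citations per paper divided by the
    citation potential (citation density) of its subject field."""
    return raw_impact_per_paper / citation_potential

# Same raw impact per paper, different fields (all numbers invented):
print(snip(4.0, 2.0))  # sparse-citing field (e.g., mathematics) -> 2.0
print(snip(4.0, 8.0))  # dense-citing field (e.g., cell biology) -> 0.5
```

This is what makes SNIP suitable for comparing journals across subject fields, unlike the unnormalized JIF or CiteScore.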
Where do I find SNIP?
What is the underlying data?
SNIP is based on citation data of the more than 20,000 peer-reviewed journals indexed by Scopus from 1996 onwards.
Journal h-index is one measure of the quality of a journal and can be calculated using data from Web of Science, Scopus or Google Scholar. As with the impact factor, journal h-index does not take into account differing citation practices of fields (unlike the weighted SJR and SNIP) and so is best used to compare journals within a field. The h-index publication window can be selected to best suit the citation practices of a discipline.
Google Scholar Metrics. Go to Google Scholar and select Metrics from the left-hand menu. You can find Classic Papers and Top Publications, and search for an individual journal.
Depending on the discipline, other journal evaluation criteria can include publishing information such as rate of acceptance, peer review process, circulation, and where the journal is indexed.
See Choosing a Journal to find links to these resources.
Author-level metrics are bibliographic measurements that capture the productivity and cumulative impact of the output of individual authors, researchers, and scholars. The h-index, proposed by Jorge E. Hirsch (2005), is a flexible, widely-used citation metric applicable to any set of citable documents. Various databases (e.g., Web of Science) can calculate an h-index, but are likely to produce a different h for the same scholar, since databases vary in content and coverage.
What is the h-index?
How is the h-index used?
What is the underlying data for the h-index?
How is the h-index different from other citation metrics?
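The questions above turn on Hirsch's definition: a scholar has index h if h of their papers have each been cited at least h times. A minimal sketch of the computation:

```python
def h_index(citation_counts):
    """h-index: the largest h such that the author has at least h papers
    each cited at least h times (Hirsch, 2005)."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank   # the paper at this rank still has enough citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with >= 4 citations each
print(h_index([25, 8, 5, 3, 3]))  # 3: one very highly cited paper can't raise h alone
```

The second example illustrates why the h-index differs from raw citation counts: a single blockbuster paper does not inflate it.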
Hirsch, J. E. (2005). An index to quantify an individual's scientific research output. PNAS 102(46), 16569-16572.
Article-level metrics (ALMs) measure the use and impact of individual scholarly articles. They include traditional impact measures (e.g., citation counts) and more contemporary measures (e.g., number of downloads). Measures such as downloads are known as alternative metrics ("altmetrics," or usage data); article-level metrics incorporate altmetrics, so altmetrics are best understood as one type of article-level metric.
In the context of article-level metrics, citation counts are the number of times an article has been cited in other works. While citation counts typically measure the degree to which a particular article is useful to other researchers in support of their work, these metrics are not a measure of the quality of a cited work, since a work can be cited for negative reasons (e.g., refutations). Additionally, citation counts are highly dependent upon particular disciplines and the number of researchers in them. For example, more researchers work in neuroscience than in philosophy; as such, more papers are published in neuroscience than in philosophy, and thus neuroscience papers receive more citation counts than do philosophy papers.
Citation counts are found in discipline- and subject-specific indexes and databases. Note that citation counts are limited by their citation data source: since database coverage varies by content and discipline, a particular database will count only the citations contained within it, even if additional citations exist outside that database. Prominent citation indexes and databases include Web of Science, Google Scholar, and Scopus. See the FAU Libraries’ Scholarly Publishing web page (and "Disciplinary Indexes" specifically) for a comprehensive list of disciplinary indexes and databases in which citation counts can be located.
An example of citation counts in Web of Science:
...and in Google Scholar:
Meanwhile, here is Usage Count, one example of an alternative metric ("altmetric"):
Bartoli, A. & Medvet, E. (2014). Bibliometric Evaluation of Researchers in the Internet Age. Information Society 30(5), 349-354.
Garfield, E. et al. (1978). Citation data as science indicators. In: Elkana, Y. et al. (eds): Toward a Metric of Science: The Advent of Science Indicators. John Wiley, New York: pp. 179-207.
Citation tracking, or citation analysis, is an important tool used to trace scholarly research, measure impact, and inform tenure and funding decisions. The impact of an article is evaluated by counting the number of times other authors cite it in their work. Researchers perform citation analysis for several reasons:
The output from citation studies is often the only way that non-specialists in governments and funding agencies, or even those in different scientific disciplines, can judge the importance of a piece of scientific research (Johns Hopkins University Library Guide, 2018)
The flow, dissemination, and interaction of online research can now be tracked and analyzed beyond what was traditionally accepted as the signifiers of prestige and impact.
Altmetrics have the potential to answer these questions:
Examples of measurements:
The Altmetric Attention Score is generated by Altmetric.com. The colorful doughnut is integrated into many publisher sites and other search platforms. Altmetric offers some free tools for researchers, such as the Altmetric Bookmarklet.
Plum Analytics (Elsevier), like Altmetric.com, tracks altmetrics. Its visual score is the "Plum Print," which is integrated into platforms that have agreements with Elsevier (e.g., CINAHL, ScienceDirect).
The Libraries have a subscription to PlumX, a dashboard that allows FAU affiliates to track usage of their scholarly works and run reports. See PlumX at FAU for more information.
Impactstory, funded by the National Science Foundation, is an open-source website that helps researchers explore and share the online impact of their research. See an example profile.
ALM Reports allows you to view article-level metrics for any set of PLOS articles.
Kudos is a free resource that tracks your alternative metrics and provides tools to better promote your work.
Register NOW and get your unique ORCID identifier in 30 seconds.
ORCID (Open Researcher and Contributor ID) is a free and open registry of unique identifiers for researchers and scholars. Unlike Google Scholar, Academia.edu, or ResearchGate, ORCID is NOT primarily a research profile system (although it can serve that purpose). In fact, having and using an ORCID iD will facilitate the maintenance of researcher profiles that you already have.
Signing up for an ORCID identifier and using it in your research workflows will ensure that you receive credit for your work, simplify manuscript submissions and improve author search results. ORCID is an increasingly important part of the global research infrastructure, with many funding bodies and publishers now making it a requirement.
Registered? Now, make the most out of your ORCID iD:
See more instructional videos on the ORCID Vimeo channel.
ResearchGate is an online research community in which you can share updates about your research and publications, and obtain citation counts and your h-index.
To create a ResearchGate profile:
If you have published in a Scopus-indexed (Elsevier) journal, you have been assigned a Scopus Author ID. To find your Scopus ID, go to https://www.scopus.com/freelookup/form/author.uri. You can link your Scopus ID to your ORCID account.
Your Google Scholar profile includes a list of articles you have placed into the profile, with "cited by" links for each article. Google Scholar displays a graph showing citation activity, and calculates your total number of citations, as well as your h-index and i10-index.
To create your Google Scholar profile: