Impact factor
The impact factor or journal impact factor of an academic journal is a type of journal ranking. Journals with higher impact factor values are considered more prestigious or important within their field. The impact factor is a scientometric index calculated by Clarivate's Web of Science. It reflects the yearly mean number of citations received in a given year by articles the journal published in the two preceding years. While frequently used by universities and funding bodies to inform decisions on promotion and research proposals, it has been criticised for distorting good scientific practices.
History
The impact factor was devised by Eugene Garfield, the founder of the Institute for Scientific Information (ISI) in Philadelphia. Impact factors began to be calculated yearly starting from 1975 for journals listed in the Journal Citation Reports (JCR). ISI was acquired by Thomson Scientific & Healthcare in 1992, and became known as Thomson ISI. In 2018, Thomson Reuters spun off and sold ISI to Onex Corporation and Baring Private Equity Asia. They founded a new corporation, Clarivate, which continued publishing the JCR.
Calculation
In any given year, the two-year journal impact factor is the ratio between the number of citations received in that year for publications from the two preceding years and the total number of "citable items" published in that journal during the two preceding years. For example, a 2024 impact factor would be calculated as follows:

IF_2024 = (citations received in 2024 to items published in 2022 and 2023) / (number of citable items published in 2022 and 2023)

As a concrete case, Nature had an impact factor of 41.577 in 2017.
This means that, on average, its papers published in 2015 and 2016 received roughly 42 citations each in 2017. 2017 impact factors are reported in 2018; they cannot be calculated until all of the 2017 publications have been processed by the indexing agency.
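To make the arithmetic concrete, here is a minimal sketch in Python of the two-year calculation; the counts are placeholders chosen for illustration, not figures taken from the Journal Citation Reports.

```python
def two_year_impact_factor(citations_to_prev_two_years: int,
                           citable_items_prev_two_years: int) -> float:
    """Citations received this year to items from the two preceding years,
    divided by the number of citable items published in those two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Placeholder counts (not real JCR data): 8,400 citations received in 2017
# to the 200 citable items a journal published in 2015 and 2016.
print(two_year_impact_factor(8_400, 200))  # 42.0
```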
The value of the impact factor depends on how "citations" and "publications" are defined; the latter are often referred to as "citable items". In current practice, both "citations" and "publications" are defined exclusively by ISI as follows: "publications" are items classed as "article", "review" or "proceedings paper" in the Web of Science (WoS) database; other items such as editorials, corrections, notes, retractions and discussions are excluded. WoS is accessible to all registered users, who can independently verify the number of citable items for a given journal. In contrast, the number of citations is extracted not from the WoS database, but from a dedicated JCR database, which is not accessible to general readers. Hence, the commonly used "JCR Impact Factor" is a proprietary value, which is defined and calculated by ISI and cannot be verified by external users.
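As an illustration of how the denominator depends on this classification, the following sketch counts only items of the citable types; the records and field names are invented for the example and do not reflect the actual Web of Science schema.

```python
# Illustrative records; the field names are invented, not a Web of Science schema.
items_2015_2016 = [
    {"title": "Original study",    "doc_type": "Article"},
    {"title": "Field overview",    "doc_type": "Review"},
    {"title": "Conference result", "doc_type": "Proceedings Paper"},
    {"title": "Editor's note",     "doc_type": "Editorial"},   # not a citable item
    {"title": "Correction notice", "doc_type": "Correction"},  # not a citable item
]

CITABLE_TYPES = {"Article", "Review", "Proceedings Paper"}
citable_items = sum(1 for item in items_2015_2016 if item["doc_type"] in CITABLE_TYPES)
print(citable_items)  # 3 -> the denominator of the impact factor
```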
New journals, which are indexed from their first published issue, will receive an impact factor after two years of indexing; in this case, the citations to the year prior to volume 1, and the number of articles published in the year prior to volume 1, are known zero values. Journals that are indexed starting with a volume other than the first volume will not get an impact factor until they have been indexed for three years. Occasionally, Journal Citation Reports assigns an impact factor to new journals with less than two years of indexing, based on partial citation data. The calculation always uses two complete and known years of item counts, but for new titles one of the known counts is zero. Annuals and other irregular publications sometimes publish no items in a particular year, affecting the count. The impact factor relates to a specific time period; it is possible to calculate it for any desired period. For example, the JCR also includes a five-year impact factor, which is calculated by dividing the number of citations to the journal in a given year by the number of articles published in that journal in the previous five years.
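The five-year variant follows the same ratio over a longer window. A sketch of a generalised n-year calculation, assuming per-year citation and publication counts are available, might look like this; the counts are again placeholders.

```python
def n_year_impact_factor(citations_per_year: dict[int, int],
                         items_per_year: dict[int, int],
                         year: int, window: int = 5) -> float:
    """Citations received in `year` to items published in the `window` preceding
    years, divided by the number of citable items published in those years."""
    prev_years = range(year - window, year)
    citations = sum(citations_per_year.get(y, 0) for y in prev_years)
    items = sum(items_per_year.get(y, 0) for y in prev_years)
    return citations / items

# Placeholder per-year counts (not real JCR data).
citations = {2019: 900, 2020: 1_100, 2021: 1_300, 2022: 1_500, 2023: 1_200}
items = {2019: 150, 2020: 160, 2021: 170, 2022: 180, 2023: 190}
print(round(n_year_impact_factor(citations, items, 2024), 2))  # 7.06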
Use
While originally invented as a tool to help university librarians decide which journals to purchase, the impact factor soon came to be used as a measure for judging academic success. This use of impact factors was summarised by Hoeffel in 1998:

Impact Factor is not a perfect tool to measure the quality of articles but there is nothing better and it has the advantage of already being in existence and is, therefore, a good technique for scientific evaluation. Experience has shown that in each specialty the best journals are those in which it is most difficult to have an article accepted, and these are the journals that have a high impact factor. Most of these journals existed long before the impact factor was devised. The use of impact factor as a measure of quality is widespread because it fits well with the opinion we have in each field of the best journals in our specialty.... In conclusion, prestigious journals publish papers of high level. Therefore, their impact factor is high, and not the contrary.
As impact factors are a journal-level metric, rather than an article- or individual-level metric, this use is controversial. Eugene Garfield, the inventor of the JIF, agreed with Hoeffel but warned about the "misuse in evaluating individuals" because there is "a wide variation from article to article within a single journal". Despite this warning, the use of the JIF has evolved, playing a key role in assessing individual researchers, their job applications and their funding proposals. In 2007, The Journal of Cell Biology noted that:
Impact factor data... have a strong influence on the scientific community, affecting decisions on where to publish, whom to promote or hire, the success of grant applications, and even salary bonuses.
More targeted research has begun to provide firm evidence of how deeply the impact factor is embedded within formal and informal research assessment processes. A 2019 review studied how often the JIF featured in documents related to the review, promotion, and tenure of scientists in US and Canadian universities. It concluded that 40% of universities focused on academic research specifically mentioned the JIF in such review, promotion, and tenure processes. A 2017 study of how researchers in the life sciences behave described their "everyday decision-making practices as highly governed by pressures to publish in high-impact journals." It has been argued that the deeply embedded nature of such indicators not only affects research assessment but also the more fundamental issue of what research is actually undertaken: "Given the current ways of evaluation and valuing research, risky, lengthy, and unorthodox projects rarely take center stage."
Criticism
Numerous critiques have been made regarding the use of impact factors, both in terms of their statistical validity and of their implications for how science is carried out and assessed. A 2007 study noted that the most fundamental flaw is that impact factors present the mean of data that are not normally distributed, and suggested that it would be more appropriate to present the median of these data. There is also a more general debate on the validity of the impact factor as a measure of journal importance and the effect of policies that editors may adopt to boost their impact factor. Other criticism focuses on the effect of the impact factor on the behavior of scholars, editors and other stakeholders. While the emphasis on high-impact journals may lead to strategic publishing practices that prioritize journal prestige over the quality and relevance of research, it is important to acknowledge the "privilege paradox": younger researchers, particularly those from under-represented regions, often lack the established reputation or networks to secure recognition outside of these metrics. This can lead to a narrow focus on publishing in top-tier journals, potentially compromising the diversity of research topics and methodologies. Further criticisms argue that emphasis on the impact factor results from the negative influence of neoliberal politics on academia. Some of these arguments demand not just the replacement of the impact factor with more sophisticated metrics but also discussion of the social value of research assessment and the growing precariousness of scientific careers in higher education.
Inapplicability of impact factor to individuals and between-discipline differences
It has been stated that impact factors in particular and citation analysis in general are affected by field-dependent factors which invalidate comparisons not only across disciplines but even within different fields of research of one discipline. The percentage of total citations occurring in the first two years after publication also varies highly among disciplines, from 1–3% in the mathematical and physical sciences to 5–8% in the biological sciences. Thus impact factors cannot be used to compare journals across disciplines.

Impact factors are sometimes used to evaluate not only the journals but the papers therein, thereby devaluing papers in certain subjects. In 2004, the Higher Education Funding Council for England was urged by the House of Commons Science and Technology Select Committee to remind Research Assessment Exercise panels that they are obliged to assess the quality of the content of individual articles, not the reputation of the journal in which they are published. Other studies have repeatedly stated that impact factor is a metric for journals and should not be used to assess individual researchers or institutions.
Questionable editorial policies that affect the impact factor
Because the impact factor is commonly accepted as a proxy for research quality, some journals adopt editorial policies and practices, some acceptable and some of dubious purpose, to increase their impact factor. For example, journals may publish a larger percentage of review articles, which are generally cited more than research reports. Research undertaken in 2020 on dentistry journals concluded that the publication of "systematic reviews have significant effect on the Journal Impact Factor... while papers publishing clinical trials bear no influence on this factor. Greater yearly average of published papers... means a higher impact factor."

Journals may also attempt to limit the number of "citable items" (i.e., the denominator of the impact factor equation), either by declining to publish articles that are unlikely to be cited or by altering articles. As a result of negotiations over whether items are "citable", impact factor variations of more than 300% have been observed. Items considered to be uncitable (and thus not incorporated in impact factor calculations) can, if cited, still enter into the numerator part of the equation despite the ease with which such citations could be excluded. This effect is hard to evaluate, for the distinction between editorial comment and short original articles is not always obvious. For example, letters to the editor may be part of either class.
Another, less insidious, tactic journals employ is to publish a large portion of their papers, or at least the papers expected to be highly cited, early in the calendar year. This gives those papers more time to gather citations. Several methods, not necessarily with nefarious intent, exist for a journal to cite articles in the same journal, which will increase the journal's impact factor.
Beyond editorial policies that may skew the impact factor, journals can take overt steps to game the system. For example, in 2007, the specialist journal Folia Phoniatrica et Logopaedica, with an impact factor of 0.66, published an editorial that cited all its articles from 2005 to 2006 in a protest against the "absurd scientific situation in some countries" related to use of the impact factor. The large number of citations meant that the impact factor for that journal increased to 1.44. As a result of the increase, the journal was not included in the 2008 and 2009 Journal Citation Reports.
Coercive citation is a practice in which an editor forces an author to add extraneous citations to an article before the journal will agree to publish it, in order to inflate the journal's impact factor. A survey published in 2012 indicates that coercive citation has been experienced by one in five researchers working in economics, sociology, psychology, and multiple business disciplines, and it is more common in business and in journals with a lower impact factor. Editors of leading business journals banded together to disavow the practice. However, cases of coercive citation have occasionally been reported for other disciplines.