
Special report

Bibliometric indicators and core journals in physical and rehabilitation medicine

Franco Franchignoni, MD1 and Susana Muñoz Lasa, MD2

From the 1Unit of Occupational Rehabilitation and Ergonomics, “Salvatore Maugeri” Foundation, Clinica del Lavoro e della Riabilitazione, IRCCS, Rehabilitation Institute of Veruno (NO), Italy and 2Department Medicina Fisica y Rehabilitacion, Universidad Complutense, Madrid, Spain

Background and objective: The concept of the “standing” of scientific journals (in terms of influence, prestige, popularity, etc.) is multi-dimensional and cannot be captured adequately by a single indicator. The aim of this report is to compare and comment on different bibliometric indicators related to some leading journals in rehabilitation, in order to provide further insights regarding their practical usefulness for Physical and Rehabilitation Medicine.

Discussion: The commonly used Journal Impact Factor and the new SCImago Journal Rank indicator are measures of average “impact per paper”. Other new measures show potentially useful complementarities with them and warrant further attention. For example, the Eigenfactor score represents a measure of total “citation impact” and seems to express the “importance” of a journal satisfactorily. In fact, the information conveyed by the Eigenfactor score corresponds to a general consensus of journal status in Physical and Rehabilitation Medicine, as expressed by the European Consensus Committee on “International Rehabilitation Journals” and captured by a survey among European Physical and Rehabilitation Medicine researchers.

Key words: bibliometric analysis; impact factor; Eigenfactor Score; Physical and Rehabilitation Medicine.

J Rehabil Med 2011; 43: 471–476

Correspondence address: Franco Franchignoni, Fondazione Salvatore Maugeri, Clinica del Lavoro e della Riabilitazione, IRCCS, Via Revislate 13, I-28010 Veruno (NO), Italy. E-mail: franco.franchignoni@fsm.it

INTRODUCTION

There is no one simple bibliometric indicator that can express the “standing” (in terms of influence, prestige, popularity, etc.) of a scientific journal.

The Journal Impact Factor (JIF) is perhaps the best-known bibliometric measure of scientific impact, owing to its simple and intuitive definition. It reflects the frequency with which the “average article” in a journal has been cited in a particular period. It is produced by the Institute for Scientific Information (ISI; currently Thomson Reuters, a US private commercial enterprise) and is included in its Journal Citation Reports (JCR), now available through the Web of Science (WoS) portal. Until recently, the WoS was the only citation database for conducting extensive citation searching and bibliometric analysis, widely covering scholarly scientific and technical literature.

Although the JCR itself warns that the JIF should not be relied on as the sole source of information when comparing and evaluating publications (and particularly when comparing citation counts of different disciplines), this index has been frequently used as an exclusive proxy for the relative importance of a journal (with journals with higher impact factors deemed to be more important than those with lower ones). However, many drawbacks of the JIF have been extensively discussed (1, 2), and it is clear that at present it is a far-from-perfect measure of scientific impact (3). Among others, it has been pointed out that the JIF counts the number of citations received, but ignores any information about the sources of those citations.

In the past few years several new databases and tools providing citation-searching capabilities have been developed (4, 5). This has created competition, which benefits users, and some of the tools are sufficiently comprehensive and/or multidisciplinary to pose a direct challenge to the dominance of the WoS. In parallel, new measures of the scientific performance of journals have been proposed and discussed (6–8). However, at present it is not clear which measures best express the various aspects and interpretations of concepts such as “impact”, “influence”, “prestige”, etc. (9).

The aim of this report is to compare and comment on different (and often new) bibliometric indicators related to some leading journals in rehabilitation, in order to provide further insights regarding their practical usefulness for Physical and Rehabilitation Medicine (PRM).

METHODS

Selection of journals

For the analyses, the first 16 journals in the category Rehabilitation of the JCR – Science Edition 2009 (the latest available on the web) were taken into account. Only one journal (Supportive Care in Cancer) was preliminarily discarded, as it was judged by expert opinion not to belong to this category (incidentally, it is indexed in Scopus/SCImago under the category “Oncology”). Two independent reviewers (FF, SML) extracted data regarding the 15 remaining journals (see Table II). Data were imported into an Excel spreadsheet for further analysis. Any disagreement was resolved by iteration and consensus.

Bibliometric indicators

JIF and 5-Year JIF (5Y-JIF). The JIF represents “the average number of times articles from the journal published in the past 2 years have been cited in the JCR year” (1). For example, the 2009 JIF of journal X is calculated by dividing the number of 2009 citations (in journals indexed by Thomson ISI) of all articles published by journal X in 2007 and 2008 by the total number of articles deemed to be “citable” by Thomson ISI that were published in journal X in 2007 and 2008.

Similarly, the 5Y-JIF is “the average number of times articles from the journal published in the past 5 years have been cited in the JCR year”.

The 2009 JIF and 5Y-JIF are reported here.
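To make the arithmetic above concrete, here is a minimal Python sketch; the journal name and all citation and article counts are invented for illustration.

```python
# Illustrative sketch of the JIF arithmetic defined above (hypothetical data).

def impact_factor(citations_in_census_year: float, citable_items: int) -> float:
    """Citations received in the census year to items published in the target
    window, divided by the number of citable items published in that window."""
    return citations_in_census_year / citable_items

# Hypothetical journal X: 2009 citations to its 2007-2008 articles (2-year window).
jif_2009 = impact_factor(citations_in_census_year=450, citable_items=200)
print(f"2009 JIF: {jif_2009:.3f}")      # 2.250

# Same idea over a 5-year target window (2004-2008) for the 5Y-JIF.
jif5_2009 = impact_factor(citations_in_census_year=1300, citable_items=520)
print(f"2009 5Y-JIF: {jif5_2009:.3f}")  # 2.500
```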

Eigenfactor Score (EFS) and Article Influence Score (AIS). According to the creators of the EFS, this indicator should give a measure of a journal’s importance within the network of academic citations (7). The EFS calculation is based on the number of times articles published in the journal in the past 5 years have been cited in a given year, but it also considers (using an iterative ranking scheme) which journals have contributed these citations, so that highly cited journals influence the network more than lesser-cited journals (Fig. 1). Each contribution is corrected for differences in citation patterns across disciplines and journals, and journal self-citations are removed (see www.eigenfactor.org/methods.htm). The basic idea behind the Eigenfactor metrics is that the citations published by scholarly journals form a vast network linking the collective research output. In Fig. 1, each node in the network represents an individual journal, and each arrow represents citations from one journal to another. The links are weighted and directed: strong weights represent large numbers of citations, and the direction of the arrow indicates the direction of the citations. The Thomson ISI JCR is the reference database for the EFS.


Fig. 1. A simplified citation network, analysed with an iterative ranking scheme. Arrows indicate citations from each of the 4 journals (A, B, C, and D) to one another. The method defines an iterative algorithm that computes centrality values until a steady-state solution is reached. The importance (prestige) of the nodes (journals) is redistributed at each iteration according to their connections with other nodes. At the end, larger circles represent more important journals.

The AIS is calculated by dividing a journal’s EFS by the number of articles published by the journal (normalized as a fraction of all articles in all publications). This index thus allows a per-article comparison based on the Eigenfactor approach, and aims to determine the average influence of a journal’s articles over the first 5 years after publication. The AIS is therefore comparable to the 5Y-JIF, except that the citations are weighted to reflect the “influence” of the citing journals.

The 2009 EFS and AIS, available in the online version of the JCR, were taken into account.
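To illustrate the iterative ranking scheme sketched in Fig. 1, the following toy Python example runs a power iteration on a small 4-journal citation network and then derives an AIS-like per-article score. The matrix, journal labels and article counts are invented, and the real algorithm (see www.eigenfactor.org/methods.htm) adds refinements (e.g. handling of dangling nodes and a teleportation term) that are omitted here.

```python
# Toy sketch of the eigenvector-centrality idea behind the EFS and AIS.
import numpy as np

journals = ["A", "B", "C", "D"]
# cites[i, j] = citations FROM journal j TO journal i over a 5-year window;
# the diagonal is zero because journal self-citations are removed.
cites = np.array([[0, 4, 1, 3],
                  [2, 0, 5, 1],
                  [1, 2, 0, 1],
                  [1, 1, 2, 0]], dtype=float)

# Column-normalize: each journal's outgoing citations sum to 1, so a journal
# "votes" with fractions of its own influence rather than raw counts.
H = cites / cites.sum(axis=0)

influence = np.full(len(journals), 1 / len(journals))
for _ in range(100):              # iterate until a steady state is reached
    influence = H @ influence
    influence /= influence.sum()
print(dict(zip(journals, influence.round(3))))   # total-impact (EFS-like) scores

# AIS-like step: divide each journal's influence by its share of all articles,
# yielding a size-independent, per-article score comparable to the 5Y-JIF.
articles = np.array([120, 80, 40, 60], dtype=float)
print(dict(zip(journals, (influence / (articles / articles.sum())).round(3))))
```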

SCImago Journal Rank indicator (SJR). According to its creators, the SJR aims to measure the current “average prestige per paper” of journals (6). It is calculated through an iterative process (similar to those used to calculate PageRank and the EFS) that computes the “importance” each journal gains from citations made, during the past 3 years, to its articles published in the same 3-year period; this amount is then divided by the total number of articles the journal published during that period. The amount of “prestige” that each journal transfers to another journal in the network is proportional to the percentage of its citations that refer to articles of the latter journal (see www.scimagojr.com/SCImagoJournalRank.pdf) (6). The SJR is based on Scopus, at present the world’s largest electronic database of abstracts and citations for peer-reviewed literature (4, 5). Overall, the SJR is a size-independent hybrid metric that measures the current average performance per paper of journals (like the JIF, 5Y-JIF and AIS) using an approach based on eigenvector centrality (like the EFS) (10, 11). The last available SJR (2008) is considered in this report.
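The prestige-transfer step described above can be sketched in the same style; again the reference counts and article numbers are invented, and the full specification (3-year windows, normalizations, self-citation limits) is in the SCImago document cited above.

```python
# Hedged sketch of an SJR-like "prestige per paper" computation (toy data).
import numpy as np

# refs[j, i] = references from journal j that point to journal i.
refs = np.array([[0, 6, 2],
                 [3, 0, 1],
                 [4, 4, 0]], dtype=float)

# The share of prestige journal j transfers to journal i is the fraction of
# j's outgoing references that cite i.
transfer = refs / refs.sum(axis=1, keepdims=True)

prestige = np.ones(3) / 3
for _ in range(100):                  # same iterative idea as for the EFS
    prestige = transfer.T @ prestige
    prestige /= prestige.sum()

articles = np.array([50, 20, 30], dtype=float)
print((prestige / articles).round(4))  # size-independent: prestige per paper
```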

The h-index. This was proposed by Hirsch in 2005 (8) to rank authors according to their rank-ordered citation distributions, but it was quickly extended to scientific journals (12) as a useful supplement for evaluating their impact (see http://en.wikipedia.org/wiki/H-index). Journals (or researchers) have an h-index of n when they have published n papers, each of which has been cited at least n times to date. The h-index aims to provide an innovative single-number metric combining the effect of quantity (number of publications) and impact (citation rate).

Two different h-indices for journals are reported here. The first (h-index1) is the one produced by SCImago and expresses “the journal’s number of articles (n) that have received at least n citations” (it is a “lifetime” index, calculated over the journal’s whole production since 1996). The second (h-index2) was calculated from WoS data, limiting the timeframe to 5 years (2003–2007) in order to mitigate the possible influence of journal name changes and to ensure comparability across journals with different lifespans.
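The h-index definition translates directly into a few lines of code; the citation counts below are hypothetical.

```python
# Minimal sketch of the h-index definition given above.

def h_index(citations: list[int]) -> int:
    """Largest n such that n papers have been cited at least n times each."""
    h = 0
    for n, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= n:
            h = n
        else:
            break
    return h

# A journal (or author) with these per-paper citation counts has h = 4:
# 4 papers have been cited at least 4 times, but not 5 papers 5 times.
print(h_index([10, 8, 5, 4, 3, 0]))   # 4
```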

RESULTS

Table I reports the correlation matrix of the different indicators (Spearman’s correlation coefficient). Some indicators show a moderate to high degree of correlation.

Table I. Correlation coefficients (Spearman’s rho) between each pair of journal indicators

| Journal indicator | JIF   | 5Y-JIF | EFS   | AIS   | SJR  | h-index1 |
|-------------------|-------|--------|-------|-------|------|----------|
| 5Y-JIF            | 0.71  |        |       |       |      |          |
| EFS               | –0.37 | –0.21  |       |       |      |          |
| AIS               | 0.54  | 0.92   | –0.36 |       |      |          |
| SJR               | 0.45  | 0.87   | 0.01  | 0.85  |      |          |
| h-index1          | –0.05 | –0.16  | 0.76  | –0.34 | 0.09 |          |
| h-index2          | 0.10  | 0.21   | 0.75  | 0.06  | 0.41 | 0.72     |

JIF: JCR Journal Impact Factor; 5Y-JIF: 5-year Journal Impact Factor; EFS: Eigenfactor Score; AIS: Article Influence Score; SJR: SCImago Journal Rank indicator; h-index1: h-index calculated by SCImago; h-index2: h-index calculated by Web of Science with a 5-year window.
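As an illustration of how a matrix such as Table I can be produced, the following sketch computes Spearman’s rho for two hypothetical indicator columns (random stand-ins for 15 journals, not the actual data of Table II).

```python
# Sketch: rank-based (Spearman) correlation between two journal indicators.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
jif_like = rng.random(15)     # hypothetical indicator values for 15 journals
efs_like = rng.random(15)

rho, p = spearmanr(jif_like, efs_like)   # correlates the ranks, not the values
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```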

The values and rankings of the 7 selected journal indicators related to the 15 top journals in the JCR Rehabilitation category are shown in Table II.

Table II. Values of the 7 selected indicators (rank within each column in parentheses) for the 15 top journals in the JCR Rehabilitation category, in alphabetical order (official US National Library of Medicine catalogue abbreviations are used)

| Journal | JIF | 5Y-JIF | EFS | AIS | SJR | h-index1 | h-index2 |
|---|---|---|---|---|---|---|---|
| Am J Phys Med Rehabil | 1.556 (13) | 2.014 (13) | 0.00728 (6) | 0.560 (13) | 0.117 (9) | 44 (7) | 26 (8) |
| Arch Phys Med Rehabil | 2.184 (6) | 2.761 (5) | 0.02677 (1) | 0.784 (7) | 0.155 (5) | 84 (1) | 47 (1) |
| Aust J Physiother | 1.709 (12) | 2.709 (7) | 0.00275 (14) | 0.990 (2) | 0.139 (6) | 26 (14) | 20 (14) |
| Brain Inj | 1.533 (15) | 1.925 (14) | 0.00608 (7) | 0.458 (14) | 0.089 (15) | 47 (4) | 25 (10) |
| Clin Rehabil | 1.767 (11) | 2.546 (9) | 0.00805 (4) | 0.723 (8) | 0.121 (8) | 46 (5) | 31 (4) |
| Disabil Rehabil | 1.555 (14) | 2.056 (12) | 0.01078 (2) | 0.564 (12) | 0.099 (13) | 43 (8) | 33 (3) |
| IEEE T Neural Syst Rehabil Eng | 2.417 (3) | 3.299 (3) | 0.00579 (8) | 0.891 (4) | 0.248 (1) | 51 (3) | 39 (2) |
| J Electromyogr Kinesiol | 1.995 (9) | 2.373 (11) | 0.00540 (9) | 0.654 (10) | 0.102 (11) | 42 (9) | 28 (7) |
| J Head Trauma Rehabil | 2.391 (4) | 3.639 (2) | 0.00420 (12) | 0.926 (3) | 0.229 (2) | 41 (10) | 26 (8) |
| J Neuroeng Rehabil | 2.115 (7) | – | 0.00257 (15) | – | 0.112 (10) | 16 (15) | 11 (15) |
| J Orthop Sports Phys Ther | 2.482 (2) | 2.434 (10) | 0.00499 (10) | 0.625 (11) | 0.094 (14) | 46 (5) | 25 (10) |
| J Rehabil Med | 1.882 (10) | 3.027 (4) | 0.00778 (5) | 0.849 (5) | 0.129 (7) | 39 (11) | 25 (10) |
| Man Ther | 2.319 (5) | 2.686 (8) | 0.00318 (13) | 0.690 (9) | 0.100 (12) | 30 (13) | 23 (13) |
| Neurorehabil Neural Repair | 5.398 (1) | 4.836 (1) | 0.00486 (11) | 1.096 (1) | 0.228 (3) | 34 (12) | 29 (6) |
| Phys Ther | 2.082 (8) | 2.742 (6) | 0.00927 (3) | 0.802 (6) | 0.175 (4) | 63 (2) | 31 (4) |

JCR: Journal Citation Reports; JIF: JCR Journal Impact Factor; 5Y-JIF: 5-year Journal Impact Factor; EFS: Eigenfactor Score; AIS: Article Influence Score; SJR: SCImago Journal Rank indicator; h-index1: h-index calculated by SCImago; h-index2: h-index calculated by Web of Science with a 5-year window; –: not available.

DISCUSSION

In the biomedical field, numerous efforts have been made to refine the mode of information retrieval and augment citation analysis (4, 5).

First, a new research trend is clearly emerging, aimed at developing impact metrics that consider not only the raw number of citations received by a scientific agent, but also the “importance” of the journals that issued them (assigning weights to the bibliographic citations) (6, 7). In practice, the scientific literature is considered as a network of scholarly articles connected by citations. The “importance” of each journal is computed recursively, from the number of citations received from other “important” journals. Such a procedure, used for calculating the EFS, AIS and SJR, belongs to the group of eigenvector centrality methods and is similar to the PageRank algorithm developed by the creators of Google; it has become feasible thanks to powerful computational systems (6, 7).

Secondly, different citation-based metrics are used, some to compare the performance of journals, others to assess how often researchers are cited and thereby rank their scientific productivity. As an example, the h-index was proposed by Hirsch (8) to rank authors according to their rank-ordered citation distributions, and only later was it extended to scientific journals (12). We think that the characteristics of this index make it more suitable for the former aim (ranking authors).

Many drawbacks of the h-index have been discussed in recent years (13), for example: (i) once an article belongs to the h-defining class, it is completely unimportant whether or not it continues to be cited; (ii) the h-index is highly dependent on the length of activity and can only rise; (iii) it is influenced by journal size; and (iv) citation practices vary across subject areas. Overall, the h-index seems to oversimplify the complexity of this field and may lead to misunderstanding (3). With the aim of compensating for these weaknesses (13), a number of variants have been proposed, but at present none has gained currency.

Overall, there is a need to better understand some basic characteristics of these new indicators, whereas a detailed discussion of their technical aspects is beyond the scope of this report. We briefly discuss here the most interesting correlations between journal indicators, and the journal rankings produced by these indicators.

Correlations between indicators

Correlations should be interpreted with special care, particularly when few measures with a restricted range of variability are analysed. The JIF showed a good correlation (> 0.70) with the 5Y-JIF and a fair-to-moderate correlation with the AIS and SJR. In addition, the following good correlations were found: 5Y-JIF with AIS and SJR; EFS with the two h-indices; AIS with SJR; and between the two h-indices. All other correlations were low (< 0.45).

These findings were expected (14, 15), because the JIF, 5Y-JIF, AIS and SJR are measures of average citation impact per paper (and should, all else being equal, be independent of journal size), while the EFS is a measure of total citation impact (which scales with the size of the journal). On the other hand, the high correlation between the EFS and the h-indices is reasonable (16), even if the EFS and the h-index are not interchangeable (17). Proponents of the h-index claim that it reflects both the number of publications and the number of citations per publication. Conversely, the EFS seems to express composite information (about total citation impact) that is complementary to that of the JIF, 5Y-JIF, SJR and AIS (about average citation impact per paper) (18).

For the above reasons, the good correlations of the 5Y-JIF with the SJR and AIS, and between the SJR and AIS, are not surprising. All these indicators divide the citations a journal receives during a specific time period by the number of articles it published in the same period. The larger target window of the 5Y-JIF (5 years) compared with the JIF (2 years) probably explains its higher correlation with the SJR (3 years) and the AIS (5 years).

The JIF is a well-known measure, but it has several drawbacks: for example, the very skewed distribution of article citedness (usually, 15% of a journal’s articles collect 50% of its citations); heavy dependence on the peculiarities and publication practices of different subject areas (field effect), and on how many journals are indexed in the subject; and high correlation with the mean time from submission to publication and with citation patterns (including the speed with which authors begin citing articles) (1). Moreover, the JIF is prone to different kinds of manipulation (2, 19). The 5Y-JIF does not solve the majority of these problems: enlarging the target window could be positive for some disciplines (including PRM), but does not change the essence of the indicator.

Conversely, the main advantages of the SJR over the JIF (and the 5Y-JIF) should lie in its methodology of score estimation: in particular, the weight attributed to citations (depending on the “importance” of the citing journal), but perhaps also the handling of journal self-citations (which are excluded) (10). A study by Bollen et al. (11), using principal component analysis to assess 39 different impact measures, grouped the SJR and the JIF together as measures of “citation normalized per document” and “popularity” (defined as the number of citations, all counted equally, without consideration of their origin) (9), in spite of the fact that the SJR should also explicitly “transfer prestige from a journal to another one” (see www.scimagojr.com/SCImagoJournalRank.pdf). A similar finding was reported by Leydesdorff (3). However, the Scopus database (SJR) includes a substantially larger collection of journals than the WoS (JIF, EFS, etc.) and PubMed, originating from more countries and published in a greater variety of languages (4, 5). In this regard, it could thus be assumed that the SJR provides a more comprehensive estimation of the scientific impact of journals at the worldwide level than the WoS (and the JIF), particularly for journals published in languages other than English. In addition, the SJR is an open-access resource, while the WoS (JIF) requires a paid subscription (10).

Journal ranking

Journal ranking is very important and is closely watched by researchers, editors, publishers, librarians and others. In the world of journal publishing (and in many other contexts), being ranked number 1 or 4 according to a certain index is critical, and dropping even a few rank positions may be perceived by some as the difference between winning a gold medal at the Olympic Games and being overlooked by all (20).

Table II shows that there are substantial ranking differences (more than 5 rank positions) between indicators, and that it is impossible to define the “standing” of a scientific journal using a single indicator. The top ranking is achieved by Neurorehabilitation and Neural Repair according to the JIF, 5Y-JIF and AIS; by Archives of Physical Medicine & Rehabilitation according to the EFS and the two h-indices; and by IEEE Transactions on Neural Systems and Rehabilitation Engineering according to the SJR. In spite of being in the same “Rehabilitation” category of the JCR, the 3 top journals have different missions, areas of interest and audiences, and probably different publication and citation dynamics. Each of these factors can influence 1 or more indicators.

Examining these rankings more closely, the use of the EFS ranking as an indicator of journal “influence” corresponds with the proposals of the Consensus Committee on “International Rehabilitation Journals” of the European Society of Physical and Rehabilitation Medicine (ESPRM). In fact, the Committee suggested 5 of the first 6 journals listed in Table II in order of their EFS ranking as first choices for publication by European PRM researchers (in alphabetical order: American Journal of Physical Medicine & Rehabilitation, Archives of Physical Medicine & Rehabilitation, Clinical Rehabilitation, Disability and Rehabilitation, and Journal of Rehabilitation Medicine) (21). The Committee’s choice clearly acknowledges the top standing of these journals. The sixth journal in the EFS ranking (Physical Therapy) was not taken into account by the Committee, as it is a leading journal of a different (although allied) discipline. An external validation of the Committee’s decisions came from a bibliometric survey (21) showing that the 5 top journals selected occupied the first 5 positions in terms of recent publications by 10 randomly chosen members of the European Academy of PRM, coming from 9 different European countries.

These findings appear to be an indirect confirmation of the importance of those journals and of the ability of the EFS to capture their citation interdependence and global “influence”, as a calibrated mix of size (i.e. how many citations a journal receives, irrespective of who made the citation) and weighted impact (i.e. giving greater importance to citations appearing in highly cited journals).

Summary

In summary, according to the recent literature (9, 22), the JIF, 5Y-JIF and SJR (and to a lesser extent the AIS) should be considered metrics of average citation impact per paper (22). Although further validation is warranted, the SJR seems to represent a serious alternative to the JIF (6, 10), because it not only counts citations but also weights them according to the “importance” of the journals that issued them.

On the other hand, the EFS measures total citation impact, and appears able to reflect satisfactorily the global “influence” of a journal (23). Conversely, it has been demonstrated that the JIF does not correlate with researchers’ perceptions of the relative importance of journals as media for communicating important biomedical research results (24).

From an end-user perspective, many other indicators do not appear to add much, are difficult to understand, and are closely tied to technical choices (e.g. depth and length of coverage of the underlying databases, the different weightings applied by their software, etc.). Their validation and refinement are still in progress. In addition, all bibliometric indicators depend on the citation database used (WoS, Scopus, etc.) (4, 5); their transparency and traceability need to be enhanced; and justification for some of the mathematical procedures used to calculate them (including normalization methods, and statistics based on arithmetic averages of highly skewed distributions) is still missing (25).

A limitation of this report is that we took into account only the first 15 journals in the Rehabilitation category of the JCR – Science Edition 2009 (WoS), ranked by the JIF. These journals (Table II) are all also indexed in Scopus, but only 6 under the category “Rehabilitation” (which contains 90 items), 4 under “Orthopedics and Sports Medicine”, 2 under “Neurology – clinical”, and 3 under “Health Professions – miscellaneous”. This shows that delineating a science field is a complex problem (26, 27): the JCR subject categories lack an analytical base and are not regularly updated, whereas alternative and more sophisticated classification schemes are available nowadays (28, 29). Moreover, there is no universally accepted gold standard in this field against which to calibrate new measures, and these measures were calculated from different citation datasets, so it may be difficult to distinguish the true characteristics of an indicator from the peculiarities of the dataset from which it was calculated.

Conclusion

In conclusion, the concept of scientific impact is multi-dimensional and cannot be adequately captured by a single indicator. For analysing the scientific relevance of journals, a good choice would probably be a combination of just a few indicators, able to reflect both the average impact of the papers and the size component.

In fact, measures of average “impact per paper” (such as the JIF and the SJR) are not positioned at the core of the construct “overall standing of a scientific journal” (11, 22). Other new measures show potentially useful complementarities with them and merit further attention. For example, measures of the “total citation impact” of a journal, weighted according to the importance of the citing journals (such as the EFS, or simply “total cites”), seem to better express the “influence” (sometimes referred to as “prestige”) of a journal (17, 22, 23, 30), and to correspond better with a general understanding of journal status, as captured by field experts.

We have tried to add new insights to the ongoing discussion regarding the suitability of bibliometric indicators, particularly when applied in PRM. Further studies are needed to confirm the generalizability of these findings to other fields, and we hope that this report may lead to further debate on this topic of growing interest.

There are 3 qualifications to this discussion that should be taken into account. First, these bibliometric indicators apply to journals, not to individual papers (or authors). Better ways to analyse the “performance” of a paper exist, including tallying the number of citations the paper itself has received (25, 31) through WoS, Scopus or Google Scholar (32). Secondly, citation data are not the only way to measure quantitatively the value that a journal or a paper offers: for example, direct measures of readership and usage (i.e. derived from usage log data) can also be considered (7). Finally, all these indicators represent ways of ranking each scholarly journal within the moving world of science, but authors should select the journal that best matches the nature and potential readership of their research, considering the mission statement of the journal, the author guidelines, the composition of the editorial board, and the journal’s publishing history, in order to establish that the scientific or clinical areas of interest of the journal reflect the desired target population (33, 34). The ranking of journals (as well as of papers, authors, institutions, etc.) should be done solely with the aim of improving our ability to search and do science, and not for measuring the quality of the output of individuals, research groups or universities. Citation analysis, however sophisticated it may be, cannot be a substitute for critical reading and expert judgement.

Conversely, as West et al. (14) stated, “where ranking systems provide narrow-minded administrators and faculty with an excuse to avoid hard work and deep thought, they may even be harmful to the functioning of academia”.

COMPETING INTERESTS

FF is Associate Editor of the Journal of Rehabilitation Medicine, Senior Editor of the European Journal of Physical and Rehabilitation Medicine (formerly Europa Medicophysica), and a member of the Editorial Board of the International Journal of Rehabilitation Research; Portuguese Journal of Physical and Rehabilitation Medicine; Rehabilitacija; Physikalische Medizin, Rehabilitationsmedizin, Kurortmedizin/Journal of Physical and Rehabilitation Medicine (Stuttgart); and Giornale Italiano di Medicina del Lavoro e Ergonomia. SML has no competing interest to declare.

REFERENCES
