Most of us in academic positions have been promoted mainly on the basis of the number of our publications, particularly those appearing in peer-reviewed journals. A problem with this type of evaluation is that it is crude: it generally measures quantity rather than quality. Additionally, other scholarly output (such as book chapters) is not taken into account. Because of this, some institutions use the citation index from the Institute for Scientific Information (ISI), which is best known for journal Impact Factors.1 Objective measurements offer a potential appraisal of the importance of publications, but the citation index has its own problems (inclusion of self-citations, lack of relationship to quality, etc.). In the past few years, other metrics have been proposed that attempt to quantify the quality of published articles more objectively.
In 2005, John Hirsch, a physicist at the University of California, San Diego, proposed a new measurement (now called the h-index) that is based on a person's most cited articles and the number of citations those articles receive.2 For example, if you have published 20 articles that have each been cited at least 20 times, your h-index is 20. Hirsch suggested that, based on this index, individuals with values over 18 could be promoted to full professor, at least in physics. The h-index has not been used extensively in medicine, but it is gradually becoming part of the data required by some tenure/promotion committees.3 Important criticisms of the h-index include that it concentrates on overall citations and neglects the importance of single contributions, does not consider the context of the citations (whether they appear in highly respected journals or otherwise), does not account for the number of authors in the byline, and is directly limited by an author's total number of publications.4 Like many of these metrics, the h-index is also affected by the accuracy of the citation database used for its calculation.
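For readers who prefer to see the definition operationally, the h-index can be computed from a list of per-article citation counts. This is a minimal sketch; the function name and the sample citation counts are mine, chosen only for illustration:

```python
def h_index(citations):
    """Largest h such that the author has h papers with at least h citations each."""
    counts = sorted(citations, reverse=True)  # most cited paper first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:   # this paper still clears the threshold
            h = rank
        else:
            break           # counts are sorted, so no later paper can
    return h

print(h_index([20] * 20))         # Hirsch's example above: 20 papers cited 20 times -> 20
print(h_index([100, 9, 2, 1]))    # one blockbuster paper does not raise h much -> 2
```

Note how the second example captures one of the criticisms mentioned above: a single highly cited contribution barely moves the index.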
An important aspect of the h-index is that while it may be used to compare individuals within the same discipline, comparisons among disciplines, even closely related ones, are not valid. (For example, it would not be fair to use it to compare the academic output of neuroradiologists with that of neurosurgeons, or even with that of radiologists in other subspecialties.) The h-index is discipline-size dependent: individuals in highly specialized, small fields have lower h-indices. The h-index also increases with academic rank (in neurosurgery, by about 5 points per rank).3 In neurosurgery, the persons with the highest h-indices are the Chairs. This may reflect the fact that the h-index is also related to the amount of time spent in a discipline. (The longer the time, the more citations one's papers will accrue.)
H-indices may be obtained from the ISI Web of Knowledge, Scopus, and Google Scholar databases. A critical factor in using Google Scholar is that in many cases it indexes the names of only the first and last authors. Hence the current recommendation: if you did most of the work, you should be listed first, and if you were the second-largest contributor, you should be listed as the last author! Remember, too, that many publications indexed by Google Scholar use initials rather than first names, so searches may yield articles from a variety of authors who share first initials and last names. Despite these caveats, h-indices obtained from Google Scholar and Scopus correlate highly.3 Google Scholar is probably more inclusive than ISI's Web of Science, and thus it may yield a more complete h-index calculation. (Although this holds for engineering, business, and the social sciences, it is questionable for the health sciences.) However, Google Scholar does not fully capture articles in languages other than English (so-called LOTE articles) or citations in chapters and books, and it may therefore underestimate h-indices.
Although other indices exist, the h-index, at least for now, provides a robust single metric that combines quality and quantity. Attempts to normalize the h-index across disciplines have met with mixed results. The g-index aims to improve on it by giving more weight to highly cited articles.5 The e-index strives to differentiate between scientists who have similar h-indices but different citation patterns.6 For those interested in assessing a sustained level of academic activity, the contemporary h-index gives more weight to recent articles.7 The AW-index (age-weighted) adjusts for the age of each individual paper. (The older you are, the higher your h-index will be.)8 The multiauthored h-index modifies the original metric by taking into account shared authorship of articles. Though all are refinements of the initial h-index, the original metric remains the most widely used.
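To make two of these variants concrete: the g-index is the largest g such that the g most cited papers together have at least g² citations, and the e-index is the square root of the citations accumulated by the h-core papers in excess of h². The sketch below follows those published definitions; the function names and sample data are illustrative assumptions of mine:

```python
import math

def g_index(citations):
    """Largest g such that the top g papers have at least g^2 citations in total."""
    counts = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(counts, start=1):
        total += cites
        if total >= rank * rank:
            g = rank
    return g

def e_index(citations):
    """Square root of the h-core's citations in excess of h^2."""
    counts = sorted(citations, reverse=True)
    # h = number of papers whose citation count meets or exceeds their rank
    h = sum(1 for rank, cites in enumerate(counts, start=1) if cites >= rank)
    return math.sqrt(sum(counts[:h]) - h * h)

cites = [10, 8, 5, 4, 3]
print(g_index(cites))  # the 30 total citations cover 5^2 = 25, so g = 5
print(e_index(cites))  # h = 4, h-core has 27 citations, e = sqrt(27 - 16)
```

The example shows why g rewards concentrated citations: this author has h = 4, but the heavily cited top papers push g to 5, and e measures the same surplus directly.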
I decided to calculate the h-index for the senior neuroradiology editors of the American Journal of Neuroradiology (AJNR) and 2 other major imaging journals (American Journal of Roentgenology [AJR] and Radiology). I did not calculate the h-index for the Editors-in-Chief because our subspecialties and ages vary and, as stated earlier, such comparisons are problematic. (To be fair, I should state that the Editor of Radiology has the highest h-index of the 3 of us.) For these calculations, I used the Harzing Publish or Perish software, which is freely available on the Internet,9 as well as the ISI Web of Knowledge.1 In my search I included all fields related to imaging, using the last names of these individuals followed by their first and middle initials. With the Harzing software, when several individuals with similar names appeared, I manually selected only the desired articles. Although my calculations may contain some errors, I think they provide an adequate overview of the utility of the h-index. Figure 1 shows the h-indices of all Senior Editors obtained with the Harzing method; the person with the highest index works for AJNR, closely followed by individuals at AJR and Radiology. Note that all individuals had scores of 20 or higher, which are considered very good. The ISI Web of Knowledge method showed similar trends, though most scores were somewhat lower (Fig 2). Averaging the h-indices of the Senior Editors per journal by the Harzing and ISI methods yields the following: AJNR, 34 and 32; AJR, 30 and 24.5; and Radiology, 32.5 and 28: again very similar among journals but slightly lower with the second method. Scores tend to be higher with the Harzing method because it uses Google Scholar data, which are more inclusive than those in the ISI database. Because the h-index tends to be higher for older individuals, it was not surprising that the highest scores belonged to the more senior editors.
H-indices of individual Senior Editors by Journal by using Publish or Perish. Blue = AJNR, Gray = Radiology, Yellow = AJR
H-indices of individual Senior Editors by Journal by using Web of Knowledge. Blue = AJNR, Gray = Radiology, Yellow = AJR
Needless to say, I was gratified to find out how well AJNR did when compared with such respected journals as AJR and Radiology. Our contributors and readers can rest assured that AJNR's contents are being handled by the most qualified neuroradiologists.
Acknowledgments
I thank Dr. H. Y. Kressel, who pointed out some mistakes in my initial calculations that led to the rewriting of this commentary.
References
Copyright © American Society of Neuroradiology