h-index

Source: Wikipedia, the free encyclopedia.

The h-index is an author-level metric that measures both the productivity and citation impact of the publications of a scientist or scholar. It can also be applied to the productivity and impact of a scholarly journal[2] as well as a group of scientists, such as a department or university or country.[3] The index was suggested in 2005 by Jorge E. Hirsch, a physicist at UC San Diego, as a tool for determining theoretical physicists' relative quality,[4] and is sometimes called the Hirsch index or Hirsch number.

Definition and purpose

h-index from a plot of numbers of citations for an author's numbered papers (arranged in decreasing order)

The h-index is defined as the maximum value of h such that the given author/journal has published at least h papers that have each been cited at least h times.[4][5] The index is designed to improve upon simpler measures such as the total number of citations or publications. The index works best when comparing scholars working in the same field, since citation conventions differ widely among different fields.[6]

Calculation

The h-index is the largest number h such that h articles have at least h citations each. For example, if an author has five publications, with 9, 7, 6, 2, and 1 citations (ordered from greatest to least), then the author's h-index is 3, because the author has three publications with 3 or more citations. However, the author does not have four publications with 4 or more citations.

Clearly, an author's h-index can only be as great as their number of publications. For example, an author with only one publication can have a maximum h-index of 1 (if their publication has 1 or more citations). On the other hand, an author with many publications, each with only 1 citation, would also have an h-index of 1.

Formally, if f is the function that corresponds to the number of citations for each publication, we compute the h-index as follows: First we order the values of f from the largest to the lowest value. Then, we look for the last position in which f is greater than or equal to the position (we call h this position). For example, if we have a researcher with 5 publications A, B, C, D, and E with 10, 8, 5, 4, and 3 citations, respectively, the h-index is equal to 4 because the 4th publication has 4 citations and the 5th has only 3. In contrast, if the same publications have 25, 8, 5, 3, and 3 citations, then the index is 3 (i.e. the 3rd position) because the fourth paper has only 3 citations.

f(A)=10, f(B)=8, f(C)=5, f(D)=4, f(E)=3 → h-index=4
f(A)=25, f(B)=8, f(C)=5, f(D)=3, f(E)=3 → h-index=3

If we have the function f ordered in decreasing order from the largest value to the lowest one, we can compute the h-index as follows:

h-index(f) = max { i ∈ ℕ : f(i) ≥ i }
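As an illustrative sketch (not part of the original article), the definition above translates directly into a short Python function; the citation lists below are the examples used earlier in this section:

```python
def h_index(citations):
    """Compute the h-index of a list of per-paper citation counts."""
    # Order the values of f from the largest to the lowest value.
    f = sorted(citations, reverse=True)
    h = 0
    # h is the last (largest) 1-based position i with f(i) >= i.
    for i, c in enumerate(f, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([25, 8, 5, 3, 3]))  # 3
print(h_index([9, 7, 6, 2, 1]))   # 3
```

Note that the input need not be pre-sorted; the function sorts it, mirroring the first step of the formal procedure.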

The Hirsch index is analogous to the Ky Fan metric.[8] The h-index serves as an alternative to more traditional journal impact factor metrics for evaluating the impact of a particular researcher's work. Because only the most highly cited articles contribute to the h-index, its determination is a simpler process. Hirsch demonstrated that h has high predictive value for whether a scientist has won honors such as National Academy membership or the Nobel Prize. The h-index grows as citations accumulate, and thus it depends on the "academic age" of a researcher.

Input data

The h-index can be determined manually using citation databases, or automatically with dedicated tools. Subscription-based databases such as Scopus and the Web of Science provide automated calculators, and discipline-specific databases also exist, such as INSPIRE-HEP for high energy physics.

Each database is likely to produce a different h for the same scholar, because of different coverage.

Databases may also differ in how they handle Boolean logic when combining search terms.[14] For example, the Meho and Yang study found that Google Scholar identified 53% more citations than Web of Science and Scopus combined, but noted that because most of the additional citations reported by Google Scholar were from low-impact journals or conference proceedings, they did not significantly alter the relative ranking of the individuals. It has been suggested that, in order to deal with the sometimes wide variation in h for a single academic measured across the possible citation databases, one should assume false negatives in the databases are more problematic than false positives and take the maximum h measured for an academic.[15]

Examples

Little systematic investigation has been done on how the h-index behaves over different institutions, nations, times and academic fields.

Hirsch suggested that, for physicists, a value for h of about 12 might be typical for advancement to tenure (associate professor) at major research universities, a value of about 18 for a full professorship, 15–20 for a fellowship in the American Physical Society, and 45 or higher for membership in the United States National Academy of Sciences.[17] Hirsch estimated that after 20 years a "successful scientist" would have an h-index of 20, an "outstanding scientist" an h-index of 40, and a "truly unique" individual an h-index of 60.[4]

For the most highly cited scientists in the period 1983–2002, Hirsch identified the top 10 in the life sciences (in order of decreasing h):

Solomon H. Snyder, h = 191; David Baltimore, h = 160; Robert C. Gallo, h = 154; Pierre Chambon, h = 153; Bert Vogelstein, h = 151; Salvador Moncada, h = 143; Charles A. Dinarello, h = 138; Tadamitsu Kishimoto, h = 134; Ronald M. Evans, h = 127; and Ralph L. Brinster, h = 126. Among 36 new inductees in the National Academy of Sciences in biological and biomedical sciences in 2005, the median h-index was 57.[4] However, Hirsch noted that values of h will vary among disparate fields.[4]

Among the 22 scientific disciplines listed in the Essential Science Indicators citation thresholds, physics has the second most citations after space science.[18]

Numbers are very different in social science disciplines: the Impact of the Social Sciences team at the London School of Economics found that social scientists in the United Kingdom had lower average h-indices. The h-indices for ("full") professors, based on Google Scholar data, ranged from 2.8 in law, through 3.4 in political science, 3.7 in sociology, and 6.5 in geography, to 7.6 in economics. On average across the disciplines, a professor in the social sciences had an h-index about twice that of a lecturer or a senior lecturer, though the difference was smallest in geography.[19]

Advantages

Hirsch intended the h-index to address the main disadvantages of other bibliometric indicators. The total number of papers metric does not account for the quality of scientific publications. The total number of citations metric, on the other hand, can be heavily affected by participation in a single publication of major influence (for instance, methodological papers proposing successful new techniques, methods or approximations, which can generate a large number of citations). The h-index is intended to measure simultaneously the quality and quantity of scientific output. Until 2010 the h-index showed a Kendall's correlation of 0.3 to 0.4 with scientific awards.[20]

Criticism

There are a number of situations in which h may provide misleading information about a scientist's output.[21] The correlation between the h-index and scientific awards has dropped significantly since 2010, after the widespread adoption of the h-index,[20] following Goodhart's law. The decrease in correlation is partially attributed to the spread of hyperauthorship, with more than 100 coauthors per paper.

Some of the following failures are not exclusive to the h-index but rather shared with other author-level metrics:

  • The h-index does not account for the number of authors of a paper. In the original paper, Hirsch suggested partitioning citations among co-authors. One such fractional index, known as h-frac, accounts for multiple authors but is not widely available in automatic tools.[20]
  • The h-index does not account for the different typical number of citations in different fields, e.g. experimental over theoretical. Citation behavior in general is affected by field-dependent factors,[22] which may invalidate comparisons not only across disciplines but even within different fields of research of one discipline.[23]
  • The h-index discards the information contained in author placement in the authors' list, which in some scientific fields is significant though in others it is not.[24][25]
  • The h-index is an integer, which reduces its discriminatory power. Ruane and Tol therefore propose a rational h-index that interpolates between h and h + 1.[26]

Prone to manipulation

These weaknesses apply to any purely quantitative measure of scientific or academic output. Like other metrics that count citations, the h-index can be manipulated by coercive citation, a practice in which a journal editor forces authors to add spurious citations to their own articles before the journal agrees to publish them.[27][28] The h-index can be manipulated through self-citations,[29][30][31] and if it is based on Google Scholar output, even computer-generated documents can be used for that purpose, e.g. using SCIgen.[32] The h-index can also be manipulated by hyperauthorship. Recent research shows clearly that the correlation of the h-index with awards that indicate recognition by the scientific community has substantially declined.[33]

Other shortcomings

The h-index has been found in one study to have slightly less predictive accuracy and precision than the simpler measure of mean citations per paper.[34] However, this finding was contradicted by another study by Hirsch.[35] The h-index does not provide a significantly more accurate measure of impact than the total number of citations for a given scholar. In particular, by modeling the distribution of citations among papers as a random integer partition and the h-index as the Durfee square of the partition, Yong[36] arrived at the formula h ≈ 0.54√N, where N is the total number of citations; for mathematics members of the National Academy of Sciences, this turns out to provide an accurate approximation of the h-index in most cases (with errors typically within 10–20 percent).
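As an illustrative sketch, Yong's approximation is a one-line computation; the citation total of 2,500 in the example is hypothetical:

```python
import math

def yong_estimate(total_citations):
    """Yong's approximation: h is roughly 0.54 * sqrt(N),
    where N is the researcher's total number of citations."""
    return 0.54 * math.sqrt(total_citations)

# A hypothetical researcher with 2,500 total citations
# would be estimated at h of about 27.
print(round(yong_estimate(2500)))  # 27
```

The constant 0.54 comes from Yong's Durfee-square model of the citation distribution; the estimate is a statistical approximation, not an exact bound.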

Alternatives and modifications

Various proposals to modify the h-index in order to emphasize different features have been made.[37][38][39][40][41][42][20] Many of these variants, such as the g-index, are highly correlated with the original h-index and hence redundant.[43] One metric that currently is not highly correlated with the h-index, and is correlated with scientific awards, is h-frac.[20]

Applications

Indices similar to the h-index have been applied outside of author or journal evaluation.

The h-index has been applied to Internet media, such as YouTube channels. It is defined as the number of videos with at least h × 10⁵ views. When compared with a video creator's total view count, the h-index and g-index better capture both productivity and impact in a single metric.[44]

A successive Hirsch-type index for institutions has also been devised.[45][46] A scientific institution has a successive Hirsch-type index of i when at least i researchers from that institution have an h-index of at least i.
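Since the successive Hirsch-type index is simply the h-index computed over the researchers' own h-indices, it can be sketched in a few lines of Python (the department and its h-index values are hypothetical):

```python
def successive_index(h_indices):
    """Institutional successive Hirsch-type index: the largest i such
    that at least i researchers each have an h-index of at least i."""
    hs = sorted(h_indices, reverse=True)
    i = 0
    for pos, h in enumerate(hs, start=1):
        if h >= pos:
            i = pos
        else:
            break
    return i

# Hypothetical department of six researchers:
# four of them have an h-index of at least 4, but not five of at least 5.
print(successive_index([12, 9, 7, 5, 3, 2]))  # 4
```

The same routine, applied recursively, yields Hirsch-type indices at any level of aggregation (research group, institution, country).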

See also

References

  1. S2CID 31323195.
  2. ^ Suzuki, Helder (2012). "Google Scholar Metrics for Publications". googlescholar.blogspot.com.br.
  3. PMID 21839937.
  4. ^ Hirsch, J. E. (2005). "An index to quantify an individual's scientific research output". Proceedings of the National Academy of Sciences. 102 (46): 16569–16572.
  5. PhysOrg. Retrieved 13 May 2010.
  6. ^ "Impact of Social Sciences – 3: Key Measures of Academic Influence". LSE Impact of Social Sciences Blog (Section 3.2). London School of Economics. 19 November 2010. Retrieved 19 April 2020.
  7. Retrieved 2022-09-17.
  8. .
  9. ^ Google Scholar Citations Help, retrieved 2012-09-18.
  10. S2CID 29641074.
  11. .
  12. . (preprint of paper published as 'Impact of data sources on citation counts and rankings of LIS faculty: Web of Science versus Scopus and Google Scholar', in Journal of the American Society for Information Science and Technology, Vol. 58, No. 13, 2007, 2105–25)
  13. .
  14. .
  15. .
  16. .
  17. ^ Peterson, Ivars (December 2, 2005). "Rating Researchers". Science News. Retrieved 13 May 2010.
  18. ^ a b c d "Citation Thresholds (Essential Science Indicators)". Science Watch. Thomson Reuters. May 1, 2010. Archived from the original on 5 May 2010. Retrieved 13 May 2010.
  19. ^ "Impact of Social Sciences – 3: Key Measures of Academic Influence". Impact of Social Sciences, LSE.ac.uk. 19 November 2010. Retrieved 14 November 2020.
  20. ^ PMID 34181681. "Our results suggest that the use of the h-index in ranking scientists should be reconsidered, and that fractional allocation measures such as h-frac provide more robust alternatives." Companion webpage.
  21. .
  22. .
  23. .
  24. .
  25. .
  26. .
  27. .
  28. .
  29. .
  30. .
  31. .
  32. ^ Labbé, Cyril (2010). Ike Antkare one of the great stars in the scientific firmament (PDF). Laboratoire d'Informatique de Grenoble RR-LIG-2008 (technical report) (Report). Joseph Fourier University.
  33. PMID 34181681.
  34. .
  35. .
  36. .
  37. .
  38. .
  39. .
  40. ^ Katsaros, D.; Sidiropoulos, A.; Manolopoulos, Y. (2007). "Age-Decaying H-Index for Social Networks of Citations". Proceedings of the Workshop on Social Aspects of the Web, Poznań, Poland, April 27, 2007.
  41. ISSN 0138-9130.
  42. .
  43. .
  44. .
  45. ^ Kosmulski, M. (2006). "I – a bibliometric index". Forum Akademickie. 11: 31.
  46. ^ Prathap, G. (2006). "Hirsch-type indices for ranking institutions' scientific research output". Current Science. 91 (11): 1439.

Further reading

External links