SYMPOSIUM: RESEARCH AND ACADEMIA
Year : 2016  |  Volume : 2  |  Issue : 2  |  Page : 187-202

Competing for impact and prestige: Deciphering the “alphabet soup” of academic publications and faculty productivity metrics


1 Department of Research and Innovation, St. Luke's University Health Network, Bethlehem, Pennsylvania, USA
2 Heart and Vascular Center, St. Luke's University Health Network, Bethlehem, Pennsylvania, USA
3 Temple University School of Medicine, St. Luke's University Hospital Campus, Bethlehem, Pennsylvania, USA
4 W. L. Estes Memorial Library, St. Luke's University Health Network, Bethlehem, Pennsylvania, USA
5 Department of Anesthesiology, University of Toledo, Toledo, Ohio, USA

Date of Submission: 02-Jun-2016
Date of Acceptance: 25-Jun-2016
Date of Web Publication: 28-Dec-2016

Correspondence Address:
Stanislaw P Stawicki
Department of Research and Innovation, St. Luke's University Health Network, EW2 – Research Administration, 801 Ostrum Street, Bethlehem, Pennsylvania 18015
USA

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/2455-5568.196875

  Abstract 


Accurate quantification of scholarly productivity continues to pose a significant challenge to academic medical institutions seeking to standardize faculty performance metrics. Numerous approaches have been described in this domain, from subjective measures employed in the past to rapidly evolving objective assessments of today. Metrics based on publication characteristics include a variety of easily categorized, normalized, referenced, and quantifiable data points. In general, such measures can be broadly grouped as being author-, manuscript-, and publication/journal-specific. Commonly employed units of measurement are derived from the number of publications and/or citations, in various combinations and derivations. In aggregate, these metrics are utilized to more objectively assess academic productivity, mainly for the purpose of determining faculty promotion and tenure potential; evaluating grant application/renewal competitiveness; journal/publication, and institutional benchmarking; faculty recruitment, retention, and placement; as well as various departmental and institutional performance assessments. This article provides an overview of different measures of academic productivity and scientific impact, focusing on bibliometric data utilization, including advantages and disadvantages of each respective methodological approach.
The following core competencies are addressed in this article: Interpersonal skills and communication, practice-based learning and improvement, systems-based practice.

Keywords: Academic productivity metrics, bibliometric indices, impact factor, promotion and tenure


How to cite this article:
Ranjan A, Kumar R, Sinha A, Nanda S, Dave KA, Collette MD, Papadimos TJ, Stawicki SP. Competing for impact and prestige: Deciphering the “alphabet soup” of academic publications and faculty productivity metrics. Int J Acad Med 2016;2:187-202

How to cite this URL:
Ranjan A, Kumar R, Sinha A, Nanda S, Dave KA, Collette MD, Papadimos TJ, Stawicki SP. Competing for impact and prestige: Deciphering the “alphabet soup” of academic publications and faculty productivity metrics. Int J Acad Med [serial online] 2016 [cited 2022 Nov 29];2:187-202. Available from: https://www.ijam-web.org/text.asp?2016/2/2/187/196875




Introduction


The ability to quantify research productivity is becoming increasingly important in the highly competitive and ever more complex environment of modern academic medicine.[1],[2],[3],[4] Although the number of categories and items considered to be reflective of academic productivity continues to expand [Table 1], two general types of academic work can be considered to collectively constitute the “gold standard” for performance evaluation, promotion, and tenure – extramural funding and publications.[2],[3],[4],[5],[6] Due to the rapidly changing landscape of bibliometric analysis, significant confusion persists regarding the “alphabet soup” of competing (and often overlapping) indices and assessment tools.[7],[8],[9]
Table 1: List of selected article-, journal-, and author-specific metrics mentioned throughout the manuscript. Many of these metrics form the basis for promotion and tenure determinations. Nontraditional metrics (e.g., teaching, social media, etc.) are not shown



Broadly speaking, the term “bibliometrics” refers to the use of quantitative approaches to measure and track publication data using various document-, author-, or source-level (e.g., journal-level) elements. Within this collected body of data, one is then able to define specific characteristics, patterns, and relationships that help demonstrate an investigator's or research team's productivity, contribution quality, and/or scientific impact.[10] Publication metrics can be used for a variety of purposes, including faculty tenure and promotion determinations,[11] grant applications and renewals,[12],[13] productivity benchmarking,[14] talent recruitment and retention efforts,[15] as well as different administrative purposes (e.g., departmental or institutional performance reports).[16],[17] Despite their widespread use, there is significant confusion about many of these metrics, especially when one considers the gravity of some of the strategic decisions (e.g., talent management and resource allocation) made based on quantitative bibliometric data.[18],[19]

The overarching goal of this manuscript is to provide a high-level overview of established and emerging bibliometric indices focusing on their principal uses, advantages, disadvantages, and alternatives. Familiarity with existing publication metrics is essential to all stakeholders in academic medicine, especially when measuring and reporting scholarly productivity and scientific impact of academic clinicians, departments, institutions, or research groups. Thorough understanding of key bibliometric indices will be increasingly important for individual faculty members, department leaders, review committees, funding agencies, and journal editors due to greater awareness of the relationship between academic productivity, scientific impact, and the subsequent diffusion/synthesis of knowledge into clinical applications.


Citation Indexing Services


This section will outline similarities and differences between major providers of bibliographic citation indices, with subsequent discussion mentioning some of the less prominent actors in this domain. Since none of the citation databases are truly “all inclusive,” one should utilize multiple databases to achieve optimal results. In general, scientific citation indices can be a powerful source of information about author/publication impact and help establish the foundation of more advanced analyses, such as author mapping, expertise clustering, and focused impact of specific work.[20] Due to the multitude and complexity of existing bibliometric indices, we will avoid using abbreviations or acronyms, except when referring to major citation/indexing services that are commonly known by such unique and specific acronyms.

The Institute for Scientific Information

Traditionally, citation indexing has been dominated by the Institute for Scientific Information (ISI), which is now part of the Thomson Reuters media conglomerate.[21] The ISI publishes its citation indices in various media formats, most commonly available on the Internet under the name “Web of Science.”[22] Web of Science is a subscription-based service that provides access to the following databases: Science Citation Index, Social Sciences Citation Index, Arts and Humanities Citation Index, Index Chemicus, Current Chemical Reactions; Conference Proceedings Citation Index: Science; as well as Conference Proceedings Citation Index: Social Science and Humanities.[23] Web of Science features both complex and focused search options, the ability to filter and refine queries, and the option to analyze the results.

Scopus

This subscription-based indexing service, available online, is operated by the global academic publishing house, Elsevier.[24] Scopus is one of the largest abstract and citation databases of peer-reviewed literature and web resources. It also includes a variety of “smart tools” to track, analyze, and visualize research content and search results.[24],[25]

CiteSeer

This service can be considered to be both a “citation engine” and a “digital library.” CiteSeer is based on the SmealSearch (now BizSeer) engine [26] and provides citation data, citation graph analysis, and document retrieval capabilities. A similar discipline-specific resource, Research Papers in Economics,[27] maintains databases in economics and related fields.

Google scholar

This increasingly influential service in the bibliometrics space [28] provides citation and search capabilities for scholarly literature across virtually all indexed disciplines and sources. It is based on a freely accessible web search engine that contains an ever-accreting number of citations, full texts, nontraditional sources (e.g., government documents, nonindexed journals, books, dissertations, end-user content), and other knowledge repositories (e.g., open archives initiative resources). Google Scholar is known for its speed and ability to scan through actual manuscript content in near real-time. This increases the relevance of search results and provides the user with context-specific output. In addition to basic search features, registered users can also set up individual profiles so that continuous tracking of author-specific metrics (e.g., h-index, i10 index, citations per manuscript, as well as citations per year) becomes available. Such profiles can then be made public and shared across existing research “virtual networks” (see subsequent sections of the manuscript). Finally, special add-ons exist for various web browsers that help facilitate customized search capabilities, including real-time determinations of various citation-based indices (e.g., h-index, e-index, g-index, etc.).[29],[30]

EBSCO host

This platform is one of the most widely used premium reference database services.[31],[32] Of note, EBSCO host is an agglomeration of various repositories, only a few of which offer formal citation analysis. Compendex (Engineering Index, COMPuterized ENgineering inDEX) is one of the most comprehensive engineering literature databases.[33],[34],[35]


Publication-Based Metrics of Academic Productivity


After outlining some of the fundamental tools, platforms, and principles of bibliometrics and indexing, we will turn our attention to the discussion of specific publication-based academic productivity measures. In aggregate, these metrics are largely based on outcome variables derived from data provided by indexing services outlined in previous sections. Subsequent discussion will begin with well-established, simpler measures and will gradually progress toward more advanced and complex topics in this area.

Number of research papers

A very simple and easily quantifiable measure of research productivity (and impact) can be the number of research papers published by an author.[36] In general, the more prolific the author, the greater their scientific impact. However, this approach has a number of important limitations. Academic clinicians may become tempted to bolster their apparent research output by resorting to double or redundant publication, questionably meritorious submissions, self-plagiarism, and reporting based on the so-called “minimal publishable unit.”[37],[38] Some authors are also willing to sacrifice “publication quality” for “publication quantity.” This, in turn, may lead to lower than expected impact. Collectively, the above phenomena increase the complexity of the peer review process while reducing the per-manuscript “information yield” for busy bedside clinicians who are often overwhelmed with the totality of available information. Consequently, a simple tally of the number of publications authored or coauthored by a single academic clinician is arguably a poor method to assess true research productivity and/or impact. One additional metric related to peer-reviewed journal articles is the number of original research articles versus review articles (or other publication types – e.g., case reports, communications, editorials, and letters).[39] Original manuscripts represent the primary sources of knowledge that are based on research whereas review articles (and other “derivative” publications) serve as secondary sources or “processed” knowledge on a specific subject.

Authorship order or author “status”

It is an accepted practice in academia that individuals listed first or last on the author by-line are recognized as having contributed the bulk of the work toward project completion and manuscript publication.[40],[41] This measure may be most relevant when assessing manuscripts with a limited number of authors, mainly because authorship effort attribution is relatively straightforward in such cases. However, this method is less likely to be a true expression of academic productivity when evaluating multicenter or multiauthor publications. In such cases, it is optimal to utilize estimates of per-author percentage effort to determine the actual level of contribution.[42] Along the same lines, some institutions require that authors state their estimated “percent effort” pertaining to each publication submitted during promotion and tenure considerations. Among other metrics within this general category of productivity assessments is the percentage of manuscripts accepted (e.g., the relationship between the total number of manuscript submissions and acceptances for a specific author).[43] Although primarily reflective of the quality of work submitted, the latter measure may also be skewed by factors such as the author's reputation and the overall impact of journals to which the work was submitted.

Publication sources

For a broader assessment of an author's impact, one can look at the number of peer-reviewed journals in which an academic clinician has published. A documented record of contributions to journals of various specialty areas (and of varying impact) is indicative of thematic diversity, collaborative efforts, and overall depth of scientific expertise. Such diversification can be used to create a compelling narrative of interdisciplinary or translational research efforts. Conversely, an author who publishes exclusively or nearly exclusively in a small number of subspecialty journals may be seen to have created the “narrow and deep niche” that academia and traditional funding sources typically covet.[44]

Total and per-article citations

An author's total number of citations is a general, but by no means absolute, guide to their academic productivity.[45] When looking at this particular metric, one must remember that both scientific impact and the number of career citations may vary significantly across disciplines, highlighting the need for data normalization. However, it is reasonable to say that the number of citations attributed to a particular article can be a proxy for the work's overall quality.[46],[47] Some of the most widely used bibliometric indices such as impact factor and the h-index have been built around measuring the number of citations.[11],[47],[48] However, concerns have been raised about the validity and utility of such citation-based systems. First, older publications have more time to accrue citations than newer manuscripts, resulting in potential for bias if this is not normalized or otherwise corrected. Second, early reports of scientific findings, which at the time of initial publication may be at odds with the broadly held beliefs or expectations of the scientific community, are often not cited until some years have passed. This phenomenon is known as the “Mendel effect.”[49] Third, manuscript impact can be subject to some degree of manipulation by deliberate self-citation by the primary author or reciprocal citations by colleagues (as opposed to true, unbiased scientific impact).[50],[51]


Publication Metrics Based on Impact


Journal impact factor score

An important and predictive measure of research impact is the journal impact factor (JIF) of the publication in which the manuscript appears.[52],[53] The JIF is computed for a specific journal/publication by ascertaining the average number of citations per article per year. As such, the JIF can be used as an indication of the relative influence of a journal within its field, where journals with higher impact factors are deemed to be more influential or prestigious than those with lower impact factors. Simple JIF calculations are generally compiled on an annual basis.[54],[55] However, different iterations of the impact factor exist, with different time horizons involved (e.g., 3-year, 5-year, etc.).[56]

The JIF or Journal Citation Reports (JCR) score is derived by dividing all journal-specific citations in the JCR during a given year by the total number of articles published by that journal in the two previous years. For example, a JCR impact factor score of 2.0 means that, on average, articles published in a specific journal 1 or 2 years ago have been cited twice.[57] One major flaw in this paradigm is that the JCR impact factor score does not provide significant insight regarding a specific manuscript or its author(s).[58] Rather, it is a unit of analysis based on the journal as a whole and only a secondary reflection of author-based performance or impact (e.g., one could speculate that a higher impact factor journal is generally associated with higher quality of both “authorship and science”). In an era of both instantaneous and free access to scientific information, the true value of “impact” is determined by researchers within the “free and open market” of research ideas. Of note, the impact factor is also limited to journals indexed by the Thomson Reuters Web of Science database (or approximately 15% of all existing journal titles). Finally, the JCR impact factor score encompasses citations only from the previous 2 years whereas the full impact of an individual publication is often measured over decades,[59] with some articles only “noticed” by the scientific community after specific conditions for the emergence of such interest materialize.
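To make the arithmetic concrete, the short sketch below (written in Python purely for illustration, with entirely hypothetical article and citation counts) reproduces the 2-year JCR-style calculation described above:

# Illustrative 2-year impact factor calculation using hypothetical data.
# Census year is 2016; only items published in 2014 and 2015 are counted.
articles_published = {2014: 120, 2015: 135}    # citable items per year (hypothetical)
citations_in_2016 = {2014: 310, 2015: 200}     # 2016 citations to those items (hypothetical)

impact_factor_2016 = sum(citations_in_2016.values()) / sum(articles_published.values())
print(round(impact_factor_2016, 2))            # 510 / 255 = 2.0 citations per article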

Another limitation to this methodology is that both journals and authors can manipulate the JCR impact factor by intentional self-citations and by encouraging peer reviewers to suggest that authors consider additional source citations from that same journal during a specified time frame. In addition, a journal can adopt editorial policies to increase its impact factor without necessarily increasing the quality of the science it publishes. For example, certain journals may publish a larger percentage of review articles, which generally are cited more often than original research reports or case reports. Thus, review articles can raise the impact factor of the journal, and “review journals” will, therefore, tend to have relatively higher impact factors in their respective fields. Finally, some journal editors set their submissions policy to “by invitation only,” with a strong preference toward senior scientists publishing “high impact” or “likely to be cited” papers that increase the JIF.[60],[61],[62]

Indices related to impact factor

The immediacy index measures the average number of times an article, published in a particular journal during a specific year, is cited over the course of the same year.[63] Cited half-life measures the number of years, going back from the current year, that account for half of the total citations received by the cited journal in the current year.[64] For example, if a journal's cited half-life in 2005 is 5, it means that citations to articles published from 2001 to 2005 account for half of all the citations received by that journal in 2005, and the other half of the measured citations are to articles published before 2001. The aggregate impact factor for a subject category is calculated by taking into account the number of citations to all journals in the subject category and the number of articles from all the journals in the category.[65] This particular measure is important to consider when differences between scientific specialties and subspecialties need to be factored into promotion and tenure deliberations (e.g., when one discipline tends to have higher/lower aggregate impact than other disciplines). Key concepts related to the immediacy index, the impact factor, and cited half-life are demonstrated graphically in [Figure 1].
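As a brief illustration of the cited half-life concept, the sketch below (Python, hypothetical per-year citation counts, census year 2005) walks backward from the census year until half of the journal's citations have been accounted for:

# Cited half-life sketch: count back from the census year until the cumulative
# citation total reaches half of all citations received by the journal that year.
citations_by_publication_year = {              # hypothetical data
    2005: 40, 2004: 55, 2003: 60, 2002: 50, 2001: 45,
    2000: 35, 1999: 30, 1998: 25, 1997: 20, 1996: 40,
}

half = sum(citations_by_publication_year.values()) / 2
running, cited_half_life = 0, 0
for year in sorted(citations_by_publication_year, reverse=True):
    running += citations_by_publication_year[year]
    cited_half_life += 1
    if running >= half:
        break
print(cited_half_life)                         # 4 years for this hypothetical journal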
Figure 1: Graphical representation of the concepts of (a) immediacy index, impact factor, and (b) cited half-life. In this example, the journal has a cited half-life of 8 years



The h-index

The h-index, sometimes called the Hirsch index or “Hirsch number,” was first developed by Hirsch [47] as a method to quantify the impact and quality of the published work of a particular scientist or scholar. In this paradigm, a scientist has an h-index of “h” if “h” of his/her “n” papers have at least “h” citations each, and the other (“n – h”) papers have not more than “h” citations each [Figure 2]. In other words, an author with an index of “h” has published “h” manuscripts, each of which has been cited in other papers at least “h” times. In a practical example, if an author's h-index is 15, the academician has 15 papers that were cited 15 times or more. If the h-index is 20, one must have at least 20 papers, each cited 20 times or more.
Figure 2: Graphical representation of the “h-index,” (a) the graph on the left shows the academic record for an author with only one publication and one associated citation, (b) the graph on the right shows the academic record for an author with at least three publications, each of which having been cited at least 3 times



Mathematical formula for the h-index is as follows:

h-index = max { j : c_j ≥ j }
where c_j (j = 1, 2, …, n) denotes the citation count of the jth publication in a list ranked in nonincreasing order of citations; both c_j and h are natural numbers. Publications cited at least “h” times are said to be in the “h-core.” Thus, the h-index is a single-number indicator for evaluating the scientific achievement of a given researcher.[66],[67] It “ignores” the long tails of the publication (quantity) and citation (quality) distributions and focuses on where the numbers of papers and citations intersect, which reflects the “middle part” concept of Zipf's law.[68] It assesses a scientist's performance using a blended approach that measures the quantity and quality of his/her papers together.
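In practice, the h-index is straightforward to compute from a list of per-publication citation counts. The following minimal sketch (Python, hypothetical citation record) illustrates the calculation:

def h_index(citations):
    """Return the h-index for a list of per-publication citation counts."""
    ranked = sorted(citations, reverse=True)   # nonincreasing order of citations
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:                      # the publication at this rank is in the h-core
            h = rank
        else:
            break
    return h

print(h_index([20, 12, 9, 7, 5, 3, 2, 1, 1, 0]))   # prints 5 for this hypothetical record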

The h-index integrates the evaluation of productivity (the number of a scientist's total publications) and impact (the influence of the papers on the scientist's peers) in a single, easy-to-compute indicator. Given the increased citation data availability and accessibility, information needed to calculate the h-index is becoming progressively easier to obtain. The index itself is relatively insensitive to both infrequently and highly cited papers, which may somewhat distort the assessment of the overall author productivity and impact relative to other approaches discussed in this manuscript. The h-index is notably free from influences of document types when compared to counting “total publications” or “total citations.”[45],[69],[70],[71],[72] Among its disadvantages, the h-index may not be an appropriate indicator for comparing performance across different disciplines (but it may be useful in standardizing expected academic performance by discipline). This is because disciplines that are “in demand” (e.g., hematology-oncology) tend to generate both more publications and citations than disciplines that are more “insular” in character (e.g., pediatric metabolic disorders). The h-index may also under- or over-estimate a researcher's achievement in terms of coauthorship because scientists with potentially varied levels of achievement may have the same h-index value. Consequently, relying on the h-index without a broader context is not recommended when determining academic promotion and tenure. Finally, using data obtained directly from Web of Science in isolation might present another problem when calculating the h-index, considering that it essentially “misses” about 80–85% of publications that might potentially be citing a particular source.[45],[69],[73],[74],[75] This particular limitation has been largely remedied by Google Scholar (and other similar open or free access knowledge repositories), with a global search capability enabling near real-time citation data reporting that is inclusive of the entire searchable Internet “publication universe.”

To overcome some of the disadvantages inherent to the h-index, modifications and adjuncts such as the g-index and e-index have been proposed.[76] It has been previously pointed out that the h-index is “insensitive” to the “tail” of papers with citations that do not reach the “h” value, yet may cumulatively account for a significant proportion of the author's scientific impact. Consequently, Egghe modified the index by replacing the idea of calculating the number of citations received by each article with the concept of calculating the total accumulated citations of the top “g” articles in the so-called g-index.[77],[78] The e-index, another “supplemental” measure, captures the “excess” citations accrued by an author's h-core publications beyond those already counted by the h-index.[79] These two principal “supplementary” indices are further discussed below.

The g-index

The g-index [78] is defined as a scientist's highest natural number of publications (i.e., “g”) that together received g² or more citations. By examining this methodology more closely, it becomes evident that g ≥ h. Because the g-index essentially expands the “h-core,” it can better differentiate among more varied and more inclusive citation patterns. Alternatively, the g-index can be interpreted as a scientist's highest natural number of publications (i.e., “g”) that have been cited “g or more times” on average.[80] Thus, it places more weight on highly cited publications. Formally, let c_j (j = 1, 2, …) denote the citation count of the jth publication in a list ranked in nonincreasing order of citations; the g-index is then derived as shown:

g-index = max { j : c_1 + c_2 + … + c_j ≥ j² }
The g-index, therefore, is defined as follows: a scientist has an index number “g” when his/her top-performing “g” papers were cited at least “g²” times in total [Figure 3]. As such, the g-index is capable of highlighting papers that have the highest overall impact.
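A corresponding sketch for the g-index (Python, using the same hypothetical record as in the h-index example above) accumulates citations over the top-ranked papers and retains the largest rank whose square is still covered by the running total:

def g_index(citations):
    """Return the g-index: the largest g whose top g papers together received at least g*g citations."""
    ranked = sorted(citations, reverse=True)   # nonincreasing order of citations
    cumulative, g = 0, 0
    for rank, count in enumerate(ranked, start=1):
        cumulative += count
        if cumulative >= rank * rank:
            g = rank
    return g

print(g_index([20, 12, 9, 7, 5, 3, 2, 1, 1, 0]))   # prints 7, versus an h-index of 5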
Figure 3: Graphical representation of the g-index. Modified from: Wikimedia commons (original image authored by Ael 2, published under creative commons attribution – shareAlike 3.0 Unported License; Available from: https://www.commons.wikimedia.org/wiki/File:Gindex1.jpg)



A higher g-index indirectly reflects that the author has “more and better papers.”[81] Egghe points out that the g-index value will always be at least as high as the h-index value and lower than the total publication number. The g-index compensates for one major shortcoming of the h-index, namely the fact that the latter provides a rather insensitive assessment of academic productivity for authors with few and/or low-cited (or noncited) publications.[78] Thus, the g-index provides better granularity with which one can more effectively differentiate the academic performance of individual authors. Further, the g-index gives relatively more weight to one or several highly cited papers, thus better highlighting the cumulative impact of a specific author. However, similar to the h-index, the g-index values are also integers, and many authors may be “classified” and “stratified” under similar g-index values without the guarantee of complete fairness, leading to dilemmas similar to those discussed under the h-index earlier. Because of the latter limitation, the g-index is not the best indicator when evaluating and comparing smaller cohorts of authors. Now that we have discussed the most established citation indices (i.e., the h-index and the g-index), let us discuss some of the less commonly used alternatives.

The a-index

The a-index [82] aims to achieve the same goal as the g-index while correcting for the fact that the h-index does not take the exact number of publication citations in the “h-core” into account. The a-index is defined as the average number of citations received by publications included in the “h-core.” As can be seen from the mathematical relationship, h ≤ a:

a-index = (c_1 + c_2 + … + c_h)/h
Using the a-index avoids the problem of integer-based scoring, thus allowing differentiation of academic productivity at an even greater level of granularity. Due to its derivation, the a-index value is usually higher than the g-index and generally much higher than the h-index. Furthermore, the a-index seems more capable of differentiating the relative performance of a group of scientists or institutions.
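A small sketch (Python, same hypothetical record as above) makes the point that the a-index is simply the mean citation count of the h-core:

def a_index(citations):
    """Return the a-index: the average number of citations of the h-core publications."""
    ranked = sorted(citations, reverse=True)   # nonincreasing order of citations
    h = sum(1 for rank, count in enumerate(ranked, start=1) if count >= rank)
    return sum(ranked[:h]) / h if h else 0.0

print(a_index([20, 12, 9, 7, 5, 3, 2, 1, 1, 0]))   # (20 + 12 + 9 + 7 + 5)/5 = 10.6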

The e-index

The e-index represents “excess citations” attributable to an author's publications within the h-core.[79] It is a useful estimate of the scientific impact of authors who are in the beginning stages of their careers and have not yet been cited sufficiently to generate a noticeable increase in their h-index or g-index. Graphical representation of the e-index is shown in [Figure 4], and its mathematical formula is as follows:

e-index = √(c_1 + c_2 + … + c_h − h²)
Figure 4: Excess citations attributable to an author (above and beyond those represented by the h-index) can be estimated using the e-index, derived from the area “above” the “h-index area” on the graph shown. This measure of scholarly productivity is useful in estimating scientific contributions of those authors who have not yet achieved sufficient per-publication citations to meaningfully contribute to their h-index. Prolific authors in the early career stages tend to have higher e-index values





The h-2-index

The h-2-index [84] is an h-index variant that is biased toward more highly cited publications. It is defined as the highest natural number “j” where an author's “j” most highly cited publications each received at least “j²” citations. The mathematical representation of the h-2-index is:

h-2-index = max { j : c_j ≥ j² }
The h-g-index

The h-g-index,[87] as the name suggests, represents a blend of the h-index and the g-index. It is designed to optimize advantages associated with both approaches while negating some of the potential disadvantages. The h-g-index attributable to a particular researcher's academic productivity is derived as the geometric average of that researcher's h-index and g-index, as follows:

h-g-index = √(h-index × g-index)
When examining the component indices, h-index ≤ h-g-index ≤ g-index. Furthermore, (h-g-index – h-index) ≤ (g-index – h-g-index) (i.e., h-g-index results are mathematically closer to the h-index than to the g-index), suggesting that while the h-g-index considers citations attributable to highly cited items (to which the h-index is relatively insensitive), it also diminishes the relative contribution of a single (or a few) very highly cited item(s), a known shortcoming of the g-index.
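As a worked example with hypothetical values (matching the record used in the earlier sketches), an author with an h-index of 5 and a g-index of 7 would have an h-g-index of √(5 × 7) ≈ 5.92; the distance to the h-index (0.92) is indeed smaller than the distance to the g-index (1.08), consistent with the relationships above.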

The maxprod index

The maxprod index [84] can be described as the greatest value obtained by multiplying the rank “j” by its corresponding citation count (i.e., “c_j”). The mathematical expression for the maxprod index is as follows:

maxprod = max { j × c_j }, taken over j = 1, 2, …, n
Inherent to the above formula, maxprod ≥ h × c_h ≥ h-2-index. According to dos Santos Rubem et al.,[86] major differences between the maxprod and h-2 indices can be observed in cases of atypical distributions of “c_j.”

The q-2-index

The q-2-index [88] represents the geometric average of the h-index and the median number of citations of items within the h-core (i.e., the so-called m-index). This specific combination helps optimize academic productivity assessments while taking advantage of favorable characteristics associated with each component index.[89],[90] The equation for the q-2-index of an author is as follows:

q-2-index = √(h-index × m-index)
From examining the equation, it is evident that the h-index ≤ q-2-index ≤ m-index and that (q-2-index - h-index) ≤ (m-index - q-2-index) (i.e., the q-2-index values will approximate the h-index more closely than the m-index).
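For instance, using hypothetical values of h = 5 and a median of m = 9 citations within the h-core, the q-2-index would be √(5 × 9) ≈ 6.71, which lies between the two component values and closer to the h-index (a difference of 1.71) than to the m-index (a difference of 2.29).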

The R-index

Another h-index derivative that takes into account the exact number of citations to publications in the “h-core” is the R-index.[83] It is defined as the square root of the total citations received by the publications included within the “h-core.” Mathematically, one can recognize that h ≤ R:

R-index = √(c_1 + c_2 + … + c_h)
The w-index

A more recent derivative on the h-index theme is the w-index.[85] The w-index can be described as the highest natural number “w” of publications that have been cited at least “10 × w” times each. Mathematically, “w” represents the rank of any citation record (c_w) within a publication list ranked according to a nonincreasing order of citations. Therefore, the w-index formula is as follows:

w-index = max { w : c_w ≥ 10 × w }
The w-index has also been referred to as the “10 h-index.”[85],[86] In summary, both the w-index and h-2-index can be considered as relatively broader reflections of the “cumulative impact” of a researcher's academic output.
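Because the h-2-index, w-index, and maxprod index all operate on the same citation list ranked in nonincreasing order, they can be computed side by side, as in the brief Python sketch below (hypothetical citation counts):

def ranked(citations):
    return sorted(citations, reverse=True)     # nonincreasing order of citations

def h2_index(citations):
    """Largest j such that the j-th ranked paper has at least j*j citations."""
    return max([j for j, c in enumerate(ranked(citations), 1) if c >= j * j], default=0)

def w_index(citations):
    """Largest w such that the w-th ranked paper has at least 10*w citations."""
    return max([j for j, c in enumerate(ranked(citations), 1) if c >= 10 * j], default=0)

def maxprod_index(citations):
    """Greatest product of rank and citation count across the ranked list."""
    return max((j * c for j, c in enumerate(ranked(citations), 1)), default=0)

record = [120, 45, 30, 12, 9, 7, 4, 2, 1, 0]   # hypothetical citation counts
print(h2_index(record), w_index(record), maxprod_index(record))   # prints: 3 3 120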

The “social” h-index

The “social” h-index is designed to reflect the researcher's impact on his or her scientific microcosm. Mathematically, the “social” h-index is a bit more complex than the other indices.[91] The formula for the “social” h-index, SOC_h(a), of an author “a” is as follows:



where A(p) denotes the set of authors of a manuscript “p,” and P(a) denotes the set of manuscripts authored by “a.” The method utilizes h(a), the h-index of author “a,” as well as the “universe” of papers that “support” the h-index of author “a” (i.e., the “h-core”).[92],[93]

The “social” h-index is one method of measuring the impact of a researcher on his or her academic sphere of influence by expanding the traditional publication “quality and quantity” considerations to include one's impact and role in furthering the careers of other scientists via collaborative and mentorship efforts.[91] Within this general paradigm, one can choose the contribution function to assign more credit for manuscripts with higher citation counts. Alternatively, one can focus more on evaluating the contribution of a paper based on the author's record at the time of publication. Finally, one can set conditions where the author is not rewarded for contributions to his or her own h-index. Unlike the original h-index, the “social” h-index can decrease over time. This occurs, for example, when a paper, which once contributed to one author's h-index, ceases to do so. However, this is highly unlikely to occur in “real-life” data and the actual measure tends to increase over time.

The notion of “socialization” of a bibliometric parameter can also be applied to other measures of academic productivity such as the g-index.[78] The paradigm can also be extended to help quantify not only the coauthors but also the indirect influence on other researchers. It is likely that if such “social” measures become more widely adopted, “clever” researchers may consciously or unconsciously start to “game” them. For example, the “social” h-index can be bolstered by adding junior researchers (who likely have fewer publications) as coauthors on the senior investigator's various projects.

The Eigenfactor

One method of evaluating the quality of a researcher's academic output is to measure the citation rate of their articles and the quality of the outlet(s) in which the articles are published.[94],[95] Over time, the evaluation of the quality of various research publications has led to more formalized ranking of research journals. Such ranking paradigms are often based on the average number of citations received on a per-paper basis within a given time frame. However, journal rankings may also be produced via review processes that include panels of experts or an accreditation/certification body. The evaluation of journal quality based on citation rates during a specific time period is common, but as discussed in earlier sections, it does have some drawbacks. For example, this traditional method of using citation data to measure overall journal quality does not take into account the characteristics of the citing journals or the specialty area of research. Differences in citation practices among disciplines often mean that high-quality journals may erroneously and/or unintentionally have their quality rankings underestimated (and vice-versa for low-quality journals in high-impact specialties). Accurate journal ranking is important for authors (e.g., when identifying high-quality research journals for article submissions) and for higher education providers (e.g., when deciding on journal subscription purchases). It is important that educational and scientific content selection focuses on maximizing value relative to the amount of money spent by end-users. This is where the Eigenfactor project attempts to provide practical, meaningful, and actionable bibliometric journal ranking information.[96],[97]

Eigenfactor scores for journals are derived in much the same way that Google's PageRank scores are calculated for webpages.[98] A webpage's PageRank will improve if it has lots of links to-and-from other webpages, and even more so if those links are from pages that also have a high PageRank. The Eigenfactor algorithm assigns importance to a journal, which in effect provides a weighting to the citations received from that particular journal. The “importance” of one journal is determined based on the quality of other journals that cite it, and the quality of those journals is determined based on the quality of the journals that cite them, and so on. Therefore, the Eigenfactor score is based on a “universe” of interlinked and interdependent publications, and uses much more than simple one-to-one “citing and cited journal” algorithms.[99] Its uniqueness provides an opportunity to utilize a large network of citations “within a field of research” as well as “between fields of research.” Within such a network, citations from journals considered very important to a particular area of expertise will carry more weight than citations from journals considered less important to that field of research. The Eigenfactor score can be considered both a measure of the importance of a journal to the scientific community and an estimate of the amount of time a research journal “consumer” is likely to spend actively using that journal when researching a topic. With the assignment of relative importance to journals and the connections mapped through the research network, the Eigenfactor becomes a robust measure of journal quality that is much less susceptible to the variations in citation patterns across different fields of research.[94],[98] In addition to the Eigenfactor score, there is also the article influence score, which is a measure of the average “per article” impact attributable to a particular journal.[94],[98] Article influence is based on per-article citations and is, therefore, comparable to the JCR impact factor.
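The toy sketch below illustrates only the iterative, PageRank-style weighting idea; it is not the published Eigenfactor algorithm, which additionally uses a 5-year citation window, excludes journal self-citations, and normalizes for the number of articles published. The journals and citation counts shown are entirely hypothetical:

# Toy power-iteration sketch of PageRank-style journal weighting (hypothetical data).
journals = ["Journal A", "Journal B", "Journal C"]
# cites[i][j] = citations from journal i to journal j (hypothetical counts)
cites = [[0, 10, 2],
         [8, 0, 4],
         [1, 6, 0]]

n = len(journals)
# Normalize each citing journal's outgoing citations so that they sum to one.
transition = [[cites[i][j] / sum(cites[i]) for j in range(n)] for i in range(n)]

weights = [1.0 / n] * n                        # start with equal influence
for _ in range(100):                           # iterate until the weights stabilize
    weights = [sum(weights[i] * transition[i][j] for i in range(n)) for j in range(n)]

# A journal cited heavily by highly weighted journals ends up with a high weight.
print({name: round(w, 3) for name, w in zip(journals, weights)})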

The citation data for the Eigenfactor project are sourced from the Thomson Reuters' JCR.[100],[101] Eigenfactor currently uses available JCR information dating back to 1995.[48] Because these data are sourced from the JCR, in addition to the Eigenfactor subject categories, scientists also have the option of using the more familiar JCR subject categories when searching and browsing. The data are not restricted to just refereed journals, but also include references cited by other JCR-listed publications (e.g., theses, news, magazines, etc.). While the Eigenfactor journal quality assessment shares many similarities with the JCR impact factor, there are some important distinctions. For example, the Eigenfactor is calculated based on citations to articles in the last 5 years compared to the JCR impact factor, which is calculated on the basis of the preceding 2 years. Articles tend to be relatively poorly cited within the initial 2 years of their publication. Consequently, the longer time frame of the Eigenfactor provides somewhat more meaningful results, especially for those disciplines in which citations take longer to accumulate.

A novel feature available through the Eigenfactor website is journal price data from http://www.journalprices.com.[102],[103] Although journal prices themselves are not incorporated into the calculations used to derive either the Eigenfactor or article influence scores, pricing data are matched with bibliometric variables to estimate the journal's “scientific value” relative to its subscription price. Thus, combining the Eigenfactor scores with journal price data gives an indication of a journal's quality in terms of value provided to the consumer or scientist at a given price-point. Furthermore, the Eigenfactor incorporates journal pricing data with economic value assigned to both Eigenfactor score and article influence score. This, in turn, helps authors, readers, and institutions determine the most cost-effective approaches to knowledge dissemination and utilization.[104]

In addition to being free and easy to use, Eigenfactor has other advantages over traditional citation metrics, including a 5-year evaluation period and the Eigenfactor algorithm itself, which reduces discipline bias and produces a more meaningful assessment of the publication's value in terms of citations. The Eigenfactor website is very transparent regarding the methodology behind relevant calculations and provides links to a number of literature sources on the subject, making it easier to understand the process.[105]

The i10 index (and other i-n indices)

The i10 index is utilized by the Google Scholar citation web service.[106],[107],[108] It measures the number of articles with 10 or more citations and is designed to supplement the h-index as a secondary assessment of academic productivity. It may be useful in identifying prolific authors whose publications have not yet had sufficient time to achieve a high enough number of citations to meaningfully contribute to one's h-index. There are also some concerns that the i10 index could be manipulated due to Google Scholar's methodology.[106],[107] In terms of its derivatives, the i10 index can easily be modified to standardize assessment across a number of authors and disciplines by assigning any arbitrary threshold number “n” of citations against which an author (or a group of authors, institutions, and journals) could be benchmarked.
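The threshold logic behind the i10 index, and its generalization to an arbitrary cutoff “n,” is simple enough to express in a few lines (Python, hypothetical citation counts):

def i_n_index(citations, n=10):
    """Number of publications with at least n citations (n = 10 yields the i10 index)."""
    return sum(1 for c in citations if c >= n)

record = [120, 45, 30, 12, 9, 7, 4, 2, 1, 0]   # hypothetical citation counts
print(i_n_index(record))                       # i10 index = 4
print(i_n_index(record, n=25))                 # a stricter "i25"-style variant = 3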

Document-level metrics

With the advent of new technologies, sophisticated publisher platforms, and widespread use of social media applications, an emerging set of metrics has allowed for measuring the actual usage of a publication, including the public or social engagement at the document-level (also referred to as article-level) unit of analysis. These document-level metrics track the usage of published knowledge by evaluating the presence of citations of scientific work in a broader repertoire of journal articles, books, published audio-visual materials, software packages, conference papers, data sets, figures, and websites. There are few limitations and many potential uses of such information, and novel metrics can be generated as long as specific types of data can be captured to determine how a work is read online, downloaded, shared among others, commented upon, recommended, viewed, and/or saved on various online reference or storage platforms.[109],[110]

Examples of some new (and potential) document-level metrics include:

  • Online downloads of a work [111]
  • Online views of a work [111],[112]
  • Bookmarks made using online reference managers (e.g., Mendeley,[113] Zotero [114])
  • Mentions of a work in social network sites [115]
  • Discussions of a work in blogs or other mass media platforms [112],[115]
  • Recommendations made using conduits for sharing of written/published work [112]
  • Comments/annotations for a work submitted to online repositories [111]
  • Commenting platforms such as PubMed Commons [116] or ResearchGate.[111]


These metrics can provide otherwise unappreciated evidence of nascent influence of a work, serve as complementary measures of impact to citations, and allow authors to highlight multiple examples of scholarly output, outside of the established realm of peer-reviewed journal articles. Document-level metrics are available from various sources and platforms such as publishers, software applications, and databases.

The Public Library of Science publishers,[117] the first to offer document-level metrics in 2009, provide the most highly developed publisher platform for document-level data. Other publishers and repositories that also offer document-level metrics include ScienceDirect,[118] PubMed Central,[119] and BioMed Central.[120] Platforms that offer data usage metrics and allow authors to share their work while providing a medium for post-publication scientific interactions include ResearchGate,[111] Academia.edu,[121] Google Scholar,[28],[33],[106] SlideShare,[122] and FigShare.[123]


Document-Level Metrics Versus Traditional Metrics


As outlined above, the new document-level metrics, however transient, rudimentary, and/or anonymous in nature, may serve as an early indicator of the impact of a scientific work. Document-level metrics represent early-stage social or public engagement indicators of how (and by whom) a work is being shared, used, commented on, and disseminated further.[124],[125] Who is reading the new work? Who is tweeting about the new work? Where are they “tweeting” from? Is the work being discussed on a blog? By whom? Is the commenter a scientist or a policy-maker, or perhaps a layperson? Are users bookmarking the work in Mendeley or ResearchGate?[111],[126] Is the work the topic of an article in the press? Is a user viewing slides in SlideShare?[44] Is a user viewing figures in FigShare?[127] For newer publications, document-level metrics may be a powerful source of data to supplement traditional methods of assessment, especially if the publication has not yet garnered citations. However, metrics based on social attention or social/public engagement should be viewed with caution until their characteristics, scientific value, and quantitative behavior are better understood.[128]


Academic Medicine: Promotion and Tenure Perspective


The majority of academic medical institutions are directing their clinicians toward the nontenure track. Reasons for this include decreasing availability of grant funding for research, the nonclinical (and thus “nonproductive”) time required for clinician scientists to be successful in research (coupled with the increasing emphasis on maintaining clinical workload to provide fiscal sustainability), as well as the newly recognized needs of increasingly diverse faculty, with novel forms of scholarship that the Internet and modern media capabilities bring to medical education and research.[129],[130],[131],[132],[133],[134],[135],[136] Practicing clinicians vary in their interests and contributions to the academic mission of an institution. Thus, while the aforementioned guidance regarding various metrics of academic promotion is helpful to an individual, not all the indices or productivity factors described in this article are universally required for promotion (or tenure). Although each metric may play a role, over the last decade academic medical centers have focused on providing their faculty members with a variety of “promotion track” options to accommodate the myriad of contributions outlined above.

For instance, at one midwestern academic medical center the nontenure pathway for faculty promotion offers three tracks: (a) the clinical scholar track, (b) the clinical educator track, and (c) the clinical excellence track.[137] None of the tracks require the acquisition of grants (although funding is viewed positively) when a candidate is being considered for promotion or tenure. In contrast to other tracks, the clinical excellence track does not require significant publication productivity. However, all tracks require an emerging “national reputation” for promotion to Associate Professor and a “national/international reputation” for promotion to the rank of Professor. National or international extramural recognition, or emerging extramural recognition, can be documented through a combination of invited lectures as a visiting professor or speaking engagements at major conferences (i.e., outside of the university's local or regional area); holding national or international office for a professional society; chairing national or international committees, being part of an editorial board, reviewing manuscripts for professional journals, etc. Candidates for promotion should also engage in active citizenship at their respective university or medical center (e.g., committee participation) and provide documentation of good teaching performance.

The clinical scholar track requires, in addition to the above, publications and participation in a major/national research project/trial, at least as a local principal investigator. Typically, participation in one trial is required for promotion to Associate Professor and participation in another one for promotion to Professor. There is usually a predetermined minimum number of publications required for promotion (set largely by each department under the guidance of the institutional leadership). Acceptable publications include manuscripts (original studies, case reports), letters, books, book chapters, and even Internet-based media contributions that are deemed relevant by the promotions committee. The number of manuscripts/academic offerings required for advancement from Assistant to Associate Professor is usually less than the quantity required to advance from Associate Professor to Professor. As highlighted above, any grants acquired will be a significant factor in support of the candidate's promotion.

The clinical educator track differs from the clinical scholar track in that the educator track usually requires fewer manuscripts or other academic “deliverables” but does require these to be in the field of education. In addition, experience as a residency or fellowship director helps the candidates' profiles, as do the requisite teaching evaluations, which must be excellent. Furthermore, teaching at conferences and organizing conferences or educational events may also be helpful when approaching promotion. Participation in a major/national research project/trial, at least as a local principal investigator, is usually needed for promotion to Associate Professor and then to Professor. Such projects preferably involve the field of education.

As mentioned above, the “clinical excellence” track does not require publications; however, it does require documentation of clinical metrics that either enhance the reputation of the academic institution nationally/internationally or cause a distinct and positive change in the practice of medicine at the medical center itself, which will enhance the institution's reputation, patient flow, and/or income on a national level. The key component of documenting achievement is the provision of detailed clinical excellence metrics, usually in the form of tables, charts, and/or flow diagrams.

Regardless of the academic advancement track chosen by each faculty member, one important trend in the area of faculty promotion is the use of dynamic changes within various established indices of scholarly productivity. This paradigm shift provides more objective assessment of faculty progress over time, thus demonstrating continued and sustained effort (or lack thereof). For example, the so-called “h-delta” or the rate of increase of the “h-index” (or the “h-trajectory”) over time has been proposed to correlate with a researcher's potential for scientific impact.[138] Within this paradigm, researchers with annual changes of <1.0 in their “h-index” have been said to have “average” scientific performance, those with “h-delta” of 1.0–2.0 were categorized as “above average,” those with “h-delta” of 2.0–3.0 defined as “excellent,” and finally those with “h-delta” of >3.0 proposed to be “stellar” performers.[138] Similar paradigms can be easily extended to nonpublication achievements (e.g., resident education, clinical excellence, etc.).
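A minimal sketch of this proposed “h-delta” banding (Python; the thresholds follow the description above, while the handling of values falling exactly on the cut-points is an assumption made here for illustration) could be:

def h_delta_category(annual_h_change):
    """Map the annual change in h-index to the performance bands proposed in the text."""
    if annual_h_change < 1.0:
        return "average"
    elif annual_h_change < 2.0:
        return "above average"
    elif annual_h_change <= 3.0:
        return "excellent"
    return "stellar"

print(h_delta_category(1.4))                   # "above average"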

In the final analysis, the promotion committee does not look at specific departmental or university requirements as final, “stand-alone” determinants. Instead, the promotion committee has the right (and indeed the duty) to make a determination for promotion using the candidate's entire dossier in a more “generalist” fashion. There is usually an element of an intangible, cognitive perception/determination that is left to the committee's discretion. However, objective extramural referee assessments and letters of support are required and considered to be of high importance. Extramural referees who do not support the candidate for promotion can be damaging to the individual's candidacy. While applicants on the scholar and educator tracks need support letters from nationally recognized referees, faculty on the clinical excellence track may be allowed to list institutional, local, and regional referees. This, of course, leaves those in the scholar and educator tracks, at times, asking questions regarding whether the clinical excellence track is an easier path to promotion. In fact, this is often not the case. Meeting the burden of evidence to demonstrate that a particular practitioner's clinical contributions have enhanced the reputation of his/her institution nationally, brought in more clinical revenue, improved operational efficiency, or resulted in superior patient outcomes can be a difficult prospect.

Following rigid guidelines is generally discouraged by university administrators; however, the requirement for extramural reputation is immutable and must be evident. After the committee makes a decision, there is usually a review of that decision at the level of the Vice Dean and then the Dean of the medical school (or an equivalent administrative position at a medical center without a medical school). Negative determinations for promotion candidates may still be overturned at these senior administrative levels. Almost no positive votes in support of a candidate are overturned at the higher levels. This leaves many applicants for academic advancement with the impression that, when compared to top institutional administrators, the committees for promotion tend to be tougher on candidates. Consequently, a “checks and balances” system may be present for the promotion process that generally favors the candidates. In the end, academic medical centers are trying to respect all contributions to the tripartite academic mission of teaching, research, and service from the widely diverse faculty that propel this mission forward.[139]


Conclusion


Traditional measures to quantify academic productivity based on “counts” (number of publications, number of citations, etc.) have numerous potential shortcomings. The digital revolution has enabled the creation of sophisticated databases and software tools that provide better methods of quantifying faculty research productivity and impact. Nearly impossible to obtain until recently, dynamic metrics of faculty performance add an additional layer of granularity to objective assessment of academic achievement. Increased competition for biomedical research funding, along with a growing emphasis by funding agencies and institutions on the demonstration of meaningful and transparent outcomes, has forced academic institutions to require more objective quantification of the impact of research on knowledge diffusion, synthesis into clinical applications, and public health outcomes. Therefore, it will become increasingly important to “go beyond the numbers” to evaluate and/or justify applications for funding or requests for promotion and/or tenure. Creating a narrative that provides proper contextual background and helps to better illustrate an academician's productivity and academic impact is far more meaningful than raw bibliometric data.

In today's competitive academic milieu, it is critical that faculty members proactively “curate” themselves. The term “curate” is based on the Latin word cura, loosely translated as “care.” Researchers need to establish their presence on author profile platforms, use contemporary strategies to enhance discoverability, consider multiple avenues of dissemination, reach beyond numbers to tell a story, and efficiently track research activities and output. Tailoring the academic productivity narrative for the intended purpose is one of the keys to meaningful communication with stakeholders and successful dissemination of academic output. Medical librarians offer substantial expertise in navigating the ever-expanding array of resources that exist to create this academic productivity narrative. While publication metrics can provide compelling documentation of faculty impact, no single metric is sufficient for measuring performance, quality, or influence by any individual author. Publication data constitute but a small portion of an author's academic and research story and do not provide a truly comprehensive picture of an academician's scientific reputation or influence. Other forms of scholarly activity regarded as meaningful and impactful include competitive grants, honors and recognition awards, patents and other forms of intellectual property, teaching activities, professional organization contributions, journal editorships, advisory board activities, mentoring efforts, and community engagement.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.

 
  References

1.
Fuller CD, Choi M, Thomas CR Jr. Bibliometric analysis of radiation oncology departmental scholarly publication productivity at domestic residency training institutions. J Am Coll Radiol 2009;6:112-8.  Back to cited text no. 1
    
2.
Dietz JS, Bozeman B. Academic careers, patents, and productivity: Industry experience as scientific and technical human capital. Res Policy 2005;34:349-67.  Back to cited text no. 2
    
3.
Ramsden P. Describing and explaining research productivity. High Educ 1994;28:207-26.  Back to cited text no. 3
    
4.
McGrail MR, Rickard CM, Jones R. Publish or perish: A systematic review of interventions to increase academic publication rates. High Educ Res Dev 2006;25:19-35.  Back to cited text no. 4
    
5.
Fox MF, Mohapatra S. Social-organizational characteristics of work and publication productivity among academic scientists in doctoral-granting departments. J High Educ 2007;78:542-71.  Back to cited text no. 5
    
6.
Svider PF, Mauro KM, Sanghvi S, Setzen M, Baredes S, Eloy JA. Is NIH funding predictive of greater research productivity and impact among academic otolaryngologists? Laryngoscope 2013;123:118-22.  Back to cited text no. 6
    
7.
Prathap G. The 100 most prolific economists using the p-index. Scientometrics 2009;84:167-72.  Back to cited text no. 7
    
8.
Pandit JJ. Measuring academic productivity: Don't drop your 'h's!*. Anaesthesia 2011;66:861-4.  Back to cited text no. 8
    
9.
10.
Narin F. Evaluative Bibliometrics: The Use of Publication and Citation Analysis in the Evaluation of Scientific Activity. Washington, D.C: Computer Horizons; 1976.  Back to cited text no. 10
    
11.
Holden G, Rosenberg G, Barker K. Bibliometrics: A potential decision making aid in hiring, reappointment, tenure and promotion decisions. Soc Work Health Care 2005;41:67-92.  Back to cited text no. 11
    
12.
Rezek I, McDonald RJ, Kallmes DF. Is the h-index predictive of greater NIH funding success among academic radiologists? Acad Radiol 2011;18:1337-40.  Back to cited text no. 12
    
13.
Yang J, Vannier MW, Wang F, Deng Y, Ou F, Bennett J, et al. A bibliometric analysis of academic publication and NIH funding. J Informetr 2013;7:318-24.  Back to cited text no. 13
    
14.
Archambault É, Vignola-Gagne É, Côté G, Larivière V, Gingrasb Y. Benchmarking scientific output in the social sciences and humanities: The limits of existing databases. Scientometrics 2006;68:329-42.  Back to cited text no. 14
    
15.
van den Brink M, Fruytier B, Thunnissen M. Talent management in academia: Performance systems and HRM policies. Hum Resour Manage J 2013;23:180-95.  Back to cited text no. 15
    
16.
Butler L. Using a balanced approach to bibliometrics: Quantitative performance measures in the Australian Research Quality Framework. Ethics Sci Environ Polit 2008;8:83-92.  Back to cited text no. 16
    
17.
Agasisti T, Catalano G, Landoni P, Verganti R. Evaluating the performance of academic departments: An analysis of research-related output efficiency. Res Eval 2012;21:2-14.  Back to cited text no. 17
    
18.
Min LH, Abdullah A, Mohamed AR. Publish or perish: Evaluating and promoting scholarly output. Contemp Issues Educ Res 2013;6:143-6.  Back to cited text no. 18
    
19.
Leong M, Bazoune A, Wallace DR, Tang V, Seering WP. Towards a Tool for Characterizing the Progression of Academic Research. In ASME 2011 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers; 2011.  Back to cited text no. 19
    
20.
Small H, Sweeney E, Greenlee E. Clustering the science citation index using co-citations. II. Mapping science. Scientometrics 1985;8:321-40.  Back to cited text no. 20
    
21.
Garfield E. The application of citation indexing to journals management. Curr Contents 1994;33:3-5.  Back to cited text no. 21
    
22.
Thomson-Reuters. Web of Science. New York, NY: Thomson Reuters; 2010.  Back to cited text no. 22
    
23.
Price DJ. Networks of scientific papers. Science 1965;149:510-5.  Back to cited text no. 23
    
24.
Bakkalbasi N, Bauer K, Glover J, Wang L. Three options for citation tracking: Google Scholar, Scopus and Web of Science. Biomed Digit Libr 2006;3:7.  Back to cited text no. 24
    
25.
Swoger B. Reference eReviews; 01 March, 2013. Available from: http://www.reviews.libraryjournal.com/2013/03/reference/ereviews/reference-ereviews-march-1-2013-2/. [Last accessed on 2015 Nov 16].  Back to cited text no. 25
    
26.
CiteSeer. CiteSeer Search Engine; 2015. Available from: http://www.citeseerx.ist.psu.edu/about/site. [Last accessed on 2015 Nov 15].  Back to cited text no. 26
    
27.
RePEc. The RePEc Project and Scholarly Societies; 2015. Available from: http://www.repec.org/docs/RePEcSchol.html. [Last accessed on 2015 Nov 16].  Back to cited text no. 27
    
28.
Harzing AW. Google Scholar: A New Data Source for Citation Analysis; 2008. Available from: http://www.harzing.com/pop_gs.htm. [Last accessed on 2015 Nov 16].  Back to cited text no. 28
    
29.
Cauteruccio F, Giovambattista I. Scholar H-Index Calculator for Google Chrome and Firefox; 2015. Available from: https://www.mat.unical.it/ianni/wiki/ScholarHIndexCalculator. [Last accessed on 2015 Nov 20].  Back to cited text no. 29
    
30.
Indiana University. Scholarometer: A Social Tool to Facilitate Citation Analysis and Help Evaluate the Impact of an Author's Publications; 2015. Available from: http://www.scholarometer.indiana.edu/. [Last accessed on 2015 Nov 20].  Back to cited text no. 30
    
31.
Younger P, Boddy K. When is a search not a search? A comparison of searching the AMED complementary health database via EBSCOhost, OVID and DIALOG. Health Info Libr J 2009;26:126-35.  Back to cited text no. 31
    
32.
Yong-Qin T. Individualized service of the EBSCOhost full text database. J Libr Inf Sci Agric 2005;9:32.  Back to cited text no. 32
    
33.
Antell K, Strothmann M, Chen X, O'Kelly K. Cross-examining google scholar. Ref User Serv Q 2013;52:279-82.  Back to cited text no. 33
    
34.
Goodman D. Web of Science (2004 Version) and Scopus. The Charleston Advisor 2005;6:5-21.  Back to cited text no. 34
    
35.
de Jong-Hofman M. Comparison of selecting, abstracting and indexing by COMPENDEX, INSPEC and PASCAL and the impact of this on manual and automated retrieval of information. Online Rev 1981;5:25-36.  Back to cited text no. 35
    
36.
Castillo C, Donato D, Gionis A. Estimating number of citations using author reputation. In: String Processing and Information Retrieval. New York: Springer; 2007.  Back to cited text no. 36
    
37.
Bird SJ. Self-plagiarism and dual and redundant publications: What is the problem? Commentary on 'Seven ways to plagiarize: Handling real allegations of research misconduct'. Sci Eng Ethics 2002;8:543-4.  Back to cited text no. 37
    
38.
Neill US. Publish or perish, but at what cost? J Clin Invest 2008;118:2368.  Back to cited text no. 38
    
39.
Birks Y, Fairhurst C, Bloor K, Campbell M, Baird W, Torgerson D. Use of the h-index to measure the quality of the output of health services researchers. J Health Serv Res Policy 2014;19:102-9.  Back to cited text no. 39
    
40.
Fye WB. Medical authorship: Traditions, trends, and tribulations. Ann Intern Med 1990;113:317-25.  Back to cited text no. 40
    
41.
Nichani AS. Whose manuscript is it anyway? The 'Write' position and number of authors. J Indian Soc Periodontol 2013;17:283-4.  Back to cited text no. 41
42.
Persson O, Glänzel W, Danell R. Inflationary bibliometric values: The role of scientific collaboration and the need for relative indicators in evaluative studies. Scientometrics 2004;60:421-32.  Back to cited text no. 42
    
43.
Green RG. Faculty rank, effort, and success: A study of publication in professional journals. J Soc Work Educ 1998;34:415-26.  Back to cited text no. 43
    
44.
Carpenter CR, Cone DC, Sarli CC. Using publication metrics to highlight academic productivity and research impact. Acad Emerg Med 2014;21:1160-72.  Back to cited text no. 44
    
45.
Batista PD, Campiteli MG, Kinouchi O. Is it possible to compare researchers with different scientific interests? Scientometrics 2006;68:179-89.  Back to cited text no. 45
    
46.
Redner S. How popular is your paper? An empirical study of the citation distribution. Eur Phys J B Condens Matter Complex Syst 1998;4:131-4.  Back to cited text no. 46
    
47.
Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci U S A 2005;102:16569-72.  Back to cited text no. 47
    
48.
Cantín M, Muñoz M, Roa I. Comparison between impact factor, eigenfactor score, and SCImago journal rank indicator in anatomy and morphology journals. Int J Morphol 2015;33:1183-8.  Back to cited text no. 48
    
49.
van Raan AF, Moed H, Van Leeuwen T. Scoping Study on the Use of Bibliometric Analysis to Measure the Quality of Research in UK Higher Education Institutions. Report to HEFCE by the Centre for Science and Technology Studies, Leiden University; 2007.  Back to cited text no. 49
    
50.
Bonzi S, Snyder HW. Motivations for citation: A comparison of self citation and citation to others. Scientometrics 1991;21:245-54.  Back to cited text no. 50
    
51.
Posner RA. The Theory and Practice of Citations Analysis, with Special Reference to Law and Economics. University of Chicago Law School, John M. Olin Law and Economics Working Paper; 1999.  Back to cited text no. 51
    
52.
Garfield E. The history and meaning of the journal impact factor. JAMA 2006;295:90-3.  Back to cited text no. 52
    
53.
Kanthraj GR. Journal impact factor. Indian J Dermatol Venereol Leprol 2006;72:322-5.  Back to cited text no. 53
54.
Kumar V, Upadhyay S, Medhi B. Impact of the impact factor in biomedical research: Its use and misuse. Singapore Med J 2009;50:752-5.  Back to cited text no. 54
    
55.
Garfield E. The Agony and the Ecstasy – The History and Meaning of the Journal Impact Factor; 2005. Available from: http://www.garfield.library.upenn.edu/papers/jifchicago2005.pdf. [Last accessed on 2015 Nov 17].  Back to cited text no. 55
    
56.
Zitt M, Small H. Modifying the journal impact factor by fractional citation weighting: The audience factor. J Am Soc Inf Sci Technol 2008;59:1856-60.  Back to cited text no. 56
    
57.
Alberts B. Impact factor distortions. Science 2013;340:787.  Back to cited text no. 57
    
58.
Cone DC, Gerson LW. Measuring the measurable: A commentary on impact factor. Acad Emerg Med 2012;19:1297-9.  Back to cited text no. 58
    
59.
Althouse BM, West JD, Bergstrom CT, Bergstrom T. Differences in impact factor across fields and over time. J Am Soc Inf Sci Technol 2009;60:27-34.  Back to cited text no. 59
    
60.
Seglen PO. Why the impact factor of journals should not be used for evaluating research. BMJ 1997;314:498-502.  Back to cited text no. 60
    
61.
Bollen J, Van de Sompel H, Hagberg A, Chute R. A principal component analysis of 39 scientific impact measures. PLoS One 2009;4:e6022.  Back to cited text no. 61
    
62.
Eyre-Walker A, Stoletzki N. The assessment of science: The relative merits of post-publication review, the impact factor, and the number of citations. PLoS Biol 2013;11:e1001675.  Back to cited text no. 62
    
63.
Yue W, Wilson CS, Rousseau R. The immediacy index and the journal impact factor: Two highly correlated derived measures. Can J Inf Libr Sci 2004;28:33-48.  Back to cited text no. 63
    
64.
Burton RE, Kebler R. The “half-life” of some scientific and technical literatures. Am Doc 1960;11:18-22.  Back to cited text no. 64
    
65.
Owlia P, Vasei M, Goliaei B, Nassiri I. Normalized impact factor (NIF): An adjusted method for calculating the citation rate of biomedical journals. J Biomed Inform 2011;44:216-20.  Back to cited text no. 65
    
66.
Gaster N, Gaster M. A critical assessment of the h-index. Bioessays 2012;34:830-2.  Back to cited text no. 66
    
67.
Sharma B, Boet S, Grantcharov T, Shin E, Barrowman NJ, Bould MD. The h-index outperforms other bibliometrics in the assessment of research performance in general surgery: A province-wide study. Surgery 2013;153:493-501.  Back to cited text no. 67
    
68.
Vanclay JK. On the robustness of the h-index. J Am Soc Inf Sci Technol 2007;58:1547-50.  Back to cited text no. 68
    
69.
Bornmann L, Daniel HD. What do we know about the h index? J Am Soc Inf Sci Technol 2007;58:1381-5.  Back to cited text no. 69
    
70.
Egghe L. Dynamic h-index: The Hirsch index in function of time. J Am Soc Inf Sci Technol 2007;58:452-4.  Back to cited text no. 70
    
71.
Oppenheim C. Using the h-index to rank influential British researchers in information science and librarianship. J Am Soc Inf Sci Technol 2007;58:297-301.  Back to cited text no. 71
    
72.
Roediger H. The h index in science: A new measure of scholarly contribution. Acad Obs 2006;19:1-6.  Back to cited text no. 72
    
73.
Glänzel W. On the opportunities and limitations of the H-index. Sci Focus 2006;1:383-91.  Back to cited text no. 73
    
74.
Kelly CD, Jennions MD. The h index and career assessment by numbers. Trends Ecol Evol 2006;21:167-70.  Back to cited text no. 74
    
75.
Van Raan AF. Comparison of the Hirsch-index with standard bibliometric indicators and with peer judgment for 147 chemistry research groups. Scientometrics 2006;67:491-502.  Back to cited text no. 75
    
76.
Zhang CT. Relationship of the h-index, g-index, and e-index. J Am Soc Inf Sci Technol 2010;61:625-8.  Back to cited text no. 76
    
77.
Egghe L. Theory and practice of the g-index. Scientometrics 2006;69:131-52.  Back to cited text no. 77
    
78.
Egghe L. An improvement of the h-index: The g-index. ISSI Newsl 2006;2:8-9.  Back to cited text no. 78
    
79.
Zhang CT. The e-index, complementing the h-index for excess citations. PLoS One 2009;4:e5429.  Back to cited text no. 79
    
80.
Schreiber M. An empirical investigation of the g-index for 26 physicists in comparison with the h-index, the A-index, and the R-index. J Am Soc Inf Sci Technol 2008;59:1513-22.  Back to cited text no. 80
    
81.
Tol RS. A rational, successive g-index applied to economics departments in Ireland. J Informet 2008;2:149-55.  Back to cited text no. 81
    
82.
Jin B. H-index: An evaluation indicator proposed by scientist. Sci Focus 2006;1:8-9.  Back to cited text no. 82
    
83.
Jin B, Liang L, Rousseau R, Egghe L. The R- and AR-indices: Complementing the h-index. Chin Sci Bull 2007;52:855-63.  Back to cited text no. 83
    
84.
Kosmulski M. MAXPROD – A new index for assessment of the scientific output of an individual, and a comparison. Cybermetrics 2007;11:1-5.  Back to cited text no. 84
    
85.
Wu Q. The w-Index: A Significant Improvement of the h-Index. arXiv Preprint arXiv: 0805.4650; 2008.  Back to cited text no. 85
    
86.
dos Santos Rubem AP, de Moura AL. Comparative analysis of some individual bibliometric indices when applied to groups of researchers. Scientometrics 2015;102:1019-35.  Back to cited text no. 86
    
87.
Alonso S, Cabrerizo FJ, Herrera-Viedma E, Herrera F. hg-index: A new index to characterize the scientific output of researchers based on the h-and g-indices. Scientometrics 2009;82:391-400.  Back to cited text no. 87
    
88.
Cabrerizo FJ, Alonso S, Herrera-Viedma E, Herrera F. q2-Index: Quantitative and qualitative evaluation based on the number and impact of papers in the Hirsch core. J Informetr 2010;4:23-8.  Back to cited text no. 88
    
89.
Schreiber M. A modification of the h-index: The h m-index accounts for multi-authored manuscripts. J Informetr 2008;2:211-6.  Back to cited text no. 89
    
90.
Bornmann L, Mutz R, Daniel HD. Are there better indices for evaluation purposes than the h index? A comparison of nine different variants of the h index using data from biomedicine. J Am Soc Inf Sci Technol 2008;59:830-7.  Back to cited text no. 90
    
91.
Cormode G, Ma Q, Muthukrishnan S, Thompson B. Socializing the h-index. J Informetr 2013;7:718-21.  Back to cited text no. 91
    
92.
Rousseau R, Ye FY. A proposal for a dynamic h-type index. J Am Soc Inf Sci Technol 2008;59:1853-5.  Back to cited text no. 92
    
93.
Ye F, Rousseau R. Probing the h-core: An investigation of the tail-core ratio for rank distributions. Scientometrics 2009;84:431-9.  Back to cited text no. 93
    
94.
Bergstrom CT, West JD, Wiseman MA. The Eigenfactor metrics. J Neurosci 2008;28:11433-4.  Back to cited text no. 94
    
95.
Fersht A. The most influential journals: Impact factor and Eigenfactor. Proc Natl Acad Sci U S A 2009;106:6883-4.  Back to cited text no. 95
    
96.
Crisp MG. Eigenfactor. Collect Manage 2008;34:53-6.  Back to cited text no. 96
    
97.
Bergstrom CT, West JD. Assessing citations with the Eigenfactor metrics. Neurology 2008;71:1850-1.  Back to cited text no. 97
    
98.
Yu P, Van de Sompel H. Networks of scientific papers. Science 1965;169:510-5.  Back to cited text no. 98
    
99.
Rizkallah J, Sin DD. Integrative approach to quality assessment of medical journals using impact factor, eigenfactor, and article influence scores. PLoS One 2010;5:e10204.  Back to cited text no. 99
    
100.
Garfield E. Use of Journal Citation Reports and Journal Performance Indicators in measuring short and long term journal impact. Croat Med J 2000;41:368-74.  Back to cited text no. 100
    
101.
Leydesdorff L. Can scientific journals be classified in terms of aggregated journal-journal citation relations using the journal citation reports? J Am Soc Inf Sci Technol 2006;57:601-3.  Back to cited text no. 101
    
102.
University of Washington. EigenFactor; 2015. Available from: http://www.eigenfactor.org/. [Last accessed on 2015 Dec 05].  Back to cited text no. 102
    
103.
Bergstrom T. Journal Cost-Effectiveness; 2015. Available from: http://www.journalprices.com/. [Last accessed on 2015 Dec 05].  Back to cited text no. 103
    
104.
Ascaso FJ. Impact factor, eigenfactor and article influence. Arch Soc Esp Oftalmol 2011;86:1-2.  Back to cited text no. 104
    
105.
EigenFactor. Eigenfactor (TM) Score and Article Influence (TM) Score: Detailed Methods; 2008. Available from: http://www.eigenfactor.org/methods.pdf. [Last accessed on 2015 Dec 05].  Back to cited text no. 105
    
106.
Delgado López-Cózar E, Robinson-García N, Torres-Salinas D. The Google Scholar experiment: How to index false papers and manipulate bibliometric indicators. J Assoc Inf Sci Technol 2014;65:446-54.  Back to cited text no. 106
    
107.
Lopez-Cozar ED, Robinson-García N, Torres-Salinas D. Manipulating Google Scholar Citations and Google Scholar Metrics: Simple, Easy and Tempting. arXiv: 1212.0638; 2012.  Back to cited text no. 107
    
108.
Jacsó P. Google Scholar author citation tracker: Is it too little, too late? Online Inf Rev 2012;36:126-41.  Back to cited text no. 108
    
109.
Lin J, Fenner M. Altmetrics in evolution: Defining and redefining the ontology of article-level metrics. Inf Stand Q 2013;25:20.  Back to cited text no. 109
    
110.
Boyack KW, Klavans R. Co-citation analysis, bibliographic coupling, and direct citation: Which citation approach represents the research front most accurately? J Am Soc Inf Sci Technol 2010;61:2389-404.  Back to cited text no. 110
    
111.
ResearchGate. ResearchGate; 2015. Available from: http://www.researchgate.net/. [Last accessed on 2015 Nov 15].  Back to cited text no. 111
    
112.
Bik HM, Goldstein MC. An introduction to social media for scientists. PLoS Biol 2013;11:e1001535.  Back to cited text no. 112
    
113.
Mendeley. Mendeley; 2015. Available from: https://www.mendeley.com/. [Last accessed on 2015 Dec 15].  Back to cited text no. 113
    
114.
Zotero. Zotero; 2015. Available from: https://www.zotero.org/. [Last accessed on 2015 Dec 05].  Back to cited text no. 114
    
115.
Bahner DP, Adkins E, Patel N, Donley C, Nagel R, Kman NE. How we use social media to supplement a novel curriculum in medical education. Med Teach 2012;34:439-44.  Back to cited text no. 115
    
116.
NLM. PubMed Commons; 2015. Available from: http://www.ncbi.nlm.nih.gov/pubmedcommons/. [Last accessed on 2015 Dec 05].  Back to cited text no. 116
    
117.
PLOS. PLOS: Open for Discovery; 2015. Available from: https://www.plos.org/. [Last accessed on 2015 Dec 05].  Back to cited text no. 117
    
118.
Elsevier. ScienceDirect; 2015. Available from: http://www.sciencedirect.com/. [Last accessed on 2015 Dec 05].  Back to cited text no. 118
    
119.
NLM. PubMed Central; 2015. Available from: http://www.ncbi.nlm.nih.gov/pmc/. [Last accessed on 2015 Dec 05].  Back to cited text no. 119
    
120.
BMC. BioMed Central; 2015. Available from: http://www.biomedcentral.com/. [Last accessed on 2015 Dec 05].  Back to cited text no. 120
    
121.
Academia.edu. Academia; 2015. Available from: https://www.academia.edu/. [Last accessed on 2015 Dec 05].  Back to cited text no. 121
    
122.
SlideShare. SlideShare; 2015. Available from: http://www.slideshare.net/. [Last accessed on 2015 Nov 15].  Back to cited text no. 122
    
123.
Figshare.com. FigShare; 2015. Available from: http://www.figshare.com/. [Last accessed on 2015 Dec 05].  Back to cited text no. 123
    
124.
Haustein S, Peters I, Sugimoto CR, Thelwall M, Larivière V. Tweeting biomedicine: An analysis of tweets and citations in the biomedical literature. J Assoc Inf Sci Technol 2014;65:656-69.  Back to cited text no. 124
    
125.
Klavans R, Boyack KW. Using global mapping to create more accurate document-level maps of research fields. J Am Soc Inf Sci Technol 2011;62:1-18.  Back to cited text no. 125
    
126.
Konkiel S, Piwowar H, Priem J. The imperative for open altmetrics. J Electron Publ 2014;17:1.  Back to cited text no. 126
    
127.
Hahnel M. Exclusive: Figshare a new open data project that wants to change the future of scholarly publishing. Impact Soc Sci Blog. 2012 Jan 18.  Back to cited text no. 127
    
128.
Altmetric. The Altmetric Bookmarklet; 2015. Available from: http://www.altmetric.com/bookmarklet.php. [Last accessed on 2015 Dec 05].  Back to cited text no. 128
    
129.
Bunton SA, Mallon WT. The continued evolution of faculty appointment and tenure policies at U.S. medical schools. Acad Med 2007;82:281-9.  Back to cited text no. 129
    
130.
Center for American Progress. Erosion of Funding for the National Institutes of Health Threatens U S Leadership in Biomedical Research; 2014. Available from: https://www.americanprogress.org/issues/economy/report/2014/03/25/86369/erosion-of-funding-for-the-national-institutes-of-health-threatens-u-s-leadership-in-biomedical-research/. [Last accessed on 2016 Mar 03].  Back to cited text no. 130
    
131.
Kubiak NT, Guidot DM, Trimm RF, Kamen DL, Roman J. Recruitment and retention in academic medicine – What junior faculty and trainees want department chairs to know. Am J Med Sci 2012;344:24-7.  Back to cited text no. 131
    
132.
Bickel J. What can be done to improve the retention of clinical faculty? J Womens Health (Larchmt) 2012;21:1028-30.  Back to cited text no. 132
    
133.
Krupat E, Pololi L, Schnell ER, Kern DE. Changing the culture of academic medicine: The C-Change learning action network and its impact at participating medical schools. Acad Med 2013;88:1252-8.  Back to cited text no. 133
    
134.
Villablanca AC, Beckett L, Nettiksimmons J, Howell LP. Improving knowledge, awareness, and use of flexible career policies through an accelerator intervention at the University of California, Davis, School of Medicine. Acad Med 2013;88:771-7.  Back to cited text no. 134
    
135.
Anderson MG, D'Alessandro D, Quelle D, Axelson R, Geist LJ, Black DW. Recognizing diverse forms of scholarship in the modern medical college. Int J Med Educ 2013;4:120.  Back to cited text no. 135
    
136.
Pickering CR, Bast RC Jr., Keyomarsi K. How will we recruit, train, and retain physicians and scientists to conduct translational cancer research? Cancer 2015;121:806-16.  Back to cited text no. 136
    
137.
The Ohio State University. Promotion and Tenure; 2016. Available from: http://www.medicine.osu.edu/faculty/promotionandtenure/pages/index.aspx. [Last accessed on 2016 Mar 03].  Back to cited text no. 137
    
138.
Bateman A. Why I love the H-Index; 2012. Available from: http://www.blogs.plos.org/biologue/2012/10/19/why-i-love-the-h-index/. [Last accessed on 2016 Mar 07].  Back to cited text no. 138
    
139.
Evans DC, Firstenberg MS, Galwankar SC, Moffatt-Bruce SD, Nanda S, O'Mara MS, et al. International journal of academic medicine: A unified global voice for academic medical community. Int J Acad Med 2015;1:1.  Back to cited text no. 139
    

