SYMPOSIUM: RESEARCH AND ACADEMIA
Year: 2016 | Volume: 2 | Issue: 2 | Page: 187-202
Competing for impact and prestige: Deciphering the “alphabet soup” of academic publications and faculty productivity metrics
Ashish Ranjan1, Rajan Kumar1, Archana Sinha2, Sudip Nanda2, Kathleen A Dave3, Maria D Collette4, Thomas J Papadimos5, Stanislaw P Stawicki1
1 Department of Research and Innovation, St. Luke's University Health Network, Bethlehem, Pennsylvania, USA
2 Heart and Vascular Center, St. Luke's University Health Network, Bethlehem, Pennsylvania, USA
3 Temple University School of Medicine, St. Luke's University Hospital Campus, Bethlehem, Pennsylvania, USA
4 W. L. Estes Memorial Library, St. Luke's University Health Network, Bethlehem, Pennsylvania, USA
5 Department of Anesthesiology, University of Toledo, Toledo, Ohio, USA
Date of Submission: 02-Jun-2016
Date of Acceptance: 25-Jun-2016
Date of Web Publication: 28-Dec-2016
Stanislaw P Stawicki
Department of Research and Innovation, St. Luke's University Health Network, EW2 – Research Administration, 801 Ostrum Street, Bethlehem, Pennsylvania 18015
Source of Support: None, Conflict of Interest: None
Accurate quantification of scholarly productivity continues to pose a significant challenge to academic medical institutions seeking to standardize faculty performance metrics. Numerous approaches have been described in this domain, from subjective measures employed in the past to rapidly evolving objective assessments of today. Metrics based on publication characteristics include a variety of easily categorized, normalized, referenced, and quantifiable data points. In general, such measures can be broadly grouped as being author-, manuscript-, and publication/journal-specific. Commonly employed units of measurement are derived from the number of publications and/or citations, in various combinations and derivations. In aggregate, these metrics are utilized to more objectively assess academic productivity, mainly for the purposes of determining faculty promotion and tenure potential; evaluating grant application/renewal competitiveness; journal/publication and institutional benchmarking; faculty recruitment, retention, and placement; as well as various departmental and institutional performance assessments. This article provides an overview of different measures of academic productivity and scientific impact, focusing on bibliometric data utilization, including the advantages and disadvantages of each respective methodological approach.
The following core competencies are addressed in this article: Interpersonal skills and communication, practice-based learning and improvement, systems-based practice.
Keywords: Academic productivity metrics, bibliometric indices, impact factor, promotion and tenure
How to cite this article:
Ranjan A, Kumar R, Sinha A, Nanda S, Dave KA, Collette MD, Papadimos TJ, Stawicki SP. Competing for impact and prestige: Deciphering the “alphabet soup” of academic publications and faculty productivity metrics. Int J Acad Med 2016;2:187-202
How to cite this URL:
Ranjan A, Kumar R, Sinha A, Nanda S, Dave KA, Collette MD, Papadimos TJ, Stawicki SP. Competing for impact and prestige: Deciphering the “alphabet soup” of academic publications and faculty productivity metrics. Int J Acad Med [serial online] 2016 [cited 2022 Aug 8];2:187-202. Available from: https://www.ijam-web.org/text.asp?2016/2/2/187/196875
Introduction
The ability to quantify research productivity is becoming increasingly important in the highly competitive and ever more complex environment of modern academic medicine. Although the number of categories and items considered to be reflective of academic productivity continues to expand [Table 1], two general types of academic work can be considered to collectively constitute the “gold standard” for performance evaluation, promotion, and tenure – extramural funding and publications. Due to the rapidly changing landscape of bibliometric analysis, significant confusion persists regarding the “alphabet soup” of competing (and often overlapping) indices and assessment tools.
|Table 1: List of selected article-, journal-, and author-specific metrics mentioned throughout the manuscript. Many of these metrics form the basis for promotion and tenure determinations. Nontraditional metrics (e.g., teaching, social media, etc.) are not shown|
Broadly speaking, the term “bibliometrics” refers to the use of quantitative approaches to measure and track publication data using various document-, author-, or source-level (e.g., journal-level) elements. Within this collected body of data, one is then able to define specific characteristics, patterns, and relationships that help demonstrate an investigator's or research team's productivity, contribution quality, and/or scientific impact. Publication metrics can be used for a variety of purposes, including faculty tenure and promotion determinations, grant applications and renewals, productivity benchmarking, talent recruitment and retention efforts, as well as different administrative purposes (e.g., departmental or institutional performance reports). Despite their widespread use, there is significant confusion about many of these metrics, especially when one considers the gravity of some of the strategic decisions (e.g., talent management and resource allocation) made based on quantitative bibliometric data.
The overarching goal of this manuscript is to provide a high-level overview of established and emerging bibliometric indices focusing on their principal uses, advantages, disadvantages, and alternatives. Familiarity with existing publication metrics is essential to all stakeholders in academic medicine, especially when measuring and reporting scholarly productivity and scientific impact of academic clinicians, departments, institutions, or research groups. Thorough understanding of key bibliometric indices will be increasingly important for individual faculty members, department leaders, review committees, funding agencies, and journal editors due to greater awareness of the relationship between academic productivity, scientific impact, and the subsequent diffusion/synthesis of knowledge into clinical applications.
Citation Indexing Services
This section will outline similarities and differences between major providers of bibliographic citation indices, with subsequent discussion mentioning some of the less prominent actors in this domain. Since none of the citation databases are truly “all inclusive,” one should utilize multiple databases to achieve optimal results. In general, scientific citation indices can be a powerful source of information about author/publication impact and help establish the foundation of more advanced analyses, such as author mapping, expertise clustering, and focused impact of specific work. Due to the multitude and complexity of existing bibliometric indices, we will avoid using abbreviations or acronyms, except when referring to major citation/indexing services that are commonly known by such unique and specific acronyms.
The Institute for Scientific Information
Traditionally, citation indexing has been dominated by the Institute for Scientific Information (ISI), which is now part of the Thomson Reuters media conglomerate. The ISI publishes its citation indices in various media formats, most commonly available on the Internet under the name “Web of Science.” Web of Science is a subscription-based service that provides access to the following databases: Science Citation Index, Social Sciences Citation Index, Arts and Humanities Citation Index, Index Chemicus, Current Chemical Reactions; Conference Proceedings Citation Index: Science; as well as Conference Proceedings Citation Index: Social Science and Humanities. Web of Science features both complex and focused search options, the ability to filter and refine queries, and the option to analyze the results.
Scopus
This subscription-based indexing service, available online, is operated by the global academic publishing house Elsevier. Scopus is one of the largest abstract and citation databases of peer-reviewed literature and web resources. It also includes a variety of “smart tools” to track, analyze, and visualize research content and search results.
CiteSeer
This service can be considered both a “citation engine” and a “digital library.” CiteSeer is based on the SmealSearch (now BizSeer) engine and provides citation data, citation graph analysis, and document retrieval capabilities. Research Papers in Economics maintains databases in economics and related fields.
Google Scholar
This increasingly influential service in the bibliometrics space provides citation and search capabilities for scholarly literature across virtually all indexed disciplines and sources. It is based on a freely accessible web search engine that contains an ever-growing number of citations, full texts, nontraditional sources (e.g., government documents, nonindexed journals, books, dissertations, end-user content), and other knowledge repositories (e.g., Open Archives Initiative resources). Google Scholar is known for its speed and its ability to scan actual manuscript content in near real-time. This increases the relevance of search results and provides the user with context-specific output. In addition to basic search features, registered users can also set up individual profiles so that continuous tracking of author-specific metrics (e.g., h-index, i10-index, citations per manuscript, and citations per year) becomes available. Such profiles can then be made public and shared across existing research “virtual networks” (see subsequent sections of the manuscript). Finally, special add-ons exist for various web browsers that facilitate customized search capabilities, including real-time determination of various citation-based indices (e.g., h-index, e-index, g-index, etc.).
EBSCOhost
This platform is among the most widely used premium reference database services. Of note, EBSCOhost is an agglomeration of various repositories, only a few of which offer formal citation analysis. Compendex (COMPuterized ENgineering inDEX, formerly the Engineering Index) is one of the most comprehensive engineering literature databases.
Publication-Based Metrics of Academic Productivity
After outlining some of the fundamental tools, platforms, and principles of bibliometrics and indexing, we will turn our attention to the discussion of specific publication-based academic productivity measures. In aggregate, these metrics are largely based on outcome variables derived from data provided by indexing services outlined in previous sections. Subsequent discussion will begin with well-established, simpler measures and will gradually progress toward more advanced and complex topics in this area.
Number of research papers
A very simple and easily quantifiable measure of research productivity (and impact) is the number of research papers published by an author. In general, the more prolific the author, the greater their scientific impact. However, this approach has a number of important limitations. Academic clinicians may become tempted to bolster their apparent research output by resorting to double or redundant publication, questionably meritorious submissions, self-plagiarism, and reporting based on the so-called “minimal publishable unit.” Some authors are also willing to sacrifice “publication quality” for “publication quantity.” This, in turn, may lead to lower than expected impact. Collectively, the above phenomena increase the complexity of the peer review process while reducing the per-manuscript “information yield” for busy bedside clinicians who are often overwhelmed with the totality of available information. Consequently, a simple tally of the number of publications authored or coauthored by a single academic clinician is arguably a poor method to assess true research productivity and/or impact. One additional metric related to peer-reviewed journal articles is the number of original research articles versus review articles (or other publication types, e.g., case reports, communications, editorials, and letters). Original manuscripts represent primary sources of knowledge based on original research, whereas review articles (and other “derivative” publications) serve as secondary sources of “processed” knowledge on a specific subject.
Authorship order or author “status”
It is an accepted practice in academia that individuals listed first or last on the author by-line are recognized as having contributed the bulk of the work toward project completion and manuscript publication. This measure may be most relevant when assessing manuscripts with a limited number of authors, mainly because authorship effort attribution is relatively straightforward in such cases. However, this method is less likely to be a true expression of academic productivity when evaluating multicenter or multiauthor publications. In such cases, it is best to use estimates of per-author percentage effort to determine the actual level of contribution. Along the same lines, some institutions require that authors state their estimated “percent effort” for each publication submitted during promotion and tenure considerations. Among other metrics within this general category of productivity assessments is the percentage of manuscripts accepted (e.g., the relationship between the total number of manuscript submissions and acceptances for a specific author). Although primarily reflective of the quality of work submitted, the latter measure may also be skewed by factors such as the author's reputation and the overall impact of the journals to which the work was submitted.
For a broader assessment of an author's impact, one can look at the number of peer-reviewed journals in which an academic clinician has published. A documented record of contributions to journals in various specialty areas (and of varying impact) is indicative of thematic diversity, collaborative efforts, and overall depth of scientific expertise. Such diversification can be used to create a compelling narrative of interdisciplinary or translational research efforts. Conversely, an author who publishes exclusively or nearly exclusively in a small number of subspecialty journals may be seen as having created the “narrow and deep niche” that academia and traditional funding sources typically covet.
Total and per-article citations
An author's total number of citations is a general, but by no means absolute, guide to their academic productivity. When looking at this particular metric, one must remember that both scientific impact and the number of career citations may vary significantly across disciplines, highlighting the need for data normalization. However, it is reasonable to say that the number of citations attributed to a particular article can be a proxy for the work's overall quality. Some of the most widely used bibliometric indices, such as the impact factor and the h-index, have been built around measuring the number of citations. However, concerns have been raised about the validity and utility of such citation-based systems. First, older publications have had more time to accrue citations than newer manuscripts, resulting in potential for bias if this is not normalized or otherwise corrected. Second, early reports of scientific findings, which at the time of initial publication may be at odds with the broadly held beliefs or expectations of the scientific community, are often not cited until some years have passed. This phenomenon is known as the “Mendel effect.” Third, manuscript impact can be subject to some degree of manipulation through deliberate self-citation by the primary author or reciprocal citations by colleagues (as opposed to true, unbiased scientific impact).
Publication Metrics Based on Impact
Journal impact factor score
An important and predictive measure of research impact is the journal impact factor (JIF) of the publication in which the manuscript appears. The JIF is computed for a specific journal/publication by ascertaining the average number of citations per article per year. As such, the JIF can be used as an indication of the relative influence of a journal within its field, where journals with higher impact factors are deemed more influential or prestigious than those with lower impact factors. Simple JIF calculations are generally compiled on an annual basis. However, different iterations of the impact factor exist, with different time horizons involved (e.g., 3-year, 5-year, etc.).
The JIF or Journal Citation Reports (JCR) score is derived by dividing the number of citations received during a given year by articles that the journal published in the two previous years by the total number of articles the journal published in those 2 years. For example, a JCR impact factor score of 2.0 means that, on average, articles published in a specific journal 1 or 2 years ago have been cited twice. One major flaw in this paradigm is that the JCR impact factor score does not provide significant insight regarding a specific manuscript or its author(s). Rather, it is a unit of analysis based on the journal as a whole and only a secondary reflection of author-based performance or impact (e.g., one could speculate that a higher impact factor journal is generally associated with higher quality of both “authorship and science”). In an era of both instantaneous and free access to scientific information, the true value of “impact” is determined by researchers within the “free and open market” of research ideas. Of note, the impact factor is also limited to journals indexed by the Thomson Reuters Web of Science database (approximately 15% of all existing journal titles). Finally, the JCR impact factor score encompasses citations only from the previous 2 years, whereas the full impact of an individual publication is often measured over decades, with some articles only “noticed” by the scientific community after specific conditions for the emergence of such interest materialize.
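To make the 2-year calculation described above concrete, here is a minimal sketch; the journal figures below are hypothetical, not drawn from any real publication.

```python
def impact_factor(citations_to_prev_two_years, items_prev_two_years):
    """Two-year JCR-style impact factor: citations received this year
    to items the journal published in the two preceding years, divided
    by the number of citable items published in those two years."""
    return citations_to_prev_two_years / items_prev_two_years

# Hypothetical journal: 300 citations in 2016 to its 2014-2015 articles,
# of which there were 150 in total -> impact factor of 2.0
print(impact_factor(300, 150))  # 2.0
```

The same arithmetic extends to the 5-year variant by widening the publication window.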
Another limitation of this methodology is that both journals and authors can manipulate the JCR impact factor through intentional self-citation and by encouraging peer reviewers to suggest that authors consider additional source citations from that same journal during a specified time frame. In addition, a journal can adopt editorial policies that increase its impact factor without necessarily increasing the quality of the science it publishes. For example, certain journals may publish a larger percentage of review articles, which generally are cited more than original research reports or cases. Thus, review articles can raise the impact factor of the journal, and “review journals” will therefore tend to have relatively higher impact factors in their respective fields. Finally, some journal editors set their submission policy to “by invitation only,” with a strong preference toward senior scientists publishing “high impact” or “likely to be cited” papers that increase the JIF.
Indices related to impact factor
The immediacy index measures the average number of times an article, published in a particular journal during a specific year, is cited over the course of that same year. Cited half-life measures the number of years, going back from the current year, that account for half of the total citations received by the cited journal in the current year. For example, if a journal's cited half-life in 2005 is 5, citations to articles the journal published from 2001 to 2005 account for half of all citations the journal received in 2005, with the other half going to articles published before 2001. The aggregate impact factor for a subject category is calculated by taking into account the number of citations to all journals in the subject category and the number of articles from all the journals in that category. This particular measure is important to consider when differences between scientific specialties and subspecialties need to be factored into promotion and tenure deliberations (e.g., when one discipline tends to have higher/lower aggregate impact than another). Key concepts related to the immediacy index, the impact factor, and cited half-life are demonstrated graphically in [Figure 1].
|Figure 1: Graphical representation of the concepts of (a) immediacy index, impact factor, and (b) cited half-life. In this example, the journal has a cited half-life of 8 years|
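The cited half-life logic can be sketched as follows. The function and its input format are illustrative assumptions: a mapping from publication year to the number of citations the journal received in the current year pointing at items published in that year.

```python
def cited_half_life(citations_by_year, current_year):
    """Number of years, counting back from current_year, whose cited
    items account for at least half of all citations the journal
    received in current_year."""
    total = sum(citations_by_year.values())
    running = 0
    back = 0
    # Walk backward in time, accumulating citations until half is reached
    for back, year in enumerate(
            range(current_year, min(citations_by_year) - 1, -1), start=1):
        running += citations_by_year.get(year, 0)
        if running >= total / 2:
            return back
    return back

# Hypothetical breakdown mirroring the article's example: the five most
# recent years (2001-2005) hold half of the 100 citations received in 2005
data = {2005: 10, 2004: 10, 2003: 10, 2002: 10, 2001: 10, 2000: 25, 1999: 25}
print(cited_half_life(data, 2005))  # 5
```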
The h-index
The h-index, sometimes called the Hirsch index or “Hirsch number,” was first developed by Hirsch as a method to quantify the impact and quality of the published work of a particular scientist or scholar. In this paradigm, a scientist has an h-index of “h” if “h” of his/her “n” papers have at least “h” citations each, and the other (“n − h”) papers have no more than “h” citations each [Figure 2]. In other words, an author with an index of “h” has published “h” manuscripts, each of which has been cited in other papers at least “h” times. In a practical example, if an author's h-index is 15, the academician has 15 papers that were cited 15 times or more. If the h-index is 20, one must have at least 20 papers, each cited 20 times or more.
|Figure 2: Graphical representation of the “h-index,” (a) the graph on the left shows the academic record for an author with only one publication and one associated citation, (b) the graph on the right shows the academic record for an author with at least three publications, each of which having been cited at least 3 times|
Mathematical formula for the h-index is as follows:
h = max{j : cj ≥ j},
where cj (j = 1, 2, …, h, …, n) denotes the citation record of the jth publication in a list ranked in nonincreasing order of citations. Both cj and h are natural numbers. Publications cited at least “h” times are said to be in the “h-core.” Thus, the h-index is a single-number indicator for evaluating the scientific achievement of a given researcher. It “ignores” the long tails of the publication (quantity) and citation (quality) distributions and focuses on where the numbers of papers and citations intersect, which corresponds to the “middle part” concept of Zipf's law. It assesses a scientist's performance using a blended approach that measures both the quantity and quality of his/her papers taken together.
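The definition above translates directly into a few lines of code. This is an illustrative sketch; the citation counts are hypothetical.

```python
def h_index(citations):
    """h = largest rank j (1-based) such that the j-th most-cited
    paper has at least j citations."""
    ranked = sorted(citations, reverse=True)  # nonincreasing order of cj
    return max((j for j, c in enumerate(ranked, start=1) if c >= j),
               default=0)

# Hypothetical author with five papers cited 10, 8, 5, 4, and 3 times:
# four papers have at least 4 citations each, so h = 4
print(h_index([10, 8, 5, 4, 3]))  # 4
```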
The h-index integrates the evaluation of productivity (the number of a scientist's total publications) and impact (the influence of the papers on the scientist's peers) into a single, easy-to-compute indicator. Given increased citation data availability and accessibility, the information needed to calculate the h-index is becoming progressively easier to obtain. The index itself is relatively insensitive to both infrequently and highly cited papers, which may somewhat distort the assessment of overall author productivity and impact relative to other approaches discussed in this manuscript. The h-index is notably free from the influence of document type when compared to counting “total publications” or “total citations.” Among its disadvantages, the h-index may not be an appropriate indicator for comparing performance across different disciplines (although it may be useful in standardizing expected academic performance within a discipline). This is because disciplines that are “in demand” (e.g., hematology-oncology) tend to generate both more publications and citations than disciplines that are more “insular” in character (e.g., pediatric metabolic disorders). The h-index may also under- or over-estimate a researcher's achievement in terms of coauthorship because scientists with potentially varied levels of achievement may have the same h-index value. Consequently, relying on the h-index without a broader context is not recommended when determining academic promotion and tenure.
Finally, using data obtained directly from Web of Science in isolation might present another problem when calculating the h-index, considering that it essentially “misses” about 80–85% of publications that might potentially be citing a particular source. This particular limitation has been largely remedied by Google Scholar (and other similar open or free access knowledge repositories), with a global search capability enabling near real-time citation data reporting that is inclusive of the entire searchable Internet “publication universe.”
To overcome some of the disadvantages inherent to the h-index, modifications and adjuncts such as the g-index and e-index have been proposed. It has been previously pointed out that the h-index is “insensitive” to the “tail” of papers with citations that do not reach the “h” value, yet may cumulatively account for a significant proportion of the author's scientific impact. Consequently, Egghe modified the index by replacing the idea of counting the citations received by each individual article with the concept of counting the total accumulated citations of the top “g” articles in the so-called g-index. The e-index, another “supplemental” index, is a measure of the impact of manuscripts that have not yet reached a particular author's “h-index threshold” as outlined above. These two principal “supplementary” indices are further discussed below.
The g-index is defined as a scientist's highest natural number of publications “g” that together received g² or more citations. By examining this methodology more closely, it becomes evident that g ≥ h. Because the g-index essentially expands the “h-core,” it can better differentiate among more varied and more inclusive citation patterns. Alternatively, the g-index can be interpreted as a scientist's highest natural number of publications “g” that have been cited “g or more times” on average. Thus, it places more weight on highly cited publications. Formally, let cj (j = 1, 2,…) denote the citation count of the jth publication in a list ranked in nonincreasing order of citations; then the g-index is derived as shown:
g = max{j : c1 + c2 + … + cj ≥ j²}
The g-index, therefore, is defined as follows: a scientist has an index number “g” when his/her top-performing “g” papers were cited at least “g²” times in total [Figure 3]. As such, the g-index is capable of highlighting papers that have the highest overall impact.
A higher g-index indirectly reflects that the author has “more and better papers.” Egghe points out that the g-index value will always be higher than the h-index value and lower than the total publication number. The g-index compensates for one major shortcoming of the h-index, or the fact that the latter provides a rather insensitive assessment of academic productivity for authors with few and/or low-cited (or noncited) publications. Thus, the g-index provides better granularity, with which one can more effectively differentiate academic performance of individual authors. Further, the g-index gives relatively more weight to one or several highly cited papers, thus better highlighting the cumulative impact of a specific author. However, similar to the h-index, the g-index values are also integers and many authors may be “classified” and “stratified” under similar g-index values without the guarantee of complete fairness, leading to similar dilemmas that were discussed under the h-index earlier. Because of the latter limitation, the “g-index” is not the best indicator when evaluating and comparing smaller cohorts of authors. Now that we have discussed the most established citation indices (e.g., the h-index and the g-index), let us discuss some of the less commonly used alternatives.
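A short sketch of the g-index computation follows; the citation counts are hypothetical, and the rank is capped at the actual number of papers (one common convention).

```python
def g_index(citations):
    """Largest rank g whose top-g papers together received at least
    g^2 citations (capped at the number of papers)."""
    ranked = sorted(citations, reverse=True)  # nonincreasing order of cj
    total, g = 0, 0
    for j, c in enumerate(ranked, start=1):
        total += c  # cumulative citations of the top-j papers
        if total >= j * j:
            g = j
    return g

# Same hypothetical author as before: cumulative citations reach 30
# over five papers, and 30 >= 5^2, so g = 5 (while h = 4)
print(g_index([10, 8, 5, 4, 3]))  # 5
```

Note that g ≥ h for the same record, as the text states.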
The a-index aims to achieve the same goal as the g-index while correcting for the fact that the h-index does not take the exact number of publication citations in the “h-core” into account. The a-index is defined as the average number of citations received by the publications included in the “h-core.” As can be seen from the mathematical relationship, h ≤ a:
a = (c1 + c2 + … + ch)/h
Using the a-index avoids the problem of integer-based scoring, thus allowing differentiation of academic productivity at an even greater level of granularity. Due to its derivation, the a-index value is usually higher than the g-index and generally much higher than the h-index. Furthermore, the a-index seems more capable of differentiating the relative performance of a group of scientists or institutions.
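Sketched in code, the a-index simply averages the h-core citations; the input below is a hypothetical record.

```python
def a_index(citations):
    """Average citations of the papers in the h-core."""
    ranked = sorted(citations, reverse=True)  # nonincreasing order of cj
    h = max((j for j, c in enumerate(ranked, start=1) if c >= j), default=0)
    return sum(ranked[:h]) / h if h else 0.0

# h = 4 for this record, and the h-core holds 10 + 8 + 5 + 4 = 27 citations
print(a_index([10, 8, 5, 4, 3]))  # 6.75
```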
The e-index represents the “excess citations” attributable to an author's publications within the h-core. It is a useful estimate of the scientific impact of authors who are in the beginning stages of their careers and have not yet been cited sufficiently to generate a noticeable increase in their h-index or g-index. Graphical representation of the e-index is shown in [Figure 4], and its mathematical formula is as follows:
e = √((c1 + c2 + … + ch) − h²)
|Figure 4: Excess citations attributable to an author (above and beyond those represented by the h-index) can be estimated using the e-index, derived from the area “above” the “h-index area” on the graph shown. This measure of scholarly productivity is useful in estimating scientific contributions of those authors who have not yet achieved sufficient per-publication citations to meaningfully contribute to their h-index. Prolific authors in the early career stages tend to have higher e-index values|
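The e-index definition above can be sketched as follows, again using a hypothetical citation record.

```python
import math

def e_index(citations):
    """Square root of the 'excess' citations in the h-core, i.e. those
    above the h^2 already accounted for by the h-index itself."""
    ranked = sorted(citations, reverse=True)  # nonincreasing order of cj
    h = max((j for j, c in enumerate(ranked, start=1) if c >= j), default=0)
    return math.sqrt(sum(ranked[:h]) - h * h)

# h = 4 and the h-core holds 27 citations, so e = sqrt(27 - 16) = sqrt(11)
print(round(e_index([10, 8, 5, 4, 3]), 3))  # 3.317
```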
The h-2-index is an h-index variant that is biased toward more highly cited publications. It is defined as the highest natural number “j” such that an author's “j” most highly cited publications each received at least “j²” citations. Mathematical representation of the h-2-index is:
h-2 = max{j : cj ≥ j²}
The h-g-index, as the name suggests, represents a blend of the h-index and g-index. It is designed to retain the advantages associated with both approaches while negating some of their potential disadvantages. The h-g-index attributable to a particular researcher's academic productivity is derived as the geometric average of that researcher's h-index and g-index, as follows:
h-g = √(h × g)
When examining the component indices, h-index ≤ h-g-index ≤ g-index. Furthermore, (h-g-index − h-index) ≤ (g-index − h-g-index) (e.g., h-g-index results are mathematically closer to the h-index than to the g-index), suggesting that while the h-g-index considers citations attributable to highly cited items (against which the h-index is relatively robust), it also diminishes the relative contribution of a single (or a few) very highly cited item(s), a known shortcoming of the g-index.
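As a quick numerical check of the geometric-mean definition, using the hypothetical h = 4 and g = 5 from the earlier examples:

```python
import math

def hg_index(h, g):
    """Geometric mean of a researcher's h-index and g-index."""
    return math.sqrt(h * g)

hg = hg_index(4, 5)
print(round(hg, 3))  # 4.472, which indeed lies between h = 4 and g = 5
```

Note that 4.472 sits closer to h (distance 0.472) than to g (distance 0.528), consistent with the inequality stated above.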
The maxprod index
The maxprod index can be described as the greatest value obtained by multiplying the rank “j” by its corresponding citation count “cj.” The mathematical expression for the maxprod index is as follows:
maxprod = max{j × cj}
Inherent to the above formula, maxprod ≥ h × ch ≥ h². According to dos Santos Rubem et al., major differences between the maxprod and h-2 indices can be observed in cases of atypical distributions of “cj.”
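The maxprod formula is straightforward to sketch; the record below is hypothetical.

```python
def maxprod(citations):
    """Greatest product of rank j and citation count c_j over the
    publication list ranked in nonincreasing order of citations."""
    ranked = sorted(citations, reverse=True)
    return max((j * c for j, c in enumerate(ranked, start=1)), default=0)

# Rank-by-citation products for [10, 8, 5, 4, 3] are 10, 16, 15, 16, 15
print(maxprod([10, 8, 5, 4, 3]))  # 16
```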
The q-2-index represents the geometric average of the h-index and the median number of citations of the items within the h-core (i.e., the so-called m-index). This specific combination helps optimize academic productivity assessments while taking advantage of the favorable characteristics associated with each component index. The equation for the q-2-index of an author is as follows:
q-2 = √(h × m)
From examining the equation, it is evident that the h-index ≤ q-2-index ≤ m-index and that (q-2-index - h-index) ≤ (m-index - q-2-index) (i.e., the q-2-index values will approximate the h-index more closely than the m-index).
Another h-index derivative that regards the exact number of citations to publications in the “h-core” is the R-index. It is defined as the square root of the total citations received by the publications included within the “h-core.” Mathematically, one can recognize that h ≤ R:
R = √(c1 + c2 + … + ch)
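The R-index can be sketched directly from that definition; the input is again hypothetical.

```python
import math

def r_index(citations):
    """Square root of the total citations in the h-core."""
    ranked = sorted(citations, reverse=True)  # nonincreasing order of cj
    h = max((j for j, c in enumerate(ranked, start=1) if c >= j), default=0)
    return math.sqrt(sum(ranked[:h]))

# h = 4 and the h-core holds 27 citations, so R = sqrt(27)
print(round(r_index([10, 8, 5, 4, 3]), 3))  # 5.196
```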
A more recent derivative on the h-index theme is the w-index. The w-index can be described as the highest natural number “w” of publications that have been cited at least “10 × w” times each. Mathematically, “w” represents the rank of a citation record (cw) within a publication list ranked in nonincreasing order of citations. Therefore, the w-index formula is as follows:
w = max{w : cw ≥ 10 × w}
The w-index has also been referred to as the "10 h-index." In summary, both the w-index and the h-2-index can be considered relatively broader reflections of the "cumulative impact" of a researcher's academic output.
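The w-index rule above can be sketched with hypothetical data (helper name is ours). Because the ranked citation counts are nonincreasing, the condition cw ≥ 10w holds for a prefix of the ranks, so counting the qualifying ranks yields w:

```python
def w_index(citations):
    # w = highest number of papers with at least 10*w citations each;
    # with nonincreasing c_j, the ranks satisfying c_j >= 10*j form a prefix
    ranked = sorted(citations, reverse=True)
    return sum(1 for j, c in enumerate(ranked, start=1) if c >= 10 * j)

print(w_index([45, 32, 21, 10, 3]))   # two papers have >= 20 citations, so w = 2
```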
The “social” h-index
The "social" h-index is designed to reflect the researcher's impact on his or her scientific microcosm. Mathematically, the "social" h-index is more complex than the other indices. The formula for the "social" h-index, SOCh(a), of an author "a," is as follows:
where A(p) denotes the set of authors of a manuscript "p," and P(a) denotes the set of manuscripts authored by "a." The method utilizes h(a), the h-index of author "a," as well as the "universe" of papers that "support" the h-index of author "a" (i.e., the "h-core").
The “social” h-index is one method of measuring the impact of a researcher on his or her academic sphere of influence by expanding the traditional publication “quality and quantity” considerations to include one's impact and role in furthering the careers of other scientists via collaborative and mentorship efforts. Within this general paradigm, one can choose the contribution function to assign more credit for manuscripts with higher citation counts. Alternatively, one can focus more on evaluating the contribution of a paper based on the author's record at the time of publication. Finally, one can set conditions where the author is not rewarded for contributions to his or her own h-index. Unlike the original h-index, the “social” h-index can decrease over time. This occurs, for example, when a paper, which once contributed to one author's h-index, ceases to do so. However, this is highly unlikely to occur in “real-life” data and the actual measure tends to increase over time.
The notion of “socialization” of a bibliometric parameter can also be applied to other measures of academic productivity such as the g-index. The paradigm can also be extended to help quantify not only the coauthors but also the indirect influence on other researchers. It is likely that if such “social” measures become more widely adopted, “clever” researchers may consciously or unconsciously start to “game” them. For example, the “social” h-index can be bolstered by adding junior researchers (who likely have fewer publications) as coauthors on the senior investigator's various projects.
The Eigenfactor score
One method of evaluating the quality of a researcher's academic output is to measure the citation rate of their articles and the quality of the outlet(s) in which the articles are published. Over time, the evaluation of the quality of various research publications has led to more formalized ranking of research journals. Such ranking paradigms are often based on the average number of citations received on a per-paper basis within a given time frame. However, journal rankings may also be produced via review processes that include panels of experts or an accreditation/certification body. The evaluation of journal quality based on citation rates during a specific time period is common but, as discussed in earlier sections, has some drawbacks. For example, this traditional method of using citation data to measure overall journal quality does not take into account the characteristics of the citing journals or the specialty area of research. Differences in citation practices among disciplines often mean that high-quality journals may erroneously and/or unintentionally have their quality rankings underestimated (and vice versa for low-quality journals in high-impact specialties). Accurate journal ranking is important for authors (e.g., when identifying high-quality research journals for article submissions) and for higher education providers (e.g., when deciding on journal subscription purchases). It is important that educational and scientific content selection focuses on maximizing value relative to the amount of money spent by end-users. This is where the Eigenfactor project attempts to provide practical, meaningful, and actionable bibliometric journal ranking information.
Eigenfactor scores for journals are derived in much the same way that Google's PageRank scores are calculated for webpages. A webpage's PageRank improves if it has many links to and from other webpages, and even more so if those links come from pages that themselves have a high PageRank. The Eigenfactor algorithm assigns importance to a journal, which in effect provides a weighting to the citations received from that particular journal. The "importance" of one journal is determined by the quality of the journals that cite it, the quality of those journals is determined by the quality of the journals that cite them, and so on. Therefore, the Eigenfactor score is based on a "universe" of interlinked and interdependent publications and uses much more than the simple one-to-one "citing and cited journal" algorithms. Its uniqueness provides an opportunity to utilize a large network of citations both "within a field of research" and "between fields of research." Within such a network, citations from journals considered very important to a particular area of expertise carry more weight than citations from journals considered less important to that field of research. The Eigenfactor score can be considered both a measure of the importance of a journal to the scientific community and an estimate of the amount of time a research journal "consumer" is likely to spend actively using that journal when researching a topic.
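The iterative logic described above can be illustrated with a PageRank-style power iteration over a toy journal citation network. This is only a simplified sketch under assumed parameters (an invented three-journal matrix, a 0.85 damping factor, and uniform teleportation); the actual Eigenfactor algorithm differs in details such as excluding journal self-citations and weighting by article counts:

```python
# Toy network: cites[j][i] = citations from journal j to journal i (invented data)
cites = [
    [0, 3, 1],
    [5, 0, 1],
    [1, 2, 0],
]

def influence_weights(cites, damping=0.85, iters=500):
    # Each citing journal splits one unit of influence across the journals it
    # cites; repeated application converges to a stationary weighting in which
    # citations from influential journals count for more.
    n = len(cites)
    out = [sum(row) or 1 for row in cites]   # total outgoing citations per journal
    v = [1.0 / n] * n
    for _ in range(iters):
        v = [damping * sum(cites[j][i] / out[j] * v[j] for j in range(n))
             + (1 - damping) / n
             for i in range(n)]
    total = sum(v)
    return [x / total for x in v]

print(influence_weights(cites))
```

Note that the ranking such an iteration produces need not follow raw citation counts: the final weights depend on who cites whom, which is precisely the point of the Eigenfactor approach.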
With the assignment of relative importance to journals and the connections mapped through the research network, the Eigenfactor becomes a robust measure of journal quality that is much less susceptible to variations in citation patterns across different fields of research. In addition to the Eigenfactor score, there is also the article influence score, which is a measure of the average "per article" impact attributable to a particular journal. Article influence is based on per-article citations and is, therefore, comparable to the JCR impact factor.
The citation data for the Eigenfactor project are sourced from the Thomson Reuters JCR. Eigenfactor currently uses available JCR information dating back to 1995. Because these data are sourced from the JCR, in addition to the Eigenfactor subject categories, scientists also have the option of using the more familiar JCR subject categories when searching and browsing. The data are not restricted to refereed journals alone, but also include references cited by other JCR-listed publications (e.g., theses, news items, magazines, etc.). While the Eigenfactor journal quality assessment shares many similarities with the JCR impact factor, there are some important distinctions. For example, the Eigenfactor is calculated based on citations to articles published in the last 5 years, whereas the JCR impact factor is calculated on the basis of the preceding 2 years. Articles tend to be relatively poorly cited within the initial 2 years of their publication. Consequently, the longer time frame of the Eigenfactor provides somewhat more meaningful results, especially for disciplines in which citations take longer to accumulate.
A novel feature available through the Eigenfactor website is journal price data from http://www.journalprices.com. Although journal prices themselves are not incorporated into the calculations used to derive either the Eigenfactor or article influence scores, pricing data are matched with bibliometric variables to estimate a journal's "scientific value" relative to its subscription price. Thus, combining the Eigenfactor scores with journal price data gives an indication of a journal's quality in terms of value provided to the consumer or scientist at a given price-point. Furthermore, the Eigenfactor incorporates journal pricing data, with economic value assigned to both the Eigenfactor score and the article influence score. This, in turn, helps authors, readers, and institutions determine the most cost-effective approaches to knowledge dissemination and utilization.
In addition to being free and easy to use, Eigenfactor has other advantages over traditional citation metrics, including a 5-year evaluation period and the Eigenfactor algorithm itself, which reduces discipline bias and produces a more meaningful assessment of the publication's value in terms of citations. The Eigenfactor website is very transparent regarding the methodology behind relevant calculations and provides links to a number of literature sources on the subject, making it easier to understand the process.
The i10 index (and other i-n indices)
The i10 index is utilized by the Google Scholar citation web service. It measures the number of articles with 10 or more citations and is designed to supplement the h-index as a secondary assessment of academic productivity. It may be useful in identifying authors who are prolific enough to produce a large number of publications, but whose publications have not yet had sufficient time to accumulate enough citations to meaningfully contribute to the h-index. There are also some concerns that the i10 index could be manipulated due to Google Scholar's methodology. In terms of its derivatives, the i10 index can easily be modified to standardize assessment across a number of authors and disciplines by assigning an arbitrary threshold number "n" of citations against which an author (or a group of authors, institutions, or journals) could be benchmarked.
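The generalization described above is trivial to express in code. The sketch below (hypothetical helper and data) counts publications meeting any citation threshold "n," with n = 10 reproducing the i10 index:

```python
def i_n_index(citations, n=10):
    # Number of publications cited at least n times; n = 10 gives the i10 index
    return sum(1 for c in citations if c >= n)

papers = [45, 32, 21, 10, 3]     # hypothetical citation counts
print(i_n_index(papers))         # i10 = 4
print(i_n_index(papers, n=25))   # i25 = 2
```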
Document-level metrics
With the advent of new technologies, sophisticated publisher platforms, and widespread use of social media applications, an emerging set of metrics has allowed for measuring the actual usage of a publication, including public or social engagement, at the document-level (also referred to as article-level) unit of analysis. These document-level metrics track the usage of published knowledge by evaluating the presence of citations of scientific work in a broader repertoire of journal articles, books, published audio-visual materials, software packages, conference papers, data sets, figures, and websites. There are few limitations and many potential uses of such information, and novel metrics can be generated as long as specific types of data can be captured to determine how a work is read online, downloaded, shared, commented upon, recommended, viewed, and/or saved on various online reference or storage platforms.
Examples of some new (and potential) document-level metrics include:
- Online downloads of a work
- Online views of a work
- Bookmarks made using online reference managers (e.g., Mendeley, Zotero)
- Mentions of a work on social network sites
- Discussions of a work in blogs or other mass media platforms
- Recommendations made using conduits for sharing of written/published work
- Comments/annotations for a work submitted to online repositories
- Comments made on platforms such as PubMed Commons or ResearchGate.
These metrics can provide otherwise unappreciated evidence of nascent influence of a work, serve as complementary measures of impact to citations, and allow authors to highlight multiple examples of scholarly output, outside of the established realm of peer-reviewed journal articles. Document-level metrics are available from various sources and platforms such as publishers, software applications, and databases.
The Public Library of Science, the first publisher to offer document-level metrics in 2009, provides the most highly developed publisher platform for document-level data. Other publishers and repositories that also offer document-level metrics include ScienceDirect, PubMed Central, and BioMed Central. Platforms that offer data usage metrics and allow authors to share their work while providing a medium for post-publication scientific interactions include ResearchGate, Academia.edu, Google Scholar, SlideShare, and FigShare.
Document-Level Metrics Versus Traditional Metrics
As outlined above, the new document-level metrics, however transient, rudimentary, and/or anonymous in nature, may serve as an early indicator of the impact of a scientific work. Document-level metrics represent early-stage social or public engagement indicators of how (and by whom) a work is being shared, used, commented on, and disseminated further. Who is reading the new work? Who is tweeting about the new work? Where are they "tweeting" from? Is the work being discussed on a blog? By whom? Is the commenter a scientist, a policy-maker, or perhaps a layperson? Are users bookmarking the work in Mendeley or ResearchGate? Is the work the topic of an article in the press? Is a user viewing slides in SlideShare? Is a user viewing figures in FigShare? For newer publications, document-level metrics may be a powerful source of data to supplement traditional methods of assessment, especially if the publication has not yet garnered citations. However, metrics based on social attention or social/public engagement should be viewed with caution until their characteristics, scientific value, and quantitative behavior are better understood.
Academic Medicine: Promotion and Tenure Perspective
The majority of academic medical institutions are directing their clinicians toward the nontenure track. Reasons for this include decreasing availability of grant funding for research, the nonclinical (and thus "nonproductive") time required for clinician scientists to be successful in research (coupled with the increasing emphasis on maintaining clinical workload to provide fiscal sustainability), as well as the newly recognized needs of increasingly diverse faculty, with novel forms of scholarship that the Internet and modern media capabilities bring to medical education and research. Practicing clinicians vary in their interests and contributions to the academic mission of an institution. Thus, while the aforementioned guidance regarding various metrics of academic promotion is helpful to an individual, not all the indices or productivity factors described in this article are universally required for promotion (or tenure). Although each metric may play a role, over the last decade academic medical centers have focused on providing their faculty members with a variety of "promotion track" options to accommodate the myriad of contributions outlined above.
For instance, at one midwestern academic medical center, the nontenure pathway for faculty promotion offers three tracks: (a) the clinical scholar track, (b) the clinical educator track, and (c) the clinical excellence track. None of the tracks requires the acquisition of grants (although funding is viewed positively) when a candidate is being considered for promotion or tenure. In contrast to the other tracks, the clinical excellence track does not require significant publication productivity. However, all tracks require an emerging "national reputation" for promotion to Associate Professor and a "national/international reputation" for promotion to the rank of Professor. National or international extramural recognition, or emerging extramural recognition, can be documented through a combination of invited lectures as a visiting professor or speaking engagements at major conferences (i.e., outside of the university's local or regional area); holding national or international office in a professional society; chairing national or international committees; serving on an editorial board; reviewing manuscripts for professional journals; and similar activities. Candidates for promotion should also engage in active citizenship at their respective university or medical center (e.g., committee participation) and provide documentation of good teaching performance.
The clinical scholar track requires, in addition to the above, publications and participation in a major/national research project/trial, at least as a local principal investigator. Typically, participation in one trial is required for promotion to Associate Professor and participation in another one for promotion to Professor. There is usually a predetermined minimum number of publications required for promotion (set largely by each department under the guidance of the institutional leadership). Acceptable publications include manuscripts (original studies, case reports), letters, books, book chapters, and even Internet-based media contributions that are deemed relevant by the promotions committee. The number of manuscripts/academic offerings required for advancement from Assistant to Associate Professor is usually less than the quantity required to advance from Associate Professor to Professor. As highlighted above, any grants acquired will be a significant factor in support of the candidate's promotion.
The clinical educator track differs from the clinical scholar track in that the educator track usually requires fewer manuscripts or other academic "deliverables" but does require these to be in the field of education. In addition, experience as a residency or fellowship director helps the candidates' profiles, as do the requisite teaching evaluations, which must be excellent. Furthermore, teaching at conferences and organizing conferences or educational events may also be helpful when approaching promotion. Participation in a major/national research project/trial, at least as a local principal investigator, is usually needed for promotion to Associate Professor and then to Professor. Such projects preferably involve the field of education.
As mentioned above, the “clinical excellence” track does not require publications; however, it does require documentation of clinical metrics that either enhance the reputation of the academic institution nationally/internationally or cause a distinct and positive change in the practice of medicine at the medical center itself, which will enhance the institution's reputation, patient flow, and/or income on a national level. The key component of documenting achievement is the provision of detailed clinical excellence metrics, usually in the form of tables, charts, and/or flow diagrams.
Regardless of the academic advancement track chosen by each faculty member, one important trend in the area of faculty promotion is the use of dynamic changes within various established indices of scholarly productivity. This paradigm shift provides more objective assessment of faculty progress over time, thus demonstrating continued and sustained effort (or lack thereof). For example, the so-called “h-delta” or the rate of increase of the “h-index” (or the “h-trajectory”) over time has been proposed to correlate with a researcher's potential for scientific impact. Within this paradigm, researchers with annual changes of <1.0 in their “h-index” have been said to have “average” scientific performance, those with “h-delta” of 1.0–2.0 were categorized as “above average,” those with “h-delta” of 2.0–3.0 defined as “excellent,” and finally those with “h-delta” of >3.0 proposed to be “stellar” performers. Similar paradigms can be easily extended to nonpublication achievements (e.g., resident education, clinical excellence, etc.).
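The "h-delta" categories described above can be sketched as a simple classification of the average annual change in an author's h-index. This is a hypothetical helper with invented data; the assignment of exact boundary values (1.0, 2.0, 3.0) to the higher category is our assumption, since the source ranges overlap at the boundaries:

```python
def h_delta_category(yearly_h):
    # yearly_h: h-index observed once per year, oldest first (invented data);
    # assumes at least two observations
    delta = (yearly_h[-1] - yearly_h[0]) / (len(yearly_h) - 1)
    if delta < 1.0:
        return "average"
    if delta < 2.0:
        return "above average"
    if delta < 3.0:
        return "excellent"
    return "stellar"

print(h_delta_category([12, 13, 13, 14]))   # delta ≈ 0.67 → "average"
print(h_delta_category([5, 8, 12, 15]))     # delta ≈ 3.33 → "stellar"
```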
In the final analysis, the promotion committee does not treat specific departmental or university requirements as final, "stand-alone" determinants. Instead, the promotion committee has the right (and indeed the duty) to make a determination for promotion using the candidate's entire dossier in a more "generalist" fashion. There is usually an element of intangible, cognitive perception/determination that is left to the committee's discretion. However, objective extramural referee assessments and letters of support are required and considered to be of high importance. Extramural referees who do not support the candidate for promotion can be damaging to the individual's candidacy. While applicants on the scholar and educator tracks need support letters from nationally recognized referees, faculty on the clinical excellence track may be allowed to list institutional, local, and regional referees. This, of course, leaves those in the scholar and educator tracks, at times, asking whether the clinical excellence track is an easier path to promotion. In fact, this is often not the case. The burden of demonstrating that a particular practitioner's clinical contributions have enhanced the reputation of his/her institution nationally, brought in more clinical revenue, improved operational efficiency, or resulted in superior patient outcomes may be difficult to meet.
Following rigid guidelines is generally discouraged by university administrators; however, the requirement for extramural reputation is immutable and must be evident. After the committee makes a decision, there is usually a review of that decision at the level of the Vice Dean and then the Dean of the medical school (or an equivalent administrative position at a medical center without a medical school). Negative determinations for promotion candidates may still be overturned at these senior administrative levels, whereas almost no positive votes in support of a candidate are overturned. This leaves many applicants for academic advancement with the impression that, when compared to top institutional administrators, the committees for promotion tend to be tougher on candidates. Consequently, the promotion process incorporates a "checks and balances" system that generally favors the candidates. In the end, academic medical centers are trying to respect all contributions to the tripartite academic mission of teaching, research, and service from the widely diverse faculty who propel that mission forward.
Conclusion
Traditional measures to quantify academic productivity based on “counts” (number of publications, number of citations, etc.) have numerous potential shortcomings. The digital revolution has enabled the creation of sophisticated databases and software tools that provide better methods of quantifying faculty research productivity and impact. Nearly impossible to obtain until recently, dynamic metrics of faculty performance add an additional layer of granularity to objective assessment of academic achievement. Increased competition for biomedical research funding, along with a growing emphasis by funding agencies and institutions on the demonstration of meaningful and transparent outcomes, has forced academic institutions to require more objective quantification of the impact of research on knowledge diffusion, synthesis into clinical applications, and public health outcomes. Therefore, it will become increasingly important to “go beyond the numbers” to evaluate and/or justify applications for funding or requests for promotion and/or tenure. Creating a narrative that provides proper contextual background and helps to better illustrate an academician's productivity and academic impact is far more meaningful than raw bibliometric data.
In today's competitive academic milieu, it is critical that faculty members proactively “curate” themselves. The term “curate” is based on the Latin word cura, loosely translated as “care.” Researchers need to establish their presence on author profile platforms, use contemporary strategies to enhance discoverability, consider multiple avenues of dissemination, reach beyond numbers to tell a story, and efficiently track research activities and output. Tailoring the academic productivity narrative for the intended purpose is one of the keys to meaningful communication with stakeholders and successful dissemination of academic output. Medical librarians offer substantial expertise in navigating the ever-expanding array of resources that exist to create this academic productivity narrative. While publication metrics can provide compelling documentation of faculty impact, no single metric is sufficient for measuring performance, quality, or influence by any individual author. Publication data constitute but a small portion of an author's academic and research story and do not provide a truly comprehensive picture of an academician's scientific reputation or influence. Other forms of scholarly activity regarded as meaningful and impactful include competitive grants, honors and recognition awards, patents and other forms of intellectual property, teaching activities, professional organization contributions, journal editorships, advisory board activities, mentoring efforts, and community engagement.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
References
Fuller CD, Choi M, Thomas CR Jr. Bibliometric analysis of radiation oncology departmental scholarly publication productivity at domestic residency training institutions. J Am Coll Radiol 2009;6:112-8.
Dietz JS, Bozeman B. Academic careers, patents, and productivity: Industry experience as scientific and technical human capital. Res Policy 2005;34:349-67.
Ramsden P. Describing and explaining research productivity. High Educ 1994;28:207-26.
McGrail MR, Rickard CM, Jones R. Publish or perish: A systematic review of interventions to increase academic publication rates. High Educ Res Dev 2006;25:19-35.
Fox MF, Mohapatra S. Social-organizational characteristics of work and publication productivity among academic scientists in doctoral-granting departments. J High Educ 2007;78:542-71.
Svider PF, Mauro KM, Sanghvi S, Setzen M, Baredes S, Eloy JA. Is NIH funding predictive of greater research productivity and impact among academic otolaryngologists? Laryngoscope 2013;123:118-22.
Prathap G. The 100 most prolific economists using the p-index. Scientometrics 2009;84:167-72.
Pandit JJ. Measuring academic productivity: Don't drop your 'h's!*. Anaesthesia 2011;66:861-4.
Narin F. Evaluative Bibliometrics: The Use of Publication and Citation Analysis in the Evaluation of Scientific Activity. Washington, D.C: Computer Horizons; 1976.
Holden G, Rosenberg G, Barker K. Bibliometrics: A potential decision making aid in hiring, reappointment, tenure and promotion decisions. Soc Work Health Care 2005;41:67-92.
Rezek I, McDonald RJ, Kallmes DF. Is the h-index predictive of greater NIH funding success among academic radiologists? Acad Radiol 2011;18:1337-40.
Yang J, Vannier MW, Wang F, Deng Y, Ou F, Bennett J, et al. A bibliometric analysis of academic publication and NIH funding. J Informetr 2013;7:318-24.
Archambault É, Vignola-Gagne É, Côté G, Larivière V, Gingrasb Y. Benchmarking scientific output in the social sciences and humanities: The limits of existing databases. Scientometrics 2006;68:329-42.
van den Brink M, Fruytier B, Thunnissen M. Talent management in academia: Performance systems and HRM policies. Hum Resour Manage J 2013;23:180-95.
Butler L. Using a balanced approach to bibliometrics: Quantitative performance measures in the Australian Research Quality Framework. Ethics Sci Environ Polit 2008;8:83-92.
Agasisti T, Catalano G, Landoni P, Verganti R. Evaluating the performance of academic departments: An analysis of research-related output efficiency. Res Eval 2012;21:2-14.
Min LH, Abdullah A, Mohamed AR. Publish or perish: Evaluating and promoting scholarly output. Contemp Issues Educ Res 2013;6:143-6.
Leong M, Bazoune A, Wallace DR, Tang V, Seering WP. Towards a Tool for Characterizing the Progression of Academic Research. In ASME 2011 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers; 2011.
Small H, Sweeney E, Greenlee E. Clustering the science citation index using co-citations. II. Mapping science. Scientometrics 1985;8:321-40.
Garfield E. The application of citation indexing to journals management. Curr Contents 1994;33:3-5.
Thomson-Reuters. Web of Science. New York, NY: Thomson Reuters; 2010.
Price DJ. Networks of scientific papers. Science 1965;149:510-5.
Bakkalbasi N, Bauer K, Glover J, Wang L. Three options for citation tracking: Google Scholar, Scopus and Web of Science. Biomed Digit Libr 2006;3:7.
Indiana University. Scholarometer: A Social Tool to Facilitate Citation Analysis and Help Evaluate the Impact of an Author's Publications; 2015. Available from: http://www.scholarometer.indiana.edu/. [Last accessed on 2015 Nov 20].
Younger P, Boddy K. When is a search not a search? A comparison of searching the AMED complementary health database via EBSCOhost, OVID and DIALOG. Health Info Libr J 2009;26:126-35.
Yong-Qin T. Individualized service of the EBSCOhost full text database. J Libr Inf Sci Agric 2005;9:32.
Antell K, Strothmann M, Chen X, O'Kelly K. Cross-examining Google Scholar. Ref User Serv Q 2013;52:279-82.
Goodman D. Web of Science (2004 Version) and Scopus. The Charleston Advisor 2005;6:5-21.
de Jong-Hofman M. Comparison of selecting, abstracting and indexing by COMPENDEX, INSPEC and PASCAL and the impact of this on manual and automated retrieval of information. Online Rev 1981;5:25-36.
Castillo C, Donato D, Gionis A. Estimating number of citations using author reputation. In: String Processing and Information Retrieval. New York: Springer; 2007.
Bird SJ. Self-plagiarism and dual and redundant publications: What is the problem? Commentary on 'Seven ways to plagiarize: Handling real allegations of research misconduct'. Sci Eng Ethics 2002;8:543-4.
Neill US. Publish or perish, but at what cost? J Clin Invest 2008;118:2368.
Birks Y, Fairhurst C, Bloor K, Campbell M, Baird W, Torgerson D. Use of the h-index to measure the quality of the output of health services researchers. J Health Serv Res Policy 2014;19:102-9.
Fye WB. Medical authorship: Traditions, trends, and tribulations. Ann Intern Med 1990;113:317-25.
Nichani AS. Whose manuscript is it anyway? The 'Write' position and number of authors. J Indian Soc Periodontol 2013;17:283-4.
Persson O, Glänzel W, Danell R. Inflationary bibliometric values: The role of scientific collaboration and the need for relative indicators in evaluative studies. Scientometrics 2004;60:421-32.
Green RG. Faculty rank, effort, and success: A study of publication in professional journals. J Soc Work Educ 1998;34:415-26.
Carpenter CR, Cone DC, Sarli CC. Using publication metrics to highlight academic productivity and research impact. Acad Emerg Med 2014;21:1160-72.
Batista PD, Campiteli MG, Kinouchi O. Is it possible to compare researchers with different scientific interests? Scientometrics 2006;68:179-89.
Redner S. How popular is your paper? An empirical study of the citation distribution. Eur Phys J B Condens Matter Complex Syst 1998;4:131-4.
Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci U S A 2005;102:16569-72.
Cantín M, Muñoz M, Roa I. Comparison between impact factor, eigenfactor score, and SCImago journal rank indicator in anatomy and morphology journals. Int J Morphol 2015;33:1183-8.
van Raan AF, Moed H, Van Leeuwen T. Scoping Study on the Use of Bibliometric Analysis to Measure the Quality of Research in UK Higher Education Institutions. Report to HEFCE by the Centre for Science and Technology Studies, Leiden University; 2007.
Bonzi S, Snyder HW. Motivations for citation: A comparison of self citation and citation to others. Scientometrics 1991;21:245-54.
Posner RA. The Theory and Practice of Citations Analysis, with Special Reference to Law and Economics. University of Chicago Law School, John M. Olin Law and Economics Working Paper; 1999.
Garfield E. The history and meaning of the journal impact factor. JAMA 2006;295:90-3.
Kanthraj GR. Journal impact factor. Indian J Dermatol Venereol Leprol 2006;72:322-5.
Kumar V, Upadhyay S, Medhi B. Impact of the impact factor in biomedical research: Its use and misuse. Singapore Med J 2009;50:752-5.
Zitt M, Small H. Modifying the journal impact factor by fractional citation weighting: The audience factor. J Am Soc Inf Sci Technol 2008;59:1856-60.
Alberts B. Impact factor distortions. Science 2013;340:787.
Cone DC, Gerson LW. Measuring the measurable: A commentary on impact factor. Acad Emerg Med 2012;19:1297-9.
Althouse BM, West JD, Bergstrom CT, Bergstrom T. Differences in impact factor across fields and over time. J Am Soc Inf Sci Technol 2009;60:27-34.
Seglen PO. Why the impact factor of journals should not be used for evaluating research. BMJ 1997;314:498-502.
Bollen J, Van de Sompel H, Hagberg A, Chute R. A principal component analysis of 39 scientific impact measures. PLoS One 2009;4:e6022.
Eyre-Walker A, Stoletzki N. The assessment of science: The relative merits of post-publication review, the impact factor, and the number of citations. PLoS Biol 2013;11:e1001675.
Yue W, Wilson CS, Rousseau R. The immediacy index and the journal impact factor: Two highly correlated derived measures. Can J Inf Libr Sci 2004;28:33-48.
Burton RE, Kebler R. The “half-life” of some scientific and technical literatures. Am Doc 1960;11:18-22.
Owlia P, Vasei M, Goliaei B, Nassiri I. Normalized impact factor (NIF): An adjusted method for calculating the citation rate of biomedical journals. J Biomed Inform 2011;44:216-20.
Gaster N, Gaster M. A critical assessment of the h-index. Bioessays 2012;34:830-2.
Sharma B, Boet S, Grantcharov T, Shin E, Barrowman NJ, Bould MD. The h-index outperforms other bibliometrics in the assessment of research performance in general surgery: A province-wide study. Surgery 2013;153:493-501.
Vanclay JK. On the robustness of the h-index. J Am Soc Inf Sci Technol 2007;58:1547-50.
Bornmann L, Daniel HD. What do we know about the h index? J Am Soc Inf Sci Technol 2007;58:1381-5.
Egghe L. Dynamic h-index: The Hirsch index in function of time. J Am Soc Inf Sci Technol 2007;58:452-4.
Oppenheim C. Using the h-index to rank influential British researchers in information science and librarianship. J Am Soc Inf Sci Technol 2007;58:297-301.
Roediger H. The h index in science: A new measure of scholarly contribution. Acad Obs 2006;19:1-6.
Glänzel W. On the opportunities and limitations of the H-index. Sci Focus 2006;1:383-91.
Kelly CD, Jennions MD. The h index and career assessment by numbers. Trends Ecol Evol 2006;21:167-70.
Van Raan AF. Comparison of the Hirsch-index with standard bibliometric indicators and with peer judgment for 147 chemistry research groups. Scientometrics 2006;67:491-502.
Zhang CT. Relationship of the h-index, g-index, and e-index. J Am Soc Inf Sci Technol 2010;61:625-8.
Egghe L. Theory and practice of the g-index. Scientometrics 2006;69:131-52.
Egghe L. An improvement of the h-index: The g-index. ISSI Newsl 2006;2:8-9.
Zhang CT. The e-index, complementing the h-index for excess citations. PLoS One 2009;4:e5429.
Schreiber M. An empirical investigation of the g-index for 26 physicists in comparison with the h-index, the A-index, and the R-index. J Am Soc Inf Sci Technol 2008;59:1513-22.
Tol RS. A rational, successive g-index applied to economics departments in Ireland. J Informet 2008;2:149-55.
Jin B. H-index: An evaluation indicator proposed by scientist. Sci Focus 2006;1:8-9.
Jin B, Liang L, Rousseau R, Egghe L. The R- and AR-indices: Complementing the h-index. Chin Sci Bull 2007;52:855-63.
Kosmulski M. MAXPROD – A new index for assessment of the scientific output of an individual, and a comparison. Cybermetrics 2007;11:1-5.
Wu Q. The w-index: A significant improvement of the h-index. arXiv preprint arXiv:0805.4650; 2008.
dos Santos Rubem AP, de Moura AL. Comparative analysis of some individual bibliometric indices when applied to groups of researchers. Scientometrics 2015;102:1019-35.
Alonso S, Cabrerizo FJ, Herrera-Viedma E, Herrera F. hg-index: A new index to characterize the scientific output of researchers based on the h-and g-indices. Scientometrics 2009;82:391-400.
Cabrerizo FJ, Alonso S, Herrera-Viedma E, Herrera F. q2-Index: Quantitative and qualitative evaluation based on the number and impact of papers in the Hirsch core. J Informetr 2010;4:23-8.
Schreiber M. A modification of the h-index: The hm-index accounts for multi-authored manuscripts. J Informetr 2008;2:211-6.
Bornmann L, Mutz R, Daniel HD. Are there better indices for evaluation purposes than the h index? A comparison of nine different variants of the h index using data from biomedicine. J Am Soc Inf Sci Technol 2008;59:830-7.
Cormode G, Ma Q, Muthukrishnan S, Thompson B. Socializing the h-index. J Informetr 2013;7:718-21.
Rousseau R, Ye FY. A proposal for a dynamic h-type index. J Am Soc Inf Sci Technol 2008;59:1853-5.
Ye F, Rousseau R. Probing the h-core: An investigation of the tail-core ratio for rank distributions. Scientometrics 2009;84:431-9.
Bergstrom CT, West JD, Wiseman MA. The Eigenfactor metrics. J Neurosci 2008;28:11433-4.
Fersht A. The most influential journals: Impact factor and Eigenfactor. Proc Natl Acad Sci U S A 2009;106:6883-4.
Crisp MG. Eigenfactor. Collect Manage 2008;34:53-6.
Bergstrom CT, West JD. Assessing citations with the Eigenfactor metrics. Neurology 2008;71:1850-1.
de Solla Price DJ. Networks of scientific papers. Science 1965;149:510-5.
Rizkallah J, Sin DD. Integrative approach to quality assessment of medical journals using impact factor, eigenfactor, and article influence scores. PLoS One 2010;5:e10204.
Garfield E. Use of Journal Citation Reports and Journal Performance Indicators in measuring short and long term journal impact. Croat Med J 2000;41:368-74.
Leydesdorff L. Can scientific journals be classified in terms of aggregated journal-journal citation relations using the journal citation reports? J Am Soc Inf Sci Technol 2006;57:601-3.
Ascaso FJ. Impact factor, eigenfactor and article influence. Arch Soc Esp Oftalmol 2011;86:1-2.
Delgado López-Cózar E, Robinson-García N, Torres-Salinas D. The Google Scholar experiment: How to index false papers and manipulate bibliometric indicators. J Assoc Inf Sci Technol 2014;65:446-54.
Delgado López-Cózar E, Robinson-García N, Torres-Salinas D. Manipulating Google Scholar Citations and Google Scholar Metrics: Simple, easy and tempting. arXiv preprint arXiv:1212.0638; 2012.
Jacsó P. Google Scholar author citation tracker: Is it too little, too late? Online Inf Rev 2012;36:126-41.
Lin J, Fenner M. Altmetrics in evolution: Defining and redefining the ontology of article-level metrics. Inf Stand Q 2013;25:20.
Boyack KW, Klavans R. Co-citation analysis, bibliographic coupling, and direct citation: Which citation approach represents the research front most accurately? J Am Soc Inf Sci Technol 2010;61:2389-404.
Bik HM, Goldstein MC. An introduction to social media for scientists. PLoS Biol 2013;11:e1001535.
Bahner DP, Adkins E, Patel N, Donley C, Nagel R, Kman NE. How we use social media to supplement a novel curriculum in medical education. Med Teach 2012;34:439-44.
PLOS. PLOS: Open for Discovery; 2015. Available from: https://www.plos.org/. [Last accessed on 2015 Dec 05].
Haustein S, Peters I, Sugimoto CR, Thelwall M, Larivière V. Tweeting biomedicine: An analysis of tweets and citations in the biomedical literature. J Assoc Inf Sci Technol 2014;65:656-69.
Klavans R, Boyack KW. Using global mapping to create more accurate document-level maps of research fields. J Am Soc Inf Sci Technol 2011;62:1-18.
Konkiel S, Piwowar H, Priem J. The imperative for open altmetrics. J Electron Publ 2014;17:1.
Hahnel M. Exclusive: Figshare, a new open data project that wants to change the future of scholarly publishing. Impact Soc Sci Blog; 2012 Jan 18.
Bunton SA, Mallon WT. The continued evolution of faculty appointment and tenure policies at U.S. medical schools. Acad Med 2007;82:281-9.
Kubiak NT, Guidot DM, Trimm RF, Kamen DL, Roman J. Recruitment and retention in academic medicine – What junior faculty and trainees want department chairs to know. Am J Med Sci 2012;344:24-7.
Bickel J. What can be done to improve the retention of clinical faculty? J Womens Health (Larchmt) 2012;21:1028-30.
Krupat E, Pololi L, Schnell ER, Kern DE. Changing the culture of academic medicine: The C-Change learning action network and its impact at participating medical schools. Acad Med 2013;88:1252-8.
Villablanca AC, Beckett L, Nettiksimmons J, Howell LP. Improving knowledge, awareness, and use of flexible career policies through an accelerator intervention at the University of California, Davis, School of Medicine. Acad Med 2013;88:771-7.
Anderson MG, D'Alessandro D, Quelle D, Axelson R, Geist LJ, Black DW. Recognizing diverse forms of scholarship in the modern medical college. Int J Med Educ 2013;4:120.
Pickering CR, Bast RC Jr., Keyomarsi K. How will we recruit, train, and retain physicians and scientists to conduct translational cancer research? Cancer 2015;121:806-16.
Evans DC, Firstenberg MS, Galwankar SC, Moffatt-Bruce SD, Nanda S, O'Mara MS, et al. International journal of academic medicine: A unified global voice for academic medical community. Int J Acad Med 2015;1:1.
[Figure 1], [Figure 2], [Figure 3], [Figure 4]