Research impact (Bibliometrics & Altmetrics)

The content of this web site is licensed under the Creative Commons License (CC BY-NC-ND, Lebrand C., BiUM library, 2016) unless otherwise noted.

Bibliometrics are designed to quantitatively assess the quality of published research using scientometric indicators. Various metrics exist, from conventional indicators (Journal Impact Factor (JIF), number of published articles, citations per publication, h-index, …), to the more recent altmetrics, which measure the number of times published articles are downloaded or mentioned on the web.

 

Definition of citation-based metrics: Journal Impact Factor, Citation Impact indicator, h-index

 

Journal Impact Factor (JIF)

The InCites™ Journal Citation Reports application from Thomson Reuters provides the official Journal Impact Factor, with journals ranked by discipline.

  • definition & calculation

    The Journal Impact Factor (JIF) is the average number of times articles from the journal published in the past two years have been cited in the JCR year. It is calculated by dividing the number of citations in the JCR year by the total number of articles published in the two previous years. An Impact Factor of 2.5 means that, on average, the articles published one or two years ago have been cited two and a half times. The citing works are from journals, proceedings, or books indexed by Web of Science.

    The 2014 impact factor of a journal would be calculated as follows:

    2014 impact factor = A / B

    where:

    A = the number of times that all items published in that journal in 2012 and 2013 were cited by indexed publications during 2014;

    B = the total number of citable items published by that journal in 2012 and 2013.

    (Note that 2014 impact factors are actually published in 2015; they cannot be calculated until all of the 2014 publications have been processed by the indexing agency).
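
To make the A/B arithmetic concrete, here is a minimal Python sketch; the function name and the citation and item counts are invented for illustration and are not taken from JCR data.

```python
def journal_impact_factor(citations_in_jcr_year: int,
                          citable_items_prev_two_years: int) -> float:
    """JIF = A / B: citations received in the JCR year by items published in
    the two preceding years, divided by the number of citable items published
    in those two years."""
    return citations_in_jcr_year / citable_items_prev_two_years

# Hypothetical journal: items from 2012-2013 were cited 1200 times in 2014,
# and the journal published 480 citable items over 2012-2013.
print(journal_impact_factor(1200, 480))  # 2.5
```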

The journal impact factor (JIF) is often misused to evaluate the quality of published work. This index only estimates the quality of a scientific journal as a whole, not the quality of its articles taken individually. A journal's high impact factor often results from a few highly cited items that artificially raise the overall average, while the citation counts of the majority of items in the same journal fall well below the average given by the JIF (Wrong Number: A closer look at Impact Factors). The JIF therefore does not give a correct estimate of the number of citations an article will receive when published in that journal; a calculation based on the median rather than the mean would be a better basis for such an estimate (Wrong Number: A closer look at Impact Factors). Because it does not predict the citation performance of individual articles and is misused in evaluating researchers, a movement in academic research now calls for abandoning this factor, signing the 2013 San Francisco Declaration on Research Assessment (DORA) and adhering to the Leiden Manifesto. Several institutions wishing to develop a system for responsible evaluation consider that the JIF is perverting the research system and research quality (see the metric tide: executive summary).

Citation Impact indicator

The Web of Science application from Thomson Reuters and the Google Scholar application provide Citation Impact indicators.

  • definition & calculation

    One of the most commonly used bibliometric indicators for evaluating sets of documents is the Citation Impact indicator (also called “citations per publication” or “average citation rate”).

    Citation Impact of a set of documents is calculated by dividing the total number of citations by the total number of publications. Citation Impact shows the average number of citations that a document has received.

    Citation Impact has been extensively used as a bibliometric indicator in research performance evaluation and can be applied at all organizational levels (author, institution, country/region, research field or journal). However, there are limitations to the indicator. For example, it ignores the total volume of research outputs.
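
As a minimal sketch, assuming an invented set of citation counts, the Citation Impact of a set of documents is simply their mean citation count:

```python
def citation_impact(citation_counts: list[int]) -> float:
    """Citation Impact = total citations / total publications."""
    return sum(citation_counts) / len(citation_counts)

# Hypothetical author with five papers cited 12, 3, 0, 45 and 5 times:
print(citation_impact([12, 3, 0, 45, 5]))  # 13.0
```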

While the Citation Impact indicator is a better measure than the journal impact factor (JIF) for assessing the intrinsic quality of a published work, researchers should remain cautious when interpreting these numbers. Citation counts can differ strongly depending on how they were calculated: Web of Science only includes citations originating from articles published in journals with an official impact factor, while Google Scholar also counts citations from the grey literature (all types of magazines, theses, …). In addition, the citation rate clearly depends on the size of the community working on similar research and therefore on the discipline in which the research is carried out. Citation counts that normalize the citation rate of an item by discipline (Normalized Citation Impact) are thus more pertinent and should be used for an equitable evaluation process (The Leiden Manifesto). Moreover, the citation impact of an article should be weighted by authorship position to more accurately assess the level of participation and involvement of the researcher. Finally, the Citation Impact indicator also seems to depend on the prestige of the journal in which the work is published: the study ‘Clinical Trial Registration — Looking Back and Moving Ahead’ shows that the same clinical trial published in various journals received a number of citations per publication that varied with the JIF of the journals (https://stuartcantrill.com/2016/01/23/imperfect-impact/). An index that normalizes the citation rate of an article by the journal in which it is published (Journal Normalized Citation Impact) puts this influence into perspective and allows the performance of an article within a given journal to be appreciated.

Normalized citation impact (NCI)

The InCites 2.1 application from Thomson Reuters provides the Normalized Citation Impact by discipline.

Citation rates vary across disciplines, citations grow over time, and different publication types have different citation behaviors. For accurate and fair research assessment, citation data should be normalized by discipline, year and publication type.

  • calculation

    The Normalized Citation Impact (NCI) of a single publication is calculated by dividing the actual count of citing items by the expected citation rate (baseline) for publications with the same document type, year of publication and subject area. When a document is assigned to more than one subject area, an average of the ratios of the actual to expected citations is used. The NCI of a set of documents, for example, the collected works of an individual, institution or country, is the average of the NCI values for all the documents in the set.
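
The sketch below illustrates this calculation with invented baselines (the real expected citation rates come from the InCites database). The same ratio computed against a journal/year/document-type baseline, rather than a subject-area baseline, gives the Journal Normalized Citation Impact described further below.

```python
def publication_nci(actual_citations: int, baselines: list[float]) -> float:
    """NCI of one publication: actual citations divided by the expected
    citation rate (baseline) for the same document type, publication year
    and subject area; with several subject areas, the ratios are averaged."""
    ratios = [actual_citations / b for b in baselines]
    return sum(ratios) / len(ratios)

def set_nci(publication_ncis: list[float]) -> float:
    """NCI of a set of documents: the average of the documents' NCI values."""
    return sum(publication_ncis) / len(publication_ncis)

# Hypothetical paper cited 30 times, assigned to two subject areas whose
# expected citation rates are 20 and 12:
print(publication_nci(30, [20.0, 12.0]))  # 2.0, i.e. twice the world average
```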

NCI is a valuable and unbiased indicator of impact irrespective of age, subject focus or document type. It therefore allows comparisons between entities of different sizes and with different subject mixes. An NCI value of one represents performance on par with the world average, values above one are considered above average and values below one are considered below average. An NCI value of two represents twice the world average.

NCI is an ideal indicator for benchmarking at all organizational levels (author, institution, region, etc.). NCI can also be used to identify impactful subsets of documents and to assess any research activity. For example, an institution may use the NCI to determine which collaborations are the most impactful, to identify new potential collaboration opportunities, to compare the performance of up-and-coming researchers with that of established ones, or to aid faculty recruitment by assessing candidates. A funding organization may use the NCI as a quantitative performance indicator to monitor the performance of funded projects or to assess the track record of research teams applying for new funding.

Journal normalized citation impact

The InCites 2.1 application from Thomson Reuters provides the Journal Normalized Citation Impact by discipline.

The Journal Normalized Citation Impact (JNCI) indicator is a similar indicator to the Normalized Citation Impact, but instead of normalizing per subject area or field, it normalizes the citation rate for the journal in which the document is published.

  • calculation

    The Journal Normalized Citation Impact of a single publication is the ratio of the actual number of citing items to the average citation rate of publications in the same journal in the same year and with the same document type. The JNCI for a set of publications is the average of the JNCI for each publication.

The JNCI indicator can reveal how a publication (or a set of publications) performs relative to other work published in a given journal (or set of journals). It can answer questions such as “How do my papers perform in the journals in which I publish?” If the numerical value of the JNCI exceeds one, the assessed research entity is performing above average; if it is less than one, it is performing below average.

h-index

The Web of Science application from Thomson Reuters and the Google Scholar application provide the h-index indicator.

  • definition & calculation

    The h-index (also known as the Hirsch index) was introduced by J. E. Hirsch in 2005 and can be defined as follows: a researcher has an h-index of h if he/she has at least h publications that have each received at least h citations. For example, researcher A has an h-index of 13 if he/she has published at least 13 documents that have each received at least 13 citations. Its popularity as a bibliometric indicator derives from the fact that it combines productivity (number of documents) and impact (number of citations) in one index.
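
A minimal sketch of this definition, using an invented citation list; the i10-index used by Google Scholar (see below) can be computed in the same spirit.

```python
def h_index(citation_counts: list[int]) -> int:
    """Largest h such that at least h publications have >= h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the rank-th most cited paper still has >= rank citations
        else:
            break
    return h

# Hypothetical researcher with six papers:
papers = [25, 8, 5, 3, 3, 0]
print(h_index(papers))  # 3: three papers have at least 3 citations each

# Google Scholar's i10-index is simply the number of papers with >= 10 citations:
print(sum(1 for c in papers if c >= 10))  # 1
```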

The h-index can be applied at any level of aggregation (author, institution, journal, etc.) and reveals how citations are distributed over a set of documents. At the author level, it is considered an indicator of a researcher’s lifetime scientific achievements. Clear advantages of the h-index are that it is mathematically simple, that it encourages a large body of impactful research while discouraging the publication of unimportant output, and that single highly cited publications do not influence it (unlike the Citation Impact).

It is important to keep in mind that the h-index grows with the duration of a researcher’s career. Early career researchers are therefore at a disadvantage compared to more senior researchers, since the latter have had time to accumulate more articles and more citations per publication. The h-index also varies by field: life scientists top out at around 200, physicists at around 100 and social scientists at 20–30 (Hirsch, J. E. Proc. Natl Acad. Sci. USA 102, 16569–16572, 2005). Finally, the h-index depends on the database used, with higher values in Google Scholar than in Web of Science.

References:

To learn more about how these indicators are calculated, read the InCites_MetricsGuideBook_web

 

Definition of Web-based metrics: altmetrics

Video “A beginner’s guide to altmetrics”

Altmetrics, proposed as an alternative to more traditional citation metrics, are a new class of measures that indirectly assess the impact of an article based on the number of views of the paper, discussions on the web and social networks, bookmarks of the paper, citations and recommendations.

  • definition & calculation

    Public Library of Science and Impactstory have created similar classifications of altmetrics based on the following criteria:

    • Views – HTML views and PDF downloads
    • Discussion – journal comments, blogs, Wikipedia, Twitter, Facebook and other social media
    • Saved – Mendeley, CiteULike and other social bookmarks
    • Cited – citations in the scholarly literature, tracked by Web of Science, Scopus, Google Scholar, Wiki and others
    • Recommended – for example used by the journal F1000Prime

Altmetrics are described as a new class of metrics that can help capture societal impact, such as the influence of research on public policy or culture, the introduction of lifesaving health interventions, and contributions to innovation and commercialization (EDUCAUSE Review 51, no. 2, 2016). However, altmetrics may show influence or engagement rather than direct impact on the progress of science. In addition, altmetrics do not indicate whether the attention a publication receives is positive or negative.

Therefore, new metrics are still needed to better assess the impact of a given piece of research on innovation and therapeutic progress. “Indicators such as patent citations and clinical guideline citations may have potential in some fields for quantifying impact and progression” (the metric tide: executive summary). New metrics are also needed to assess the transparency, quality and reproducibility of published studies in support of Open Science (see the badges to acknowledge Open Practices from the COS and the Amsterdam Call for Action on Open Science).

Bibliometrics toolkit

  • How to assess the impact of my research?
  • What are the different kinds of metrics (indicators) used classically in academia?
  • What are altmetrics?
  • How to evaluate research output in a more ethical way?
  • How to change assessment, reward and evaluation systems for a transition to Open Science?
Research & Publications Officer
Dr. Cécile Lebrand
Tél. +41 (0)21 314 50 81
Cecile.lebrand@chuv.ch
 

Services

FBM/CHUV researchers who would like to assess their research performance using advanced metrics can contact us. We will provide guidance on how to use classical indicators, normalized indicators and altmetrics to improve your CV and visibility, and help you reach a better understanding and a fair use of metrics. Training sessions on advanced bibliometrics are provided by our service on a regular basis (check the CHUV calendar).

Tools

Ask us about journal indicators (JIF) that take into account the ranking of journals by discipline (InCites™ Journal Citation Reports), article indicators (Citation Impact for each individual article using Web of Science, Google Scholar or Altmetric), and author indicators (Citation Impact for a set of publications and h-index for an author using Web of Science and Google Scholar).

Consult us about the innovative tool InCites 2.1 from Thomson Reuters, which we have tested and selected to help researchers assess the strength of their research more precisely. This application offers normalized citation impact and journal normalized citation impact indicators as well as collaboration indicators. Altmetric Explorer is also freely available through our service; it allows searching for the publications of a given author, journal, topic or PubMed search and downloading altmetric data for these publications. Our service can provide you with extensive metric analyses at the individual level (preparation of job/funding applications) or at the department/institute level (preparation of department/institute annual reports).


Journal Impact Factor

To find the journal indicators for a journal of interest (JIF, Eigenfactor score, …), search for the journal using the “Go to Journal Profile” option in InCites™ Journal Citation Reports. In the list of results, the JIF of the journal is given for each year.

To compare journals:

  • To rank a set of known journals against each other, use the “select Journals” option.
  • To rank journals by discipline, use the “select discipline” option.
  • To rank journals by discipline and JIF quartile, use the “select discipline” and “JIF quartile” options.

Citations

To find the citation impact of an individual article, search for the publication using the Search or Cited reference search options in the WoS™ Core Collection. In the list of results, the number of citations for each individual article is given under ‘Times cited’. By clicking on this number, you can see all the details of the citing articles.

h-index

To find individual author indicators (citation impact and h-index), search for a name using the ‘Author search’ option in the WoS™ Core Collection. Enter the author name and add author name variants if required. Optionally, you can select the author’s organization if you wish to limit your search. In the list of results, select the publication records corresponding to the author of interest by ticking the appropriate boxes. To view the author indicators (citation impact and h-index), select ‘Create citation report’ at the top right.

ResearcherID helps you manage your author indicators and can easily be connected to your ORCID iD. After creating your ResearcherID in ‘My Tools’, you can regularly update your list of publications indexed by WoS or add your papers manually. For more details on how to create your ResearcherID, click here.

Normalized citations: InCites 2.1

Video from Thomson Reuters:

Advanced bibliometrics is available through Thomson Reuters’ separate product InCites 2.1, which can help evaluate output, performance and trends at the individual, laboratory, department, institution and global levels. InCites 2.1 allows you to assess an author’s performance in a specific discipline or in a specific journal, and to evaluate the performance of your collaborative network.

To learn more about how Web of Science and InCites indicators are calculated, read the InCites_MetricsGuideBook_web


Citations

To find the citation impact of an individual article, search for the publication using the ‘Search’ option in Google Scholar. In the list of results, the citation count for each individual article is given as ‘Cited by’. By clicking on ‘Cited by’, you can see all the details of the citing articles.

h-index

To find individual author indicators (citation impact, h-index and i10-index, i.e. the number of publications with at least 10 citations), search for a name using the ‘Search’ option in Google Scholar. If a user profile exists for the author, it will show up at the top of the results list. Click on the user profile to view the author’s indicators.

A user profile in Google Scholar helps you manage your author indicators. For more details on how to create your Google Scholar profile, click here.

 

Altmetric

 


Article indicator

What does Altmetric do?

Altmetric is a free web service that collects and collates information from dozens of different platforms and websites to provide researchers with a single, visually informative view of the online activity surrounding their publications.

Altmetric begins collating the online mentions and shares of your research across several sources (blogs, social media, mainstream media, …) as soon as it is published, giving you immediate feedback on how it is being received that complements your article citation data.

The Altmetric score and donut make it easy to identify how much and what type of attention a research output has received. Donuts are regularly used on publisher, institutional repository and laboratory publication pages. Clicking on the donut shows the detailed record of the attention the publication has received. Altmetric also offers researchers the possibility to add a free bookmarklet to obtain article-level metrics for their publications.

Impactstory

 


Author indicators

Impactstory is an open-source, free, web-based tool that helps scientists explore and share the diverse impacts of all their research products. By helping scientists tell data-driven stories about their altmetric impact, it aims to build a new scholarly reward system.

Impactstory is committed to:

  • open source
  • free and open data, to the extent permitted by data providers
  • radical transparency and open communication

Impactstory helps you discover altmetric indicators for your publications and can easily be connected to your ORCID iD.

 

Responsible metrics and Open Science

Recruitment, promotion and tenure track decisions often rely on classical bibliometric indicators for the assessment of research quality and impact. Although quantitative measures may be necessary in the evaluation process, we encourage individual researchers to be cautious and deepen their understanding of the metrics that are generally used.

DORA & Leiden Manifesto

In recent years, a general reflection has led to the definition of more impartial assessment methods (see the 2013 San Francisco Declaration on Research Assessment, DORA, and the Leiden Manifesto for research metrics). The Leiden Manifesto sets out ten principles to guide research evaluation:

  • Quantitative evaluation should support qualitative, expert assessment.
  • Measure performance against the research missions of the institution, group or researcher. Programme goals should be stated at the start, and the indicators used to evaluate performance should relate clearly to those goals.
  • Protect excellence in locally relevant research.
  • Keep data collection and analytical processes open, transparent and simple.
  • Allow those evaluated to verify data and analysis.
  • Account for variation by field in publication and citation practices.
  • Base assessment of individual researchers on a qualitative judgement of their portfolio.
  • Avoid misplaced concreteness and false precision.
  • Recognize the systemic effects of assessment and indicators. Indicators change the system through the incentives they establish. These effects should be anticipated. This means that a suite of indicators is always preferable — a single one will invite gaming and goal displacement.
  • Scrutinize indicators regularly and update them.



Metric tide

We also strongly encourage you to read the independent review of the role of metrics in research assessment and management, “the metric tide: executive summary”. This report recommends involving peer review throughout the research assessment process and expresses strong reservations about the abuse of metrics.

Metrics should support, not supplant, expert judgement. Peer review is not perfect, but it is the least worst form of academic governance we have, and should remain the primary basis for assessing research papers, proposals and individuals.

Responsible metrics are proposed as a way of framing appropriate uses of quantitative indicators in the governance, management and assessment of research, and should encompass the following dimensions:

  • Robustness: basing metrics on the best possible data in terms of accuracy and scope;
  • Humility: recognising that quantitative evaluation should support – but not supplant – qualitative, expert assessment;
  • Transparency: keeping data collection and analytical processes open and transparent, so that those being evaluated can test and verify the results;
  • Diversity: accounting for variation by field, and using a variety of indicators to support diversity across the research system;
  • Reflexivity: recognising systemic and potential effects of indicators and updating them in response.



Call for Open Science

Importantly, in the Amsterdam Call for Action on Open Science (April 5–6, 2016), Europe calls for new assessment, reward and evaluation systems to remove barriers to open science.

Open science presents the opportunity to radically change the way we evaluate, reward and incentivise science. Its goal is to accelerate scientific progress and enhance the impact of science for the benefit of society. By changing the way we share and evaluate science, we can provide credit for a wealth of research output and contributions that reflect the changing nature of science.

The assessment of research proposals, research performance and researchers serves different purposes, but often seems characterised by a heavy emphasis on publications, both in terms of the number of publications and the prestige of the journals in which the publications should appear (citation counts and impact factor). This emphasis does not correspond with our goals to achieve societal impact alongside scientific impact. The predominant focus on prestige fuels a race in which the participants compete on the number of publications in prestigious journals or monographs with leading publishers, at the expense of attention for high-risk research and a broad exchange of knowledge. Ultimately this inhibits the progress of science and innovation, and the optimal use of knowledge.

Concrete actions have to be taken to create new systems that really deal with the core of knowledge creation and account for the impact of scientific research on science and society at large, including the economy, and incentivise citizen science.

  • National authorities, European Commission and research funders: reform reward systems, develop assessment and evaluation criteria, or decide on the selection of existing ones (e.g. DORA for evaluations and the Leiden Manifesto for research metrics), and make sure that evaluation panels adopt these new criteria.
  • Research Performing Organisations, research funders and publishers: further facilitate and explore the use of so-called alternative metrics where they appear adequate to improve the assessment of aspects such as the impact of research results on society at large. Experiment with new approaches for rewarding scientific work.
  • Research communities, research funders and publishers: develop and adopt citation principles for publications, data and code, and other research outputs, which include persistent identifiers, to ensure appropriate rewards and acknowledgment of the authors.
  • Research communities and publishers: facilitate and develop new forms of scientific communication and the use of alternative metrics.

In conclusion, using responsible and fair bibliometrics in the evaluation of research quality and impact involves the careful use of a whole range of complementary indicators. Best practice uses multiple indicators to provide a more robust and pluralistic picture. Normalized indicators are required, and the most robust normalization method is based on percentiles: each paper is weighted on the basis of the percentile to which it belongs in the citation distribution of its field (the top 1%, 10% or 20%, for example). Importantly, responsible metrics should respect the principles of robustness, humility, transparency, diversity and reflexivity. Last but not least, we should not forget that the only way to properly assess the quality of a published work is actually to read it…
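
As a rough sketch of the percentile idea, assuming an invented field-wide citation distribution and one simple percentile convention among several in use:

```python
def citation_percentile(paper_citations: int, field_citations: list[int]) -> float:
    """Percentage of papers in the field cited no more often than this paper."""
    at_or_below = sum(1 for c in field_citations if c <= paper_citations)
    return 100.0 * at_or_below / len(field_citations)

# Invented citation counts for ten papers of the same field, year and type:
field = [0, 1, 1, 2, 3, 5, 8, 13, 40, 120]
print(citation_percentile(13, field))  # 80.0: the paper sits in the top 20%
```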

Point of view: How open science helps researchers succeed (McKiernan et al. (2016) eLife)

 

Bibliometrics evaluation at the Faculty of Biology and Medicine

Bibliomics Officer
Dr. Nathalie Magnenat
Tél. +41 (0)21 692 50 92
bibliomics@unil.ch

Bibliometric evaluation is one of the criteria taken into consideration for recruitment, promotion, renewal of terms and resource allocation for UNIL and CHUV researchers working at the FBM. It is important to emphasize that the bibliometric evaluation at the FBM is only one factor among others in the overall assessment of the quality of FBM scientific production and does not replace qualitative peer review. The bibliometric evaluation process at the FBM is transparent and uses a whole range of complementary, normalized indicators (consult the FBM bibliometrics evaluation unit web site).

Bibliometric analyses are based on the publications entered in the MyUNIL application and validated in Serval. It is therefore essential to keep your publications up to date in the UNIL/CHUV institutional repository Serval.

Only articles published in journals with an official Thomson Reuters journal impact factor (JIF) are considered. For the list of journals with official JIFs, consult the InCites Journal Citation Reports (JCR). Publications from the current year, “Epub ahead of print” publications, items of type “author reply”, books and book chapters, and conference abstracts are not considered for evaluation. Publications on which you are co-first, co-second, or last co-corresponding author should be mentioned by writing to bibliomics@unil.ch.

Updated: 20.10.2023