Responsible metrics

It is important to apply metrics responsibly. Because bibliometrics measure impact through citation counts, they are not always a measure of research quality, and they can be susceptible to manipulation and misuse. For instance, bibliometrics can favour established researchers over early career researchers, ignore the context of a citation (e.g. negative citations), prioritise journal articles over other publication types, and be less meaningful for disciplines with varied research outputs and no tradition of journal article publication or citation.

Here at the University, we are committed to best practice in research assessment, moving beyond purely metrics-based evaluation and taking an active stand within the sector to value quality research regardless of how and where it is published. Go to our statement on the responsible use of metrics.

We are also a signatory of DORA, the San Francisco Declaration on Research Assessment.

In addition to DORA, the research community has made further statements on best practice including: 

Common research metrics

There is a wide variety of metrics, and they can be useful in different ways. Here are a few of the most common, but you can find out more about these and other metrics, including how they are calculated and how they should be applied, through the Metrics Toolkit.

Author metrics

Author metrics can show your contribution to research through the number of articles, conference papers, books, or other outputs you have published. These metrics can demonstrate career development or support funding bids, but they should be used alongside other criteria.   

Types of author metrics include: 
  • h-index

    What is it?

The number of papers ‘h’ that have each been cited at least ‘h’ times. So, if you have an h-index of 15, you have 15 papers that have each been cited at least 15 times. It is a numerical indicator of how productive and influential a researcher is and can show the contribution the researcher has made to a particular field.

    Where can I find it?

    Google Scholar, Scopus, and Web of Science.

  • Times cited

    What is it?

    The number of times an article or other research output is cited by other articles or outputs.      

    Where can I find it?

Google Scholar, Scopus, and Web of Science.
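The h-index described above is straightforward to compute by hand or in code. As an illustration only, here is a minimal sketch in Python; the citation counts are made-up figures, not data from any real author profile:

```python
def h_index(citations):
    """Return the h-index: the largest h such that h papers
    have each been cited at least h times."""
    h = 0
    # Rank papers from most to least cited, then walk down the list.
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank  # at least `rank` papers have `rank` or more citations
        else:
            break
    return h

# Hypothetical citation counts for one author's eight papers
papers = [25, 18, 15, 15, 9, 4, 2, 1]
print(h_index(papers))  # 5: five papers have at least 5 citations each
```

Note that databases index different sets of publications, so Google Scholar, Scopus, and Web of Science will usually report different h-indexes for the same author.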

Journal, article and output metrics

Journal, article and output metrics are measures of a journal, article or output’s citations and may influence where you decide to publish. However, they vary enormously depending on the citation patterns within a discipline and should only be compared between journals, articles or outputs within the same discipline.  

Types of journal metrics include: 
  • CiteScore

    What is it?

In any given year, the CiteScore of a journal is the number of citations received in that year and the previous three years for documents published in the journal during that four-year period, divided by the total number of documents (articles, reviews, conference papers, book chapters, and data papers) published in the journal during the same four-year period.

    Where can I find it?

In the ‘Source Detail’ pages of Scopus.

  • Field Weighted Citation Impact (FWCI)

    What is it?

    FWCI is the number of citations received by an output divided by the expected number of citations for similar outputs, i.e. those of the same type (article, book, conference paper) in the same discipline and of the same age.  

    If the FWCI is exactly 1, it means that the output performs just as expected for the global average. If it is more than 1 then it is cited more than expected. For example 1.48 means 48% more citations than expected, and if it is less than 1, it is cited less than expected according to the global average.  

    Where can I find it?

    FWCI is calculated in SciVal using data from Scopus.  

  • Journal impact factor (JIF)

    What is it?

JIF is the average number of citations in a given year to articles published in a particular journal during the preceding two years. It is equal to the number of citations received in a single year for articles published in the journal during the preceding two years, divided by the total number of articles published in the journal during those two years. It is based on publications indexed in Web of Science.

    Where can I find it?

In the Journal Citation Reports in Web of Science.

  • Scimago journal rank (SJR)

    What is it?

    SJR is the average number of ‘weighted’ citations received in a particular year by articles published in a given journal in the preceding three years based on journals indexed in Scopus. It is calculated annually.  

    Where can I find it?

In the ‘Source Detail’ pages of Scopus.

  • Source normalized impact per paper​ (SNIP)

    What is it?

    SNIP measures field-specific differences in citation practices and therefore enables the direct comparison of journals in different fields. It is calculated annually using data from Scopus.  

    Where can I find it?

In the ‘Source Detail’ pages of Scopus.
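CiteScore, JIF, and FWCI all reduce to simple ratios. The sketch below restates those ratios in Python purely to make the arithmetic concrete; every number is invented for illustration and does not describe any real journal or output:

```python
def citescore(citations_4yr, documents_4yr):
    """CiteScore: citations received over a four-year window to documents
    published in that window, divided by the number of those documents."""
    return citations_4yr / documents_4yr

def impact_factor(citations_this_year, articles_prev_two_years):
    """JIF: citations received this year to articles published in the
    preceding two years, divided by the number of those articles."""
    return citations_this_year / articles_prev_two_years

def fwci(actual_citations, expected_citations):
    """FWCI: actual citations divided by the expected citations for
    similar outputs (same type, discipline, and age)."""
    return actual_citations / expected_citations

# Made-up figures for illustration only
print(citescore(1200, 400))    # 3.0
print(impact_factor(500, 250)) # 2.0
print(fwci(148, 100))          # 1.48, i.e. 48% more citations than expected
```

Because the windows, document types, and source databases differ between these metrics, the resulting numbers are not comparable with one another, only with the same metric for other journals or outputs in the same discipline.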

Group and institutional metrics

Group and institutional metrics can be used to examine the performance of research groups and institutions and compare them with similar groups in other institutions.  

Some common tools for group metrics are SciVal and InCites. SciVal uses data from Scopus while InCites uses data from the Web of Science.  

These metrics are used to rank universities by various criteria; examples include the THE World University Rankings, the Academic Ranking of World Universities, and the QS World University Rankings.

For more information about how Manchester Met uses these tools, please email Research Intelligence at research-intel@mmu.ac.uk.

Alternative metrics

Alternative metrics measure the usage of research and scientific results outside the traditional academic environment. They have been developed in response to the limitations of bibliometrics and are designed to complement those metrics, not replace them. Where bibliometrics use citation patterns to quantify impact, alternative metrics offer a qualitative approach: they look at how research is influencing policy, society, culture, the environment, and technology. Some common tools include Altmetric and PlumX.

Alternative metrics analyse social media profiles, news articles, blogs and researcher profile sites such as Academia.edu and ResearchGate.

Deciding where to publish

Choosing a journal or publisher can be difficult. Here are some tips to help you identify trusted publishers. Look at: 

  • What you read: do you read and cite the journal or publication? 

  • Other researchers: do prominent researchers in your field and your colleagues submit to the journal or publisher?  

  • Indexing: is the journal indexed in known databases such as Web of Science, Scopus, or the Directory of Open Access Journals? 

  • Peer review: is there a clear peer review process? What kind? 

  • Editorial board: does the editorial board contain researchers you have heard of and would want to submit work to? 

Think. Check. Submit. is a useful tool to help you think through what to consider, such as the peer review process, the editorial board, open access options, and fees.