The GII-GRIN-SCIE (GGS) Conference Rating


Download The GGS Conference Rating 2021 (Excel .xlsx file) - last updated: October 24, 2021

Goals

Conference papers are important to computer scientists, and research evaluation is important to universities and policy makers. This initiative is sponsored by GII (Group of Italian Professors of Computer Engineering), GRIN (Group of Italian Professors of Computer Science), and SCIE (Spanish Computer-Science Society); its goal is to develop a unified rating of computer-science conferences.

Disclaimer

We realize that using bibliometric indicators may introduce distortions in the evaluation of scientific papers. We also know that the source rankings may have flaws and contain errors. It is therefore unavoidable that the unified rating we publish in turn contains errors or omissions. Our goal was to keep these errors to a minimum, by leveraging all of the indicators available at the sources and by combining them in such a way as to reduce distortions. We expect that in the majority of cases our algorithm classifies a conference in a way that closely reflects its standing within the international scientific community. There may be cases in which this is not true, and these can be handled using feedback from the community.


Changelog

  • September 1, 2021 - the 2021 update to the GII-GRIN-SCIE rating is online. It incorporates the new CORE 2021 ratings and a full update of Microsoft Academic. IMPORTANT: the LiveSHINE project was discontinued after Google Scholar stopped granting automated bulk access to information about conferences and their citations; therefore, this update could not refresh the Google Scholar-derived indicators.
  • June 8, 2018 - the 2018 update to the GII-GRIN-SCIE rating is online. This incorporates the new CORE 2018 ratings, and a full update of Microsoft Academic and LiveSHINE.
  • June 1, 2017 - final version for 2017;
  • April 22, 2017 - third preliminary version, incorporates the new CORE 2017 rating;
  • March 3, 2017 - second preliminary version of the 2017 update, to incorporate a number of changes in Microsoft Academic;
  • February 23, 2017 - first preliminary version of the 2017 update to the joint GII-GRIN-SCIE rating;
  • March 1, 2015 - added references to a collection of comments to this proposal sent to the Joint Committee by GII and GRIN members, and to the response to these comments prepared by the Committee;
  • January 24, 2015 - new version of the rating (Jan-24), changed to address comments made in the joint meeting of November 7, 2014; more specifically:
    • to make tier 3 (B, B- conferences) larger, the thresholds for H-like indexes in MAS and SHINE were increased;
    • based on comments from colleagues, a few of the three-classes-to-one translation rules have been fixed to remove inconsistencies wrt the translation algorithm described below;
    • based on comments from colleagues, a few entity-resolution errors have been fixed (by merging records with different names/acronyms that represent the same event).
  • October 30, 2014 - first version of the rating (Oct-16)

The Rating Algorithm

The Sources

The algorithm uses three rankings/ratings of computer science conferences:

  • The CORE 2021 Conference Rating - Australian researchers have long-standing experience in ranking publication venues; CORE (the Computing Research and Education Association of Australasia) has been developing its own rating of computer-science conferences since 2008, first on its own and then as part of the ERA research evaluation initiative. Although the ERA rating effort has been discontinued, CORE has kept its initiative alive and now updates the conference rating regularly. The rating inherits from previous versions, and uses a mix of bibliometric indexes and peer review by a committee of experts to classify conferences into the A* (top tier), A, B, and C tiers (plus a number of Australasian and local conferences);
  • Microsoft Academic - Microsoft Academic is accessed through the Microsoft Academic Knowledge API and is Microsoft's counterpart to the popular Google Scholar; it inherits from the original Microsoft Academic Search and provides an API to retrieve bibliometric indicators about computer-science conferences and papers;
  • LiveSHINE - LiveSHINE is the successor to SHINE, the Google-Scholar-based conference ranking. The original SHINE (Simple H-Index Estimator) was a collection of software tools conceived to calculate the H-index of computer-science conferences based on Google Scholar data. LiveSHINE is based on a plug-in for the Google Chrome browser: the plug-in allows administrators to browse the LiveSHINE conference database and, at the same time, triggers queries to Google Scholar to progressively update citation counts. Unfortunately, LiveSHINE was discontinued at the end of 2018, when Google Scholar blocked bulk access to its data about conferences and citations.

We adopt these as the base data sources for our algorithm. The three represent a good mix of bibliometric and non-bibliometric approaches, and are backed by prominent international organizations and by large, authoritative data sources.


The Rating Algorithm

Our unified rating brings together the three sources listed above, unifying them according to an automatic algorithm initially adopted by GII and GRIN for their 2015 rating (see here for the original description of the algorithm). We summarize the algorithm below.

We refer to the following set of classes, in decreasing order: A++, A+, A, A-, B, B-, C. Our purpose is to classify conferences within four main tiers, as follows:

Tier | Class            | Description
1    | A++, A+          | top notch conferences
2    | A, A-            | very high-quality events
3    | B, B-            | events of good quality
-    | Work in progress | work in progress

Loading the Sources

Data were downloaded from the sources on June 1, 2021. The collected data were used as follows:

The CORE 2021 Conference Rating was downloaded as-is, selecting "CORE 2021" as the only source of ratings (i.e., we discarded all previous CORE ratings).

While the CORE Conference Rating comes as a set of classified venues, LiveSHINE and Microsoft Academic simply report a number of citation-based bibliometric indicators about conferences, most notably each conference's H-index.

H-indexes are usually considered robust indicators. However, they suffer from a size bias: conferences that publish a very large number of papers can obtain high H-indexes regardless of the actual quality of those papers, while the opposite may happen for small conferences that publish fewer papers.

To reduce these distortions, using data available in LiveSHINE and Microsoft Academic, we computed a secondary indicator, called "average citations", obtained by dividing the total number of citations received by papers of the conference by the total number of published papers. This is, in essence, a lifetime impact factor (IF) for the conference. IF-like indicators are based on an average, and are therefore sensitive to outliers; this suggests that they should not be used as primary ranking indicators. However, they can help correct distortions due to the size-dependence of the H-index.
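
To make the two indicators concrete, here is a minimal sketch in Python (ours, with hypothetical citation counts; not the project's actual code) of how they can be computed for a single conference:

    def h_index(citations):
        """Largest h such that at least h papers have >= h citations each."""
        h = 0
        for i, c in enumerate(sorted(citations, reverse=True), start=1):
            if c < i:
                break
            h = i
        return h

    def average_citations(citations):
        """The IF-like indicator: total citations divided by papers published."""
        return sum(citations) / len(citations) if citations else 0.0

    # A small venue with few, highly cited papers scores low on the H-index
    # but high on the IF-like indicator, which is why the two are combined.
    papers = [120, 95, 40, 3]          # hypothetical per-paper citation counts
    print(h_index(papers))             # 3
    print(average_citations(papers))   # 64.5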

To do this, we assigned a class to each Microsoft Academic and LiveSHINE conference using the following algorithm (a code sketch follows the list). In the following, we refer to the conference's average citations as the "IF-like indicator".

  • to start, each Microsoft Academic/LiveSHINE conference receives two different class values:
    • a class wrt the H-index, as follows: conferences are sorted in decreasing order of H-index value; then, classes are assigned based on ranks:

      Ranks             | Class
      1 to 50           | A++
      51 to 75          | A+
      76 to 200         | A
      201 to 250        | A-
      251 to 575        | B
      576 to 650        | B-
      rest of the items | C
    • a class wrt the IF-like indicator, with the following thresholds:

      Value             | Class
      25 or more        | A++
      23 to 25 (excl.)  | A+
      18 to 23 (excl.)  | A
      16 to 18 (excl.)  | A-
      12 to 16 (excl.)  | B
      10 to 12 (excl.)  | B-
      7 to 10 (excl.)   | C
      rest of the items | D
  • at the end of this process, each conference in Microsoft Academic/LiveSHINE has two classes; we need to assign a final class to them. To do this, we consider as the primary class the one based on the H-index, and use the second one, based on the IF-like indicator, to correct the first, according to the following rules:

    Primary Class | Secondary Class   | Final Class
    A++           | B, B-, C, or D    | A+
    A+            | B-, C, or D       | A
    A             | C or D            | A-
    A-            | D                 | B
    A             | A++               | A+
    A-            | A++, A+, or A     | A
    B             | A++, A+, or A     | A-
    B-            | A++, A+, A, or A- | B
    C             | A++, A+, A, or A- | B-
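
The sketch below (ours, in Python; function names are hypothetical) implements the two classification steps and the correction rules exactly as listed above:

    def class_from_h_rank(rank):
        """Class based on the conference's position in the H-index ranking."""
        for upper_rank, cls in [(50, "A++"), (75, "A+"), (200, "A"),
                                (250, "A-"), (575, "B"), (650, "B-")]:
            if rank <= upper_rank:
                return cls
        return "C"

    def class_from_if(avg_citations):
        """Class based on the IF-like indicator (average citations)."""
        for threshold, cls in [(25, "A++"), (23, "A+"), (18, "A"), (16, "A-"),
                               (12, "B"), (10, "B-"), (7, "C")]:
            if avg_citations >= threshold:
                return cls
        return "D"

    # Correction rules from the table above: (primary, secondary) -> final.
    CORRECTIONS = {}
    for sec in ("B", "B-", "C", "D"):
        CORRECTIONS[("A++", sec)] = "A+"
    for sec in ("B-", "C", "D"):
        CORRECTIONS[("A+", sec)] = "A"
    for sec in ("C", "D"):
        CORRECTIONS[("A", sec)] = "A-"
    CORRECTIONS[("A-", "D")] = "B"
    CORRECTIONS[("A", "A++")] = "A+"
    for sec in ("A++", "A+", "A"):
        CORRECTIONS[("A-", sec)] = "A"
        CORRECTIONS[("B", sec)] = "A-"
    for sec in ("A++", "A+", "A", "A-"):
        CORRECTIONS[("B-", sec)] = "B"
        CORRECTIONS[("C", sec)] = "B-"

    def source_class(h_rank, avg_citations):
        """Primary class from the H-index rank, corrected by the IF-like class."""
        primary = class_from_h_rank(h_rank)
        secondary = class_from_if(avg_citations)
        return CORRECTIONS.get((primary, secondary), primary)

    # E.g., rank 30 (A++ by H-index) with 11 average citations (B-) is
    # corrected down to A+:
    print(source_class(30, 11))   # A+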

Integration

Classified venues in the three base data sources were integrated in order to bring together all of the available classes for each conference. After this step, each conference in the integrated rating has between one and three classifications, depending on the number of sources in which it appears. When the same conference was ranked multiple times by a single source, the highest rating was taken.

Based on the collected ratings, a final class was assigned to each conference. The rules we used follow the principles described below, and a code sketch of the procedure follows the list (in the following, we assign integer scores to the classes as follows: A++=7, A+=6, A=5, A-=4, B=3, B-=2, C=1):

  • for conferences with three ratings: (a) a majority rule is followed: when at least two sources assign at least class X, then: if the third assigns X-1, the final class is X; if the third assigns X-2 or X-3, the final class is X-1; if the third assigns X-4 or lower, the final class is X-2 (for example, with ratings A++, A++, A we have X = A++, the third source assigns X-2, and the final class is A+); (b) then, if necessary, this assignment is corrected using a numerical rule: we assign an integer score to each conference by giving scores to classes and then taking the sum; the numerical rule states that a conference with a higher numerical score cannot have a class lower than that of a conference with a lower score;
  • for conferences with only two ratings: these conferences cannot be ranked higher than A; to assign the score, we add a third "virtual" class, equal to the minimum one among those available; then, we follow the rules above;
  • for conferences with only one rating: these conferences are all considered not classifiable based on the data at our disposal.
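
To illustrate, here is a minimal sketch (ours, in Python) of the majority rule and the virtual-class rule. It deliberately omits the numerical-score correction of step (b) and does not handle NC (not classified) entries, so the authoritative outcome remains the class-assignment table below:

    SCORES = {"A++": 7, "A+": 6, "A": 5, "A-": 4, "B": 3, "B-": 2, "C": 1}
    CLASSES = {score: cls for cls, score in SCORES.items()}

    def majority_class(ratings):
        """Tentative final class for 2 or 3 per-source classes (step (a) only)."""
        scores = sorted([SCORES[r] for r in ratings], reverse=True)
        cap = SCORES["A++"]
        if len(scores) == 2:
            # two ratings: add a virtual third class equal to the minimum,
            # and cap the final class at A
            scores.append(scores[-1])
            cap = SCORES["A"]
        x, third = scores[1], scores[2]   # at least two sources assign >= x
        if third >= x - 1:
            final = x
        elif third >= x - 3:
            final = x - 1
        else:
            final = x - 2
        return CLASSES[min(final, cap)]

    print(majority_class(["A++", "A++", "A"]))   # A+
    print(majority_class(["A++", "A++"]))        # A (capped at A)

Cases such as A+, B, C, which step (a) alone would place in B-, are then lifted to B by the numerical-score correction, as the table below reports.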

This gives rise to the class-assignment rules reported below.

A Note on Publication Styles

In the last few years, several computer-science conferences have started publishing their proceedings as special issues of well-known journals; e.g., the ACM SIGGRAPH conference now publishes its proceedings as special issues of the ACM Transactions on Graphics (ACM TOG). Being based on bibliometric indicators, our algorithm cannot accommodate these cases: sources such as LiveSHINE are unable to estimate the H-index of such a conference, since it does not represent a publication venue per se. Note that it would be possible to calculate the H-index of the hosting journal, but this is quite different from that of the conference itself, since the journal also publishes research papers that are not related to the conference. Therefore, we have excluded these events from the rating. Please note that ratings assigned in previous versions of the rating are still available on this site (see the menu above). In addition, papers published in these conferences can be evaluated anyway, since they receive the bibliometric indicators of the respective journal.

A Note on Source Coverage

In 2017 we introduced a rule to handle the reduction in coverage of two of our base sources (LiveSHINE and Microsoft Academic) wrt 2015 (see the 2017 conference rating description page for details). In 2018, Microsoft Academic's coverage returned to that of Microsoft Academic Search in 2015 (2,000 events). As a consequence, we applied the coverage rule only to LiveSHINE; more specifically, we added back the same 2015 SHINE ratings as in 2017.

Class-Assignment Rules

Original Classes | Final Class
A++, A++, A++    | A++
A++, A++         | A
A++, A++, A+     | A++
A++, A++, A      | A+
A++, A+, A+      | A+
A++, A+          | A
A++, A++, A-     | A+
A++, A+, A       | A+
A++, B, B        | A-
A++, C, C        | B
A++, B, B-       | A-
A++, A-, A-      | A
A++, A++, B      | A
A++, A+, A-      | A
A++, A, A        | A
A++, A           | A
A++, A++, B-     | A
A++, A+, B       | A
A++, A, A-       | A
A+, B, C         | B
A+, B, B         | A-
A+, B            | A-
A+, A+, A+       | A+
A+, A+           | A
A+, A+, A        | A+
A+, A+, A-       | A
A+, A, A         | A
A+, A            | A
A++, A++, C      | A
A+, A+, B        | A
A+, A, A-        | A
A, A, A          | A
A, A             | A
A++, A+, B-      | A
A++, A, B        | A
A++, A-          | A
A, A, A-         | A
A++, A+, C       | A-
A++, A, B-       | A-
A++, A-, B       | A-
A+, A+, B-       | A-
A+, A, B         | A-
A+, A-, A-       | A-
A+, A-           | A-
A+, A+, C        | A-
A+, A, B-        | A-
A+, A-, B        | A-
A, A, B          | A-
A, A-, A-        | A-
A, A-            | A-
A++, A, C        | A-
A++, A-, B-      | A-
A++, B           | A-
A, A, B-         | A-
A, A-, B         | A-
A-, A-, A-       | A-
A-, A-           | A-
A++, A-, C       | A-
A+, A, C         | A-
A+, A-, B-       | A-
A-, A-, B        | A-
A+, A-, C        | B
A+, B, B-        | B
A, A, C          | B
A, A-, B-        | B
A, B, B          | B
A, B             | B
A++, B, C        | B
A++, B-, B-      | B
A++, B-          | B
A, A-, C         | B
A, B, B-         | B
A-, A-, B-       | B
A-, B, B         | B
A-, B            | B
A++, B-, C       | B
A+, B-, B-       | B
A+, B-           | B
A-, A-, C        | B
A-, B, B-        | B
B, B, B          | B
B, B             | B
A+, B-, C        | B
A, B, C          | B
A, B-, B-        | B
A, B-            | B
A++, C           | B
B, B, B-         | B
A, B-, C         | B-
A-, B, C         | B-
A-, B-, B-       | B-
A-, B-           | B-
A+, C, C         | B-
A+, C            | B-
A-, B-, C        | B-
B, B, C          | B-
B, B-, B-        | B-
B, B-            | B-
A, C, C          | B-
A, C             | B-
B, B-, C         | B-
B-, B-, B-       | B-
B-, B-           | B-
A-, C, C         | B-
A-, C            | B-
B-, B-, C        | B-
B, C, C          | Work in Progress
B, C             | Work in Progress
B-, C, C         | Work in Progress
B-, C            | Work in Progress
B, NC            | Work in Progress
C, C, C          | Work in Progress
C, C, NC         | Work in Progress
B, C, NC         | Work in Progress
C, NC            | Work in Progress
C, C             | Work in Progress
A                | Work in Progress
A+               | Work in Progress
A++              | Work in Progress
A-               | Work in Progress
B                | Work in Progress
B-               | Work in Progress
C                | Work in Progress
NC               | Work in Progress