The GII-GRIN Conference Rating 2015


Download The GII-GRIN Conference Rating 2015 (Excel .xlsx file) - last updated: January 24, 2015

Additional materials

Goals

Conference papers are important to computer scientists. Research evaluation is important to universities and policy makers. This initiative is sponsored by GII (Group of Italian Professors of Computer Engineering) and GRIN (Group of Italian Professors of Computer Science). Its goal is to develop a unified rating of computer science conferences. The process is organized in two stages.

  • Stage 1: a joint committee of GII and GRIN members (see below) was asked to put together a rating algorithm based on well-known, existing international classifications.
  • Stage 2: the rating generated by the algorithm will be submitted to the two communities (GII and GRIN), to be revised and corrected based on their feedback.

This site reports the result of Stage 1 of the process.

The Stage 1 GII-GRIN Joint Committee

  • Rita Cucchiara (GII)
  • Gerardo Canfora (GII)
  • Giansalvatore Mecca (GII)
  • Stefano Paraboschi (GII)
  • Vincenzo Piuri (GII)
  • Pierangela Samarati (GRIN)
  • Carlo Blundo (GRIN)
  • Luca Chittaro (GRIN)
  • Alessandro Mei (GRIN)
  • Davide Sangiorgi (GRIN)

Disclaimer

We realize that using bibliometric indicators may introduce distortions in the evaluation of scientific papers. We also know that the source rankings may have flaws and contain errors. It is therefore unavoidable that the unified rating we publish in turn contains errors or omissions. Our goal was to limit these errors to the minimum, by leveraging all of the indicators available at the sources and by combining them in such a way as to reduce distortions. We expect that in the majority of cases our algorithm classifies conferences in a way that reflects quite closely the standing of each conference within the international scientific community. There may be cases in which this is not true, and these may be handled in Stage 2 using public feedback from the community.


Changelog

  • March 1st, 2015 - added references to a collection of comments on this proposal sent to the Joint Committee by GII and GRIN members, and to the response to these comments prepared by the Committee;
  • January 24, 2015 - new version of the rating (Jan-24), changed to address comments made in the joint meeting of November 7, 2014; more specifically:
    • to make tier 3 (B, B- conferences) larger, the thresholds for H-like indexes in MAS and SHINE were increased;
    • based on comments from colleagues, a few of the three-classes-to-one translation rules have been fixed to remove inconsistencies wrt the translation algorithm described below;
    • based on comments from colleagues, a few entity-resolution errors have been fixed (by merging records with different names/acronyms that represent the same event).
  • October 30, 2014 - first version of the rating (Oct-16)

The Rating Algorithm

The Sources

Recently three rankings/ratings of computer science conferences have emerged:

  • The CORE 2013 Conference Rating - Australian researchers have long-standing experience in ranking publication venues; CORE (the Computing Research and Education Association of Australasia) has been developing its own rating of computer science conferences since 2008, first on its own and then as part of the ERA research evaluation initiative. Although the ERA rating effort has been discontinued, CORE has decided to keep its initiative, and is now giving access to early versions of the 2013 conference rating; the rating inherits from previous versions, and uses a mix of bibliometric indexes and peer review by a committee of experts to classify conferences into the A+ (top tier), A, B, C tiers (plus a number of Australasian and local conferences);
  • The Microsoft Academic Search Conference Ranking - Microsoft Academic Search (MAS) is the Microsoft counterpart to the popular Google Scholar; it publishes a ranking of computer science conferences that is automatically generated from its database of publications; the ranking is based on the field rating of a conference, which is essentially the H-Index of the conference within its field;
  • The SHINE Google-Scholar-Based Conference Ranking - SHINE (Simple H-Index Estimator) is a collection of software tools conceived to calculate the H-Index of computer science conferences, based on Google Scholar data. SHINE ranks conferences based on their H-Index.

In this project, we shall primarily concentrate on the three sources listed above. These represent a good mix of bibliometric and non-bibliometric approaches, and are backed by prominent international organizations and large and authoritative data sources.

Our unified rating brings together the classifications provided by these three sources.

We refer to the following set of classes, in decreasing order: A++, A+, A, A-, B, B-, C. Our purpose is to classify conferences within four main tiers, as follows:

Tier   Class     Description
1      A++, A+   top-notch conferences
2      A, A-     very high-quality events
3      B, B-     events of good quality
-      W         work in progress
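
As a quick reference, the class-to-tier mapping above can be written as a small lookup table (a Python sketch of ours; the constant name is not part of the original materials):

    # Tiers of the unified rating classes, mirroring the table above.
    # Class C is not assigned to any of the three main tiers.
    CLASS_TO_TIER = {"A++": 1, "A+": 1,
                     "A": 2, "A-": 2,
                     "B": 3, "B-": 3,
                     "W": None}  # W = work in progress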

Loading the Sources

Data at the sources were downloaded on September 1st, 2014. The collected data were used as follows:

The CORE 2013 Conference Rating was downloaded as is, selecting "CORE 2013" as the only source (i.e., we discarded the CORE2008 and ERA2010 ratings); in addition, conferences with rank "Australasian" or "L" (local) were removed, for a total of 1700 classified venues; the distribution of tiers is as follows (as of January 15):

CORE Class   Mapped to   Conferences
A+           A++         65
A            A           252
B            B           431
C            C           874
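
The following is a minimal sketch (ours, not the committee's code; the field names are assumptions about the downloaded spreadsheet, not CORE's actual schema) of how the CORE 2013 data could be filtered and mapped as described above:

    # Hypothetical sketch of the CORE 2013 loading step: keep only "CORE 2013"
    # entries, drop Australasian/local venues, and map the CORE classes onto
    # the unified scale (mapping taken from the table above).
    CORE_TO_UNIFIED = {"A+": "A++", "A": "A", "B": "B", "C": "C"}

    def load_core(rows):
        # rows: dictionaries with (assumed) keys "source", "rank", "acronym".
        rating = {}
        for r in rows:
            if r["source"] != "CORE 2013" or r["rank"] in ("Australasian", "L"):
                continue
            if r["rank"] in CORE_TO_UNIFIED:
                rating[r["acronym"]] = CORE_TO_UNIFIED[r["rank"]]
        return rating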

While the CORE Conference Rating comes as a set of classified venues, SHINE and Microsoft Academic Search simply report a number of bibliometric indicators about conferences, and rank them according to the main one of these indicators, which is essentially the H-Index of the conference (MAS calls it the "field rating", but the semantics is essentially the same).

H-Indexes are usually considered robust indicators. However, they suffer from a dimensionality issue: conferences that publish a very large number of papers may have high H-Indexes regardless of the actual quality of those papers. The opposite may happen for small conferences that publish fewer papers.

To reduce these distortions, using data available on SHINE and MAS, we computed a secondary indicator, called "average citations", obtained by dividing the total number of citations received by papers of the conference by the total number of published papers. This is, in essence, a lifetime impact factor (IF) for the conference. IF-like indicators are based on an average, and are therefore sensitive to outliers. This suggests that they should not be used as primary ranking indicators. However, they may help to correct distortions due to the dimensional nature of the H-Index.
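
To make the two indicators concrete, the following is a minimal Python sketch (our own illustration, not the committee's code; the function names are ours) of how an H-like indicator and the average-citations indicator can be computed from a conference's per-paper citation counts:

    # Minimal sketch: the two indicators used below, computed from a list of
    # per-paper citation counts for one conference.
    def h_index(citations):
        # H-like indicator: the largest h such that at least h papers
        # have at least h citations each.
        counts = sorted(citations, reverse=True)
        return sum(1 for i, c in enumerate(counts, start=1) if c >= i)

    def average_citations(citations):
        # IF-like indicator: total citations divided by number of papers.
        return sum(citations) / len(citations) if citations else 0.0

For instance, a conference with citation counts [10, 8, 5, 4, 1] would have an H-like value of 4 and an IF-like value of 5.6.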

To correct for such distortions, we assigned a class to each MAS and SHINE conference using the following algorithm (the treatment of the two sources is the same, and therefore we report it only once). In the following, we refer to the SHINE H-Index and the MAS field rating as "H-like indicators", and to the conference average citations as "IF-like indicators".

  • to start, each MAS/SHINE conference receives two different class values:
    • a class wrt the H-like indicator: conferences are sorted in decreasing order of the H-like indicator, and classes are then assigned based on ranks as follows:
      Ranks               Class
      1 to 50             A++
      51 to 75            A+
      76 to 200           A
      201 to 250          A-
      251 to 575          B
      576 to 650          B-
      rest of the items   C
    • a class wrt the IF-like indicator, with the following thresholds:
      Value               Class
      25 or more          A++
      23 to 25 (excl.)    A+
      18 to 23 (excl.)    A
      16 to 18 (excl.)    A-
      12 to 16 (excl.)    B
      10 to 12 (excl.)    B-
      7 to 10 (excl.)     C
      rest of the items   D
  • at the end of this process, each conference in MAS and each conference in SHINE has two classes, and we need to assign a single final class to it. To do this, we consider the class based on the H-like indicator as the primary class, and use the second one, based on the IF-like indicator, to correct it according to the following rules (a sketch of the whole per-source assignment is given after this list):
    Primary Class   Secondary Class       Final Class
    A++             B, B-, C, or D        A+
    A+              B-, C, or D           A
    A               C or D                A-
    A-              D                     B
    A               A++                   A+
    A-              A++, A+, or A         A
    B               A++, A+, or A         A-
    B-              A++, A+, A, or A-     B
    C               A++, A+, A, or A-     B-
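
The per-source assignment just described can be summarized by the following Python sketch (our reconstruction of the rules above, not the committee's actual code; thresholds and class names mirror the three tables):

    # Sketch of the per-source class assignment (MAS or SHINE), assuming
    # conferences have already been ranked by their H-like indicator.
    H_RANK_BANDS = [(50, "A++"), (75, "A+"), (200, "A"), (250, "A-"),
                    (575, "B"), (650, "B-")]           # (max rank, class)
    IF_THRESHOLDS = [(25, "A++"), (23, "A+"), (18, "A"), (16, "A-"),
                     (12, "B"), (10, "B-"), (7, "C")]  # (min value, class)

    def class_from_rank(rank):
        # Class wrt the H-like indicator, based on the conference's rank.
        for upper, cls in H_RANK_BANDS:
            if rank <= upper:
                return cls
        return "C"  # rest of the items

    def class_from_avg_citations(avg):
        # Class wrt the IF-like indicator (average citations).
        for lower, cls in IF_THRESHOLDS:
            if avg >= lower:
                return cls
        return "D"  # rest of the items

    def final_source_class(primary, secondary):
        # Correct the H-like class (primary) using the IF-like class
        # (secondary), following the correction table above.
        demotions = {"A++": ({"B", "B-", "C", "D"}, "A+"),
                     "A+":  ({"B-", "C", "D"}, "A"),
                     "A":   ({"C", "D"}, "A-"),
                     "A-":  ({"D"}, "B")}
        promotions = {"A":  ({"A++"}, "A+"),
                      "A-": ({"A++", "A+", "A"}, "A"),
                      "B":  ({"A++", "A+", "A"}, "A-"),
                      "B-": ({"A++", "A+", "A", "A-"}, "B"),
                      "C":  ({"A++", "A+", "A", "A-"}, "B-")}
        for table in (demotions, promotions):
            if primary in table and secondary in table[primary][0]:
                return table[primary][1]
        return primary

For example, under these rules a conference ranked 120th with an average of 8 citations per paper would receive primary class A, secondary class C, and final class A-.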

Integration

Classified venues in the three sources were integrated in order to bring together all available classes for each conference. After this step, each conference in the integrated rating had received from one to three classifications, depending on the number of sources in which it appears. When the same conference was ranked multiple times by a single source, the highest rating was taken.

Based on the collected ratings, a final class was assigned to each conference. The rules used to do this follow the principles described below (in the following we assign integer scores to the classes as follows: A++=7, A+=6, A=5, A-=4, B=3, B-=2, C=1):

  • for conferences with three ratings: (a) a majority rule is followed: when at least two sources assign at least class X, then: if the third assigns X or X-1, the final class is X; if the third assigns X-2 or X-3, the final class is X-1; if the third assigns X-4 or lower, the final class is X-2; (b) then, if necessary, this assignment is corrected using a numerical rule: we assign an integer score to each conference by giving scores to classes as above and taking the sum; the numerical rule states that a conference with a higher numerical score cannot receive a class lower than that of a conference with a lower score (a sketch of these rules is given after this list);
  • for conferences with only two ratings: these conferences cannot be ranked higher than A; to assign the score, we add a third "virtual" class, equal to the minimum of the ones available, and then follow the rules above;
  • for conferences with only one rating: these conferences are all considered not classifiable based on the data at our disposal (they are marked W in the table below).
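
A minimal Python sketch of these integration principles follows (again our own reconstruction, not the committee's code; it implements the majority rule for three ratings and the handling of two and one ratings, while the numerical-score correction of principle (b) is only indicated in a comment):

    # Scores assigned to classes, and the classes in increasing order.
    SCORE = {"A++": 7, "A+": 6, "A": 5, "A-": 4, "B": 3, "B-": 2, "C": 1}
    CLASSES = ["C", "B-", "B", "A-", "A", "A+", "A++"]  # CLASSES[s - 1] has score s

    def majority_class(three_ratings):
        # Principle (a): X is the class granted by at least two sources;
        # the third rating can pull the final class down by at most two steps.
        scores = sorted(SCORE[r] for r in three_ratings)  # ascending
        x, third = scores[1], scores[0]
        gap = x - third
        if gap <= 1:
            final = x
        elif gap <= 3:
            final = x - 1
        else:
            final = x - 2
        return CLASSES[final - 1]

    def integrated_class(ratings):
        # One rating only: not classifiable (W in the table below).
        if len(ratings) == 1:
            return "W"
        ratings = list(ratings)
        capped = False
        if len(ratings) == 2:
            # Add a virtual third class equal to the minimum of the two;
            # two-source conferences cannot be ranked higher than A.
            ratings.append(min(ratings, key=SCORE.get))
            capped = True
        cls = majority_class(ratings)
        if capped and SCORE[cls] > SCORE["A"]:
            cls = "A"
        return cls

    # Principle (b) is applied afterwards over the whole set of conferences:
    # a conference whose total score (sum of SCORE values) is higher than that
    # of another conference can never end up in a lower class.

For instance, integrated_class(["A++", "A++", "B"]) returns A and integrated_class(["A+", "A+"]) returns A, matching the table below; a few rows of the table (e.g., "A, B, C" mapped to B) appear to already incorporate the principle (b) correction rather than the majority rule alone.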

This gives rise to the following class-assignment rules:

Original Classes     Final Class
A++, A++, A++        A++
A++, A++, A+         A++
A++, A++, A          A+
A++, A+, A+          A+
A++, A+, A           A+
A+, A+               A
A+, A+, A            A+
A++, A++, B          A
A++, A+, A-          A
A++, A, A            A
A++, A+, B           A
A++, A, A-           A
A+, A, A             A
A+, A                A
A+, A+, B            A
A+, A, A-            A
A, A, A              A
A, A                 A
A++, A, B            A
A++, A-              A
A, A, A-             A
A++, A+, C           A-
A++, A, B-           A-
A++, A-, B           A-
A+, A, B             A-
A+, A-               A-
A+, A+, C            A-
A+, A, B-            A-
A+, A-, B            A-
A, A, B              A-
A, A-, A-            A-
A, A-                A-
A++, A, C            A-
A++, B               A-
A, A-, B             A-
A-, A-               A-
A++, A-, C           A-
A+, A, C             A-
A-, A-, B            A-
A+, A-, C            B
A, A, C              B
A, A-, B-            B
A, B, B              B
A, B                 B
A++, B, C            B
A, A-, C             B
A, B, B-             B
A-, B, B             B
A-, B                B
A-, A-, C            B
A-, B, B-            B
B, B, B              B
B, B                 B
A, B, C              B
A, B-, B-            B
A, B-                B
B, B, B-             B
A, B-, C             B-
A-, B, C             B-
A-, B-               B-
A+, C                B-
B, B, C              B-
B, B-, B-            B-
B, B-                B-
A, C, C              B-
A, C                 B-
B, B-, C             B-
B-, B-               B-
A-, B-, C            B-
A-, C, C             B-
A-, C                B-
B-, B-, C            B-
B, C, C              W
B, C                 W
B-, C, C             W
B-, C                W
C, C, C              W
C, C                 W
A                    W
A+                   W
A++                  W
A-                   W
B                    W
B-                   W
C                    W

Other Ratings

In addition to the ones listed above, several other ratings of computer science conferences have been developed in the last few years. Some of these have been put together by individual researchers. Others are larger in scope, but were discontinued at some point and are now quite outdated: