Around the world, whenever university leaders meet, the shadow of rankings looms. Rankings are important because, as the Ministry of Education points out, people think they are important. Everyone laments the influence of rankings, but no one can opt out – because ranking companies will assess a university whether it likes it or not. And no university wants to opt out anyway, in case doing so affects its ability to recruit students or to win research contracts.

In New Zealand, pirouetting on the rankings pinhead has become a ritual for leaders of our university system. Whenever a university drops in a ranking, we are told that the rankings are flawed, that our system is actually of high quality and that, if only they were better funded, our universities would be ranked higher. But when a university goes up in the rankings, we are told this shows what a high-quality system we have and what fine staff the university has, despite the government’s parsimony.

So why do these rankings generate so much attention?  Are they really as unreliable as their detractors allege?  Is there anything that we can learn from the rankings?

Rankings are big business …

Rankings started in 2003 when the Institute of Higher Education of Shanghai Jiao Tong University developed a system designed to assess the research standing of Chinese universities by rating them against the world’s best research universities.

A year later, the British weekly Times Higher Education (THE) engaged a firm called Quacquarelli Symonds (QS) to construct a ranking system that put proxy measures of universities’ teaching and internationalisation alongside research performance indicators. When the relationship between THE and QS broke down in 2009, the two went their separate ways, each producing its own competing (but similar) ranking.

Then in 2015, the US News and World Report – which had for years ranked US universities to help prospective American students decide where to study – extended its reach by publishing a global universities ranking.

Shanghai. THE. QS. US News. These are the most widely read systems. All rely on a basket of indicators and weight those indicators to create a composite score. That is one of the main criticisms of these ranking systems – change the weightings and you change the overall score. The weightings are subjective – they are based on the opinions of experts. Choose different experts and the weightings (and therefore the rankings) would change.
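To make that point concrete, here is a minimal sketch in Python using invented universities, indicators, scores and weights – none of it drawn from any real ranking – showing how the same underlying data can produce a different ordering once the weightings change:

    # Hypothetical data: how the choice of weightings changes a composite ranking.
    indicators = {
        "Univ A": {"research": 90, "teaching": 60, "international": 70},
        "Univ B": {"research": 70, "teaching": 85, "international": 80},
    }

    def composite(scores, weights):
        # Weighted sum of indicator scores; the weights sum to 1.
        return sum(scores[name] * w for name, w in weights.items())

    research_heavy = {"research": 0.6, "teaching": 0.3, "international": 0.1}
    teaching_heavy = {"research": 0.3, "teaching": 0.5, "international": 0.2}

    for label, weights in [("research-heavy", research_heavy),
                           ("teaching-heavy", teaching_heavy)]:
        order = sorted(indicators, key=lambda u: composite(indicators[u], weights),
                       reverse=True)
        print(label, order)
    # research-heavy weights put Univ A first; teaching-heavy weights put Univ B first.

Nothing about the universities changes between the two runs; only the weights do – which is exactly the criticism.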

The CWTS Leiden ranking, run by the Universiteit Leiden in the Netherlands, solves this problem by not aggregating its measures to create an overall ranking.  Rather, there is a ranking for each indicator.  As a consequence, the media ignore the Leiden ranking, despite its integrity and robustness.

THE, QS and US News also rely on survey data for some of their measures – surveys of academics or employers who are asked to rate 1,000-plus universities so that the rankers can construct measures of reputation. Those surveys are highly problematic. They are based on perception, not performance. They are often based on out-of-date experience. And they tend to reinforce the status quo – MIT, Harvard, Oxford and Cambridge are brands we have all heard of; Waikato, Lincoln and AUT struggle for visibility in a cluttered world university scene.

The Shanghai and Leiden rankings are different. Both are narrowly focused on research. The Shanghai ranking seeks to identify the universities that represent the largest clusters of research critical mass. No surveys. None of the flaky proxy measures of teaching quality that THE and QS rely on. Crucially, it looks at the volume of research produced as well as its quality, so it has a bias towards larger universities. Only one of its six indicators – per capita performance – adjusts for the size of the institution.

Leiden calculates a range of research performance measures – for instance, citation scores and the proportion of papers that fall within the most-cited one, ten and 50 per cent of publications in the field. It also calculates a set of collaboration measures – for instance, the proportion of papers that involve collaboration with industry-based researchers.

What is the value of the rankings?

Despite the flaws, rankings data provide one way of understanding how our university system measures up.

For us, the obvious point of reference is the Australian university system which has much in common with ours.  And, like us, Australia is a long way from the centres of influence in the northern hemisphere.

Table 1 sets out the highest ranked university in each country in the latest edition of each of the four biggest rankings and the proportion of the universities in each country that are:

  • in the top 100 universities
  • in the top 500
  • in the top 800.

Table 1: Performance in the most recent releases in selected university rankings systems – Australian and New Zealand university systems – highest ranked university and the proportion of each country’s universities that are ranked

Source: Ranking system websites. Note that NZ has 8 universities and Australia 43.

What is obvious is that New Zealand doesn’t have an elite, leading university like those in Australia’s Group of Eight (Go8). All but one of the Go8 universities appear in the top 100 in at least one ranking. Six of the Go8 are ranked in the top 100 in all four of those rankings.

On the other hand, New Zealand has all of its universities in the top 500 of QS and the top 800 of THE, and seven of the eight in each of Shanghai and US News. This confirms what we already knew – we have a homogeneous system, without any elite institutions but where each university meets a reasonable standard. Australia has a more stratified university system – a small group of top-100 universities but also a number that don’t meet the ranking criteria at all.

The UK system is even more stratified – Table 2 shows the same data for the QS and THE rankings, but with the UK also included.

Table 2: Performance in the most recent releases in the QS and THE rankings – Australian, UK and New Zealand university systems – highest ranked university and the proportion of each country’s universities that are ranked

Source: Ranking system websites. Note that the UK has 148 universities, NZ has 8 and Australia 43.

The UK has a lower proportion of its universities in the top 800 than either Australia or New Zealand but, like Australia and unlike NZ, it has a set of elite universities. And unlike Australia’s, the top UK universities are truly elite: top 10.

Some people – especially those working for rankings companies – argue that having at least one elite, top-100 university in each of the major rankings benefits the country’s whole system. If that argument is right, then New Zealand is a long way off. But I doubt that AUT would agree it would be better off if only its neighbour, the University of Auckland, were ranked higher (subtext: funded at a higher rate) – any more than Northern Kentucky University gains extra prestige, bigger research grants and extra international students from being in the same country as Harvard and MIT.

Research capability and research performance – the Shanghai rankings

The Shanghai ranking uses only independent, quantitative and verifiable data and it is narrowly focused on research capability.  Digging into the Shanghai data reinforces the point about the homogeneity of the NZ university system.

Shanghai rates the University of Auckland at 239th, top among the New Zealand universities, more than 100 places ahead of the next New Zealand university (Otago, 348th).  This means that, on the Shanghai criteria, Auckland has the largest cluster of excellent researchers in the New Zealand university system.  That is no particular surprise: the University of Auckland has more than twice the research revenue of the next highest university; in the most recent Performance-Based Research Fund (PBRF) assessment, its researchers represented around a quarter of all the academics in the tertiary system whose research reached a standard to earn PBRF funding and a third of those assigned the highest “A” score.

But having the largest cluster of researchers isn’t the same as being the top performer.

As one part of its ranking, Shanghai calculates a per capita performance (PCP) score, which scales the scores on the other indicators to the size of the institution. On that measure, Lincoln has the highest score of the New Zealand universities (24.9, 164th in the world), against Auckland’s 21.1 (292nd in the world). In fact, Canterbury (230th) and Otago (285th) also outpoint Auckland on per capita performance.
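The per capita idea is simple arithmetic. The sketch below uses invented figures – not the real Shanghai scores or staff numbers – to show how a small institution can outpoint a much larger one once raw indicator scores are scaled by staff numbers:

    # Illustrative only: invented raw scores and staff numbers.
    universities = {
        "Large U": {"raw_score": 40.0, "fte_staff": 2000},
        "Small U": {"raw_score": 8.0, "fte_staff": 300},
    }

    for name, u in universities.items():
        per_capita = u["raw_score"] / u["fte_staff"] * 1000  # score per 1,000 staff
        print(f"{name}: raw score {u['raw_score']:.1f}, per capita {per_capita:.1f}")
    # Large U has five times the raw score, but Small U scores higher per capita.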

While Auckland’s research income is nine times Lincoln’s, Lincoln gains a higher proportion of its income from research than any other university in New Zealand – 33 per cent, as opposed to 31 per cent at Auckland and a university sector average of 22 per cent.  When we look at research income from external research contracts, Lincoln again outperforms Auckland – 20 per cent of all Lincoln’s revenue is from research contracts, compared with 15 per cent at Auckland.  So Lincoln’s high PCP score isn’t a real surprise.

Research performance – the Leiden rankings

Leiden calculates its indicators only for universities with a large research output. This means it ranks only five New Zealand universities, omitting Waikato, AUT and Lincoln.

One of the most important research quality indicators Leiden calculates is the “mean normalised citation score” or MNCS – the average number of times a university’s papers have been cited, compared with the world average for publications in the same field. Because it is a relative measure, MNCS controls for differences in citation conventions between fields of research and for different time periods. This makes it a robust measure of the academic impact of research.
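To illustrate the mechanics, here is a rough sketch of how such a normalised score is built up. The papers and field averages are invented, and this is a simplification of Leiden’s actual method, which in practice also normalises by publication year and document type:

    # Each paper's citations are divided by the world average for its field,
    # then the ratios are averaged. All figures are invented.
    papers = [
        {"citations": 12, "field_average": 8.0},   # above its field average
        {"citations": 3, "field_average": 6.0},    # below its field average
        {"citations": 10, "field_average": 10.0},  # exactly at its field average
    ]

    mncs = sum(p["citations"] / p["field_average"] for p in papers) / len(papers)
    print(f"MNCS = {mncs:.2f}")  # 1.0 means performance in line with the world average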

The graph below gives the MNCS for two time periods for the five NZ universities ranked by Leiden and, as a point of reference, the University of Adelaide, the Australian Go8 university with the most modest placings across most of the ranking systems.

Figure 1:  Mean normalised citation score (MNCS), 2006-09 and 2012-15, selected Australasian universities

Source: CWTS Leiden rankings, June 2017.
Note: An MNCS score of 1.0 is the average for the 903 universities whose data are analysed by CWTS Leiden. A score greater than 1 represents performance above that average, while a score of less than 1 is below the world average.

Auckland, the best of the New Zealand universities on this measure, ranked 259th of the 903 universities in the analysis for the period 2002-2015 and 14th of the 30 in Australasia. Otago, ranked 289th in the world and 17th in Australasia, is also above the world average. The lowest-scoring New Zealand university, Canterbury, comes in at 507th in the world.

Of the five New Zealand universities in the study, three are a bit above the median and two are slightly below.  All 25 of the Australian universities analysed by Leiden are in the top half.

Of course, the results differ by field of research. For instance, in 2012-2015, Victoria rated highest among the NZ universities in Social sciences and humanities and in Mathematics and computer science; Otago was first in Physical sciences and in Life and earth sciences; and Auckland ranked highest in Biomedical and health sciences.

This analysis confirms that, on this (robust) measure of research performance, there is no New Zealand university among the elite but that the system performs reasonably well.

This is not to challenge the University of Auckland’s pre-eminence in research among the New Zealand universities – rather, the point is that we don’t have a highly stratified university system.  Despite the differences in focus, character and size of universities, the data shows that they all perform at a reasonable level.  We may not have an elite university, but the homogeneity of the system is an asset, one that is worth retaining.


Roger Smyth has 30 years’ experience working in tertiary education – initially in senior management in a university and later in the Ministry of Education. At the Ministry, he managed the Tertiary Sector Performance Analysis team and then took over as Group Manager, Tertiary Education Policy. He retired from the Ministry in April last year and now works as an independent adviser on and contractor in tertiary education.

