Around the world, whenever university leaders meet, the shadow of rankings looms. Rankings are important because, as the Ministry of Education points out, people think they are important. Everyone laments the influence of rankings, but no one can opt out – because ranking companies will assess a university whether it likes it or not. And no university wants to opt out anyway, in case doing so affects its ability to recruit students or win research contracts.
In New Zealand, pirouetting on the rankings pinhead has become a ritual for leaders of our university system. Whenever a university drops in a ranking, we are told that the rankings are flawed, that our system is actually of high quality, and that if only our universities were funded better, they would be ranked higher. But when a university goes up in the rankings, this shows what a high-quality system we have and what fine staff the university has, despite the government's parsimony.
So why do these rankings generate so much attention? Are they really as unreliable as their detractors allege? Is there anything that we can learn from the rankings?
Rankings are big business
Rankings started in 2003 when the Institute of Higher Education of Shanghai Jiao Tong University developed a system designed to assess the research standing of Chinese universities by rating them against the world’s best research universities.
A year later, the British weekly Times Higher Education (THE) engaged a firm called Quacquarelli Symonds (QS) to construct a ranking system that put proxy measures of universities' teaching and internationalisation alongside research performance indicators. When the relationship between THE and QS broke down in 2009, they went their separate ways, each producing their own competing (but similar) rankings.
Then in 2015, US News & World Report – which had for years ranked US universities to help prospective American students decide where to study – extended its reach by publishing a global universities ranking.
Shanghai, THE, QS, and US News are the most widely read systems. All rely on a basket of indicators and weight those indicators to create a composite score. That’s one of the main criticisms of these rankings systems – change the weightings and you change the overall score. Those weightings are subjective – they are based on the opinions of experts. Choose another expert and the weightings (and therefore, the rankings) would change.
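The sensitivity to weightings is easy to demonstrate. The sketch below uses invented universities, indicator scores and weighting schemes (none drawn from any actual ranking) to show how the same underlying data can produce opposite orderings once the weights change.

```python
# Toy illustration: identical indicator scores, two different weighting
# schemes, two different "rankings". All numbers are invented.

scores = {
    "University A": {"research": 90, "teaching": 60, "international": 70},
    "University B": {"research": 70, "teaching": 85, "international": 80},
}

def composite(indicators, weights):
    """Weighted sum of indicator scores; weights sum to 1."""
    return sum(indicators[name] * w for name, w in weights.items())

# Two equally defensible-looking expert opinions about what matters most.
research_heavy = {"research": 0.6, "teaching": 0.2, "international": 0.2}
teaching_heavy = {"research": 0.2, "teaching": 0.6, "international": 0.2}

for label, weights in [("research-heavy", research_heavy),
                       ("teaching-heavy", teaching_heavy)]:
    ranked = sorted(scores, key=lambda u: composite(scores[u], weights),
                    reverse=True)
    print(label, ranked)
```

Under the research-heavy weights, University A scores 80 to University B's 75 and tops the table; under the teaching-heavy weights, B scores 81 to A's 68 and the order flips. Nothing about either university changed – only the expert's opinion of what to weight.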
The CWTS Leiden ranking, run by Universiteit Leiden in the Netherlands, solves this problem by not aggregating its measures into an overall ranking. Instead, there is a separate ranking for each indicator. As a consequence, the media ignore the Leiden ranking, despite its integrity and robustness. THE, QS and US News also rely on survey data for some of their measures: surveys in which academics or employers are asked to rate 1,000-plus universities so that the rankers can construct measures of reputation. Those surveys are highly problematic. They are based on perception, not performance; they often rest on out-of-date experience; and they tend to reinforce the status quo. MIT, Harvard, Oxford and Cambridge are brands we have all heard of; Waikato, Lincoln and AUT struggle for visibility in a cluttered world-university scene.
The Shanghai and Leiden rankings are different. Both are narrowly focused on research. The Shanghai ranking seeks to identify the universities with the largest clusters of research critical mass. No surveys. None of the flaky proxy measures of teaching quality that THE and QS rely on. Crucially, though, it rewards the volume of research produced as well as its quality, so it has a bias towards larger universities: only one of its six indicators adjusts for the size of the institution.
Leiden calculates a range of research performance measures – for instance, citation scores and the proportion of papers published in the top 1, 10 and 50 percent of journals in the field – and a set of collaboration measures, such as the proportion of papers that involve collaboration with industry-based researchers.
What is the value of the rankings?
Despite the flaws, rankings data provide one way of understanding how our university system measures up. For us, the obvious point of reference is the Australian university system, which has much in common with ours. And, like us, Australia is a long way from the centres of influence in the northern hemisphere.
What is obvious is that New Zealand doesn't have an elite, leading university like those in Australia's Group of Eight (Go8). All but one of the Go8 universities appears in the top 100 in at least one ranking, and six of the eight are ranked in the top 100 in all four of those rankings.
On the other hand, New Zealand has all of its universities in the top 500 of QS and the top 800 in THE, and seven of the eight in each of Shanghai and US News. This confirms what we already knew – we have a homogeneous system, without any elite institutions but where each university meets a reasonable standard. Australia has a more stratified university system – a small group of top 100 universities but a number that don’t meet ranking criteria at all.
The UK has a lower proportion of its universities in the top 800 than either Australia or New Zealand, but, like Australia and unlike NZ, it has a set of elite universities. Unlike Australia, the top UK universities are really elite: top 10.
Some people – especially those working for rankings companies – argue that having at least one elite, top-100 university in each of the major rankings benefits the country's whole system. If that argument holds, then New Zealand is a long way off. On the other hand, I doubt that AUT would agree that it would be better off if only its neighbour, the University of Auckland, were ranked higher (subtext: funded at a higher rate) – any more than Northern Kentucky University gains extra prestige, bigger research grants and extra international students from being in the same country as Harvard and MIT.
Kiwi universities hold their own
Five of New Zealand’s eight universities have increased their overall rankings in the latest QS world university rankings, but Universities New Zealand says more funding is urgently needed to maintain their performance.
New Zealand universities continue to do well in international rankings, with the release of the 2020 QS world rankings showing that all eight universities remain in the world’s top 500.
“Every university has seen an improvement in its academic reputation internationally,” says Universities New Zealand chief executive Chris Whelan. “This is a survey of academics who are asked to list which universities outside their home country rate as leading in their particular field of study.
“Nearly all universities have seen their employer reputation score improve, where employers are asked to rate the quality of graduates from their nearby universities.
“Most New Zealand universities, however, are suffering the effects of a long-term real drop in funding per student, with all but one university showing a drop in staff:student ratios.”
Resource issues in other areas are leading to a drop in citation rates per staff member at five universities.
“Despite this, the combined scores of all New Zealand universities have risen slightly—showing that New Zealand’s university system remains strong, despite challenges around resourcing,” says Chris Whelan.
Five universities have increased their overall ranking this year and three have decreased, but the changes are mostly minor and due as much to variations in how surveys are answered as to long-run resourcing challenges.
“The international education environment remains extraordinarily competitive, however, with many overseas governments spending billions of dollars to get flagship universities into the list of top 100 universities,” says Whelan.
“Without some real growth in funding and resources, New Zealand runs the risk of being squeezed out of having a place on that list.”