Another university ranking?  Really? Haven’t we got too many of those daft rankings systems already?

Yes. Far too many.  QS, THE, Shanghai, U-Multirank, US News and World Report, Leiden – so many variations on a common theme, too many attempts to do nearly the same thing.

Rankings have dominated the public’s view of universities for the last decade or more.  Everyone inside the system accepts that the results are spurious.  But when the public, when international students, when governments all hunger for the simplicity of league tables, universities opt out of the race for higher rankings at their peril.

But the Times Higher Education (THE) university impact rankings, released for the first time on 3 April, are different.  This new ranking system is a serious attempt to create a more meaningful listing.  THE has tried to measure universities on what matters, rather than on what can be counted.

THE’s impact rankings are aligned to 11 of the UN’s 17 sustainable development goals.  These goals – no poverty, zero hunger, good health and wellbeing, quality education and so on – matter to us as a nation.  These goals matter to us as a species.  They need to matter to a country’s institutions, including universities.  The impact rankings set out to examine the extent to which, in its policies, practices and outcomes, a university seeks to advance those goals.

But there may be a problem here.  There is a tension between what matters and what can be measured.  THE’s new impact rankings assess the alignment of a university’s work and practices – for instance, its research programme – to the sustainable development goals.  THE looks at a university’s policies, rather than the extent to which those policies translate into actions.  For the most part, they rely on the university’s own submissions and data, rather than on verified, independently produced data.  THE scored each university that entered the race on each of the goals, then aggregated each university’s results on its best four.  We can have confidence in THE – confidence that they will have done their assessments well and with integrity.  But only insofar as the data is robust.

For contrast, look at the Leiden rankings.  Leiden represents the gold standard among the rankings systems.  They restrict themselves to robust, verifiable data – but that means they end up restricting themselves to measures of research performance.  They disclose all their data so that people can assess it (whereas the commercial rankers withhold the detailed data so that they can sell it to institutions desperate to advance their standing).  Unlike US News, QS, THE and Shanghai, the experts at Leiden don’t add up unrelated items to create headline-grabbing league tables.  Rather, readers can assess performance on a range of different measures – publication numbers, publications by journal quality, citations in aggregate, citations per paper, citations normalised by field of study, in aggregate and per paper …  But how meaningful is that?

Universities must do research and they must do the best research they can.  But they have other roles too – they build human capital, they prepare people to participate in the labour market, they act as a repository of expertise, they contribute to a country’s culture and to the national debate on political and strategic direction.  Their value to a society goes beyond what is readily quantifiable.  Leiden may have the high ground when it comes to integrity, measurability and openness.  But they tell us about only one of a university’s many roles in society.

That’s why these new THE impact rankings are worthy of our attention. The factors the impact rankings seek to consider may be hard to measure.   But the focus is on what matters.

Of course, the winner of the first heat in this new rankings race was the University of Auckland.  Auckland was number 1 out of 551 universities from 80 countries, having topped two of the 12 lists that make up the rankings (good health and well-being, and partnerships for the goals).  AUT came in at 16th and Massey, the only other NZ entrant, was 38th.  All three results were tremendous.

The countries with the highest average scores were Canada, Ireland and Australia.  The US, usually dominant in university rankings systems, was a more modest performer – its top entrant (North Carolina at Chapel Hill) came 13th, and only eight of its 31 entries featured in the top 100 (compared with 41 US entries in the top 100 of the standard THE rankings and 31 in the QS top 100).  The only university with a top-ten ranking in QS or the standard THE rankings to participate in the impact rankings was University College London, 10th on the QS list but placed in the 101-200 band of the impact rankings.

There is a risk that the decision of the institutions that dominate the best-known ranking systems – Cambridge, MIT, Harvard, Oxford, Stanford and their like – to sit out this first round of the impact rankings will limit the new system’s standing for now.  But it’s likely that, over time, as the methodology settles down and the data becomes more robust, more universities will join in.  That will be positive.

Of course, all rankings systems are a bit daft.  But some are more daft than others.  This new ranking isn’t free from daftness, but, at least, it tries to do something useful.
