The New Zealand Initiative’s latest attempt to influence education policy in Aotearoa is an egregiously complicated ‘solution’ to a problem that doesn’t really exist.

As has become their modus operandi over the past few years, there is some useful background provided in ‘Score’, which shows that some NCEA qualifications are not the same as others, and that some students’ NCEAs may lack the sort of coherence, or even sufficient foundation, to point them towards a clear post-school pathway. This is not news to those within the education system.

But the ‘solution’ they propose to this issue misses the mark.

It does not improve the coherence of programmes simply to identify which standards are more likely to be taken by more able students. Many ‘difficult’ standards may still form an incoherent whole, while a collection of easier standards may well form a coherent basis from which to start a student’s working or tertiary career.

Another problem is that NZI’s equation is retrospective: as described, it will have no real benefit for the students themselves, because they will find out their relative rank only after they have already completed all their assessments for the year. It is also unpredictable, because there is considerable fluctuation in the specific standards offered by schools and teachers, in the year-on-year grade distribution of individual standards, in the nature of the cohort sitting a particular standard each year, and in the delivery and assessment of standards from one year to the next. There would be almost no relativity between the WRPI scores of students in different years. The measure also fails to account for the fact that both students and teachers look at the distribution of results in a given standard over the previous few years when deciding whether to attempt or offer it, which can change the nature of the cohort attempting a standard quite radically.

The NZ Initiative does concede these problems on p.25, where they say: “One important deficiency remains. If students of different ability select different courses, WRPI and other measures would mask very real differences in performance.”

In fact, the WRPI echoes the problems of scaling that underpinned the qualifications we had last century, which upheld a hierarchy of subjects by scaling students up in those favoured by the academic world while scaling the same students down in subjects considered non-academic. What scaling did provide was a nice simple (simplistic?) set of numbers by which we could rank students, and which we also pretended meant something between years. Perhaps it is a yearning for these olden days that is hidden in NZI’s algebraic equations.

Something the New Zealand Initiative do get right is acknowledging that a key requirement of a good qualification system is that it is easy for parents, students, teachers and employers to understand, both in terms of the grades and qualification outcomes it reports and in terms of the information available to those selecting from the assessment options open to them.

An algebraic formula like the WRPI is unlikely to give them a simple answer to why two students with similar credit accumulation end up with scores of 35 and 20.

Addressing differences in difficulty between standards needs a front-loaded solution that is clear to students when they are selecting options. If there were a clear track into, say, medicine, with defined learning programmes and a prerequisite set of achievement standards expected as foundation skills for entry, that would be much more use to students and schools than this proposal.

Jack Boyle is president of the Post-Primary Teachers’ Association.



  1. Jack’s right that we point to that deficiency in our measure, but we also point to a pretty easy way of fixing it – just one that we didn’t have a chance to get to in the datalab this time.

    The solution’s as follows. For each standard, get the average grade received by the students taking that standard for every other standard they take – excluding the one we’re looking at. Then compare it with the grades awarded in the standard in question. If on average students have the same score in this standard as they do in others, then the standard has an average difficulty. If on average students do better in this standard than they do in others, then it’s rated as easier. And if on average the scores awarded in this standard are lower than the average score students taking the standard get on their other standards, then it’s hard. You use that to create a difficulty score to use in weighting the percentile scores in each standard in the measure.
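The comparison described in this comment can be sketched in a few lines of Python. This is only an illustration under assumed conditions: the student names, standard names and the 0–4 grade scale (Not Achieved to Excellence) are invented for the example, not drawn from the report.

```python
from statistics import mean

# Hypothetical toy data: each student's grade (0-4 scale) per standard.
# All names and values are invented for illustration.
results = {
    "ana":  {"maths": 3, "english": 2, "stats": 4},
    "ben":  {"maths": 1, "english": 2, "stats": 3},
    "cara": {"maths": 2, "english": 3, "stats": 4},
}

def difficulty(standard, results):
    """Average grade students get in their OTHER standards, minus their
    grade in this one: positive => harder than average, negative => easier."""
    gaps = []
    for grades in results.values():
        if standard in grades and len(grades) > 1:
            others = [g for s, g in grades.items() if s != standard]
            gaps.append(mean(others) - grades[standard])
    return mean(gaps) if gaps else 0.0

# On this toy data, maths comes out hard (+1.0) and stats easy (-1.5).
for s in ["maths", "english", "stats"]:
    print(s, round(difficulty(s, results), 2))
```

The sign of the score then feeds the weighting step the comment describes: standards where students underperform their own averages get weighted up, and vice versa.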

