benchmarking academia

Malcolm Gladwell has an interesting article about college education rankings in the current New Yorker. The problem with college rankings, he argues, is that they try to accomplish too much at the same time: rank very different institutions along multiple dimensions. The effect is that decisions about how to weight the different factors lead to very different rankings, and those decisions end up reflecting the interests or ideological biases of whoever conducts the study and sets the weights.
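To see how much the weighting decision matters, here is a minimal sketch (in Python, with entirely made-up scores for hypothetical schools, not data from any real ranking) in which two equally defensible weightings of the same three criteria produce opposite orderings:

```python
# Hypothetical scores (0-100) for three made-up schools on three criteria.
scores = {
    "School A": {"selectivity": 95, "graduation_rate": 70, "per_student_spending": 60},
    "School B": {"selectivity": 70, "graduation_rate": 90, "per_student_spending": 75},
    "School C": {"selectivity": 60, "graduation_rate": 85, "per_student_spending": 95},
}

def rank(weights):
    """Rank schools by the weighted sum of their criterion scores."""
    totals = {
        name: sum(weights[c] * v for c, v in crit.items())
        for name, crit in scores.items()
    }
    return sorted(totals, key=totals.get, reverse=True)

# Two plausible weightings of the very same criteria.
prestige_weights = {"selectivity": 0.6, "graduation_rate": 0.2, "per_student_spending": 0.2}
outcome_weights  = {"selectivity": 0.1, "graduation_rate": 0.5, "per_student_spending": 0.4}

print(rank(prestige_weights))  # ['School A', 'School B', 'School C']
print(rank(outcome_weights))   # ['School C', 'School B', 'School A']
```

Nothing about the underlying data changes between the two calls; only the weights do, and the "best" school flips from A to C.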

The recent NRC study on the quality of doctoral programs in the US tried various ways to avoid these pitfalls: by asking faculty which criteria they think are most relevant, by allowing users of the results to set their own weightings and explore the data, and by adding error bars to the rankings, a step away from the dubious exercise of assigning a single number to measure quality. But this also had the result that the data are unwieldy and more difficult to interpret (rather than having a particular rank, a department's rank along a certain dimension is an interval, somewhere between 5 and 25, say). There have also been critiques that the release of the data took so long that the results are already out of date, and that some of them are based on inaccurate data or are nonsensical (see here for some discussion, and here if you want to play with the linguistics rankings; Canadian universities, of course, do not figure in these stats).
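The NRC used a more elaborate resampling scheme to arrive at its ranges, so the following is only a toy illustration, again with made-up numbers, of why a rank turns into an interval once the weights themselves are treated as uncertain:

```python
import random

# Made-up per-criterion scores for hypothetical departments.
scores = {
    "Dept A": {"publications": 80, "citations": 75, "funding": 60},
    "Dept B": {"publications": 70, "citations": 85, "funding": 70},
    "Dept C": {"publications": 65, "citations": 70, "funding": 90},
}

def rank_once(rng):
    """Rank departments under one random draw of criterion weights."""
    weights = [rng.random() for _ in range(3)]
    total = sum(weights)
    weights = [w / total for w in weights]  # normalize so the weights sum to 1
    totals = {
        name: sum(w * v for w, v in zip(weights, crit.values()))
        for name, crit in scores.items()
    }
    ordering = sorted(totals, key=totals.get, reverse=True)
    return {name: ordering.index(name) + 1 for name in scores}

rng = random.Random(0)
ranks = {name: [] for name in scores}
for _ in range(1000):
    for name, r in rank_once(rng).items():
        ranks[name].append(r)

# Each department ends up with a range of plausible ranks, not a single number.
for name, rs in ranks.items():
    print(f"{name}: rank between {min(rs)} and {max(rs)}")
```

Since none of the three departments dominates the others on every criterion, each can come out on top under some weighting, and the honest summary of its position is a range rather than a single rank.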

Similar issues arise when assessing the quality of a university more generally, not just as a place to receive a college or doctoral degree, which becomes important when trying to ensure that resources are allocated in the best possible way. A recent article in the New York Review of Books on the Research Assessment Exercise in Britain paints a rather gloomy picture of attempts to quantify academic value. The danger is that such exercises, rather than measuring quality, merely change the way people act in order to meet their superficial criteria.

And yet there needs to be some sort of reality check, not just from public funding sources but also from the universities themselves. But again, how do you compare very different programs and departments along multiple dimensions in a way that does not simply reflect the bias built into the weighting of the various criteria? McGill is currently embarking on a re-evaluation of its place in the world through its strategic reframing initiative, part of which is to develop indicators of performance. The initiative seems to build on the expertise of the same consulting firm that helped shape the British evaluation mechanism, which has some people understandably worried.