The annual Transparency International (TI) evaluation of corruption across the world, the Corruption Perceptions Index (CPI), was released on 5 December amidst much fanfare and media attention. Interactive links on the TI website, designed to help interpret the findings, allowed reporters to explore the core question posed in the CPI brochure: ‘How does your country measure up?’. In countries all over the world, these latest assessments of the extent of corruption have generated understandable interest in what are generally presented as new and noteworthy findings.
But what if we told you that we could have explained 97% of the variation between countries’ scores before the report was published? And that we could do the same for any year’s results, months before publication? Unfortunately, our ability to do this is not an indication of any psychic powers on our part (otherwise we’d have won the Euromillions by now). Instead it’s because of a simple property of the CPI: it barely changes at all – ever.
Below is a graph of countries’ scores in 2012 against their scores in 2011. The blue line tracks the data points as they rise and fall, the red line is a straight ‘regression line’ showing a statistical prediction of where the points would fall given the data, and the purple line shows where the points would be if all scores were exactly the same in each year. The lines are so close that these three different ways of looking at the relationship do not indicate any meaningful difference. Put simply, we could have told you in 2011 essentially everything you would learn from the 2012 CPI.
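The 97% figure is just the R² from regressing one year’s scores on the previous year’s. As a minimal sketch of the calculation – using made-up illustrative scores, not the actual CPI data – the coefficient of determination for a simple linear regression can be computed as follows:

```python
def r_squared(x, y):
    """Coefficient of determination (R^2) for a simple linear
    regression of y on x: the squared correlation between them."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    ss_xy = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    ss_xx = sum((a - mean_x) ** 2 for a in x)
    ss_yy = sum((b - mean_y) ** 2 for b in y)
    return (ss_xy ** 2) / (ss_xx * ss_yy)

# Hypothetical scores for the same seven countries in two
# consecutive years (illustrative values only):
scores_2011 = [9.0, 8.0, 6.3, 4.2, 3.0, 2.1, 1.0]
scores_2012 = [9.0, 7.9, 6.4, 4.1, 3.1, 2.0, 1.1]

print(round(r_squared(scores_2011, scores_2012), 3))
```

When, as here, each country’s score hardly moves between years, the R² is close to 1 – which is exactly the pattern the real CPI data show.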
Even if we take a longer-term view, going back to 2001, 89% of the variation between countries’ scores could have been predicted eleven years ago. TI has always insisted that evolutionary changes in the methodology used to compile the index mean we should not seek to make comparisons across time – but it seems that such changes have only a minimal impact on the final outcomes. Whilst some may suggest that only country ranks (rather than scores) should be compared, this is clearly nonsensical when the number of countries included in the index changes from year to year. North Korea’s rank, for example, improved from 182nd in 2011 to 174th in 2012, despite the country remaining joint bottom of the rankings.
The consistency in outcomes is in fact partly an effect of the methodology TI uses, in which scores for a single year feature information that is up to two years old. Yet the resulting predictability is also a feature of the core underlying concept being measured – corruption perceptions. Perceptions can take a very long time to change. An area with a bad reputation may well keep that reputation long after its causes have gone. As a case in point, the city of Nottingham still has a reputation as a high-crime city with a gun problem, despite the very significant reductions in crime seen over the last decade and the rarity of gun crime in the city. By the same token, some countries which are seen as having a ‘corruption problem’ struggle to shake the label, no matter what they do. Andersson and Heywood referred to this as a ‘corruption trap’ in their analysis, published in Political Studies, of how and why perceptions really do matter in a real-world sense. They concluded:
One potential consequence of the prevailing orthodoxy on both measuring and fighting corruption is that those countries most affected may become caught in a vicious circle: as aid becomes increasingly conditional on the adoption of western-defined measures to combat corruption, so those countries with the least resources to implement ‘good governance’ stand to suffer most from the withdrawal of precisely the support they need to stand any realistic chance of tackling corruption.
So should we simply dismiss measurements of corruption that are based on perceptions? Clearly not, as they do reflect the reality of what (some) people think, and it is important that we understand that. Instead, what is more open to question is the manner in which they are used for political ends.
It is noteworthy that in 2011 another anti-corruption NGO, Global Integrity, decided to remove from their website the Global Integrity Index which, in a manner similar to the Corruption Perceptions Index, provided a country ranking. Part of the reason, they said, was that it was:
[A] conscious attempt to reinforce a key belief that we have come to embrace after many years of carrying out this kind of fieldwork: indices rarely change things. Publishing an index is terrific for the publishing organization in that it drives media coverage, headlines, and controversy. We are all for that. They are very effective public relations tools. But a single number for a country stacked up against other countries has not proven, in our experience, to be a particularly effective policy making or advocacy tool. Country rankings are too blunt and generalized to be “actionable” and inform real debate and policy choices. Sure, they can put an issue on the table, but that’s about it.
Corruption, you might think, is an issue that hardly needs to be put on the table any more. But if you want to know the results of the 2013 CPI, then just go to the website and have a look at the results for any previous year: they will be pretty much the same as next year’s.