Disclosing the IDA Country-Performance Ratings
The World Bank manages a very large pool of donated development resources from the International Development Association (IDA). To allocate these resources among countries, and also to decide whether they should be given out as loans or grants, it uses a methodology that reviews many different variables and culminates in a product known as the CPIA, the Country Policy and Institutional Assessment. These assessments had previously been disclosed in a somewhat fuzzy way (with countries classified by quintiles), but now there was strong pressure from most of the donors for the figures to be disclosed in much more detail—I guess to reward the good countries and shame the others.
Nothing wrong with that, except if you thought that those performance indexes, though probably a good approximation of reality, did not reflect all the difficult aspects of development and could also, if taken on their own and erroneously interpreted, make development more difficult.
I had had enough of organizations publishing indexes that were later appropriated by the selfish interests of others and made to mean more than they really do, without the originator responding clearly enough—flattered by the attention given to its index, it did not want to hurt a member of its fan club. And so the table was set to ignite my devil’s-advocate genes.
A first round of comments
As so many of us invest much of our hopes for a better tomorrow in achieving a more transparent society, it is important that we always remind ourselves that there is nothing so nontransparent as a half-truth, said at the wrong time, at the wrong place, and not in a fully comprehensible way.
But, in a world that frequently demands that information be transmitted through easily digestible means, such as oversimplified rankings, we would rather have the World Bank (WB) doing it than any of the many not-accountable-to-anyone rating agents that frequently pursue undisclosed agendas. That said, it is not an easy decision for the WB to get into the rating game, and much care is needed.
It is by definition an impossible task to compress adequately all the very complex realities of a country into a simple index or rating, and in doing so it is absolutely certain that many mistakes will be made. On the other hand, one also needs a simple, comprehensive, and understandable tool to be able to convey results powerfully, and a simple index or rating can do just that. The balancing of all the various elements and contradictions needs to be done with much concern and care. If a minor agent such as an NGO were to get it wrong, there might not be much to it; but if it is the knowledge Bank that puts forward the imprecision, it could be leveraged into extremely negative consequences.
If we were to use only the term “IDA Resource Allocation Index,” this would make the whole disclosure more transparent and honest, as it would indicate that, when monitoring results and performance, it takes two to tango, the evaluated and the evaluator, and either, or both, could make mistakes. We should ban, forever, the use of the very arrogant and error-prone term “Country Performance Ratings.”
Friends, it is just because we believe in disclosure that we should strive to find the right disclosure.
Development alchemy
Somewhere in the documentation the “Weighting Procedure” is described as taking four parts of CPIA and one part of ARPP (I don’t remember what ARPP stands for, but I guess that is not so important either) and multiplying this by a “governance factor that is calculated by dividing the average rating of these seven criteria by 3.5 (the midpoint of the 1–6 rating scale) and applying an exponent of 1.5 to this ratio.” Friends, whatever it means, this sounds just too much like a “Potteresque” development alchemy that even a studious Hermione would find difficult to understand.
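To see just how alchemical the recipe is, it can be transcribed almost word for word into a few lines of code. This is only a sketch of the procedure as quoted: the function and argument names are mine, and reading “four parts of CPIA and one part of ARPP” as a 4:1 weighted average is an assumption.

```python
def country_performance_rating(cpia, arpp, governance_criteria):
    """Sketch of the quoted "Weighting Procedure" (illustrative only).

    cpia, arpp          -- overall scores on the 1-6 rating scale
    governance_criteria -- the seven governance ratings, each 1-6
    """
    # "Four parts of CPIA and one part of ARPP", read as a weighted average
    base = (4 * cpia + 1 * arpp) / 5
    # Divide the average of the governance criteria by 3.5 (the midpoint
    # of the 1-6 scale) and apply an exponent of 1.5 to that ratio
    avg_gov = sum(governance_criteria) / len(governance_criteria)
    governance_factor = (avg_gov / 3.5) ** 1.5
    return base * governance_factor
```

Written out this way, at least one property becomes visible: a country sitting exactly at the midpoint on all seven governance criteria gets a factor of exactly 1 and keeps its base score unchanged, while everything else hinges on that unexplained 1.5 exponent.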
Excuse the jest, my colleagues, but we could encounter some serious risks to our reputation. For instance, the Credit-Rating Agencies—at least with respect to sovereign credits—refuse to describe their methodology in too much detail, and they officially justify this refusal by stating that they do not wish the countries to know how to get a great rating—although I truly suspect that they just don’t want to be called on their bluff. In our case we might be expected to come up with a more substantial explanation than the mixing of the potion above—and what if we can’t?
Confusion
I have heard that “ratings should depend on actual policies, rather than intentions,” which sounds quite right, especially remembering that the road to hell is paved with good intentions. However, I have also heard that “the criteria are focused on policies and institutional arrangements, the key elements that are within the country’s control, rather than on actual outcomes.” Simplifying both statements arithmetically, it would seem that we could end up with a rating system that depends on policies rather than results. This, although perhaps quite acceptable to a private charity, seems a bit out of place for the World Bank. But then again, I might just be confused.
About the Panel of Experts
Dear Colleagues,
With respect to the Terms of Reference (TR) for the Panel of Experts—I would like to make the following brief comments.
To begin with, I believe we should avoid qualifying the panel as a “Panel of Experts,” as clearly the whole issue of rating, in this case of development adequacy, is governed by subjectivities and not by that kind of know-how that permits anyone to represent himself or herself as an “expert” without being deemed presumptuous. A reference to “experts” may also convey an unearned sense of precision that might backfire.
Reading through our methodology manual, we understand that the Panel is to review whether the criteria used in the ranking provide an adequate basis to assess the quality of policies and institutions; and since the quality referred to means the degree to which a country’s framework is conducive to growth and poverty reduction, then, in all logic, it would seem that the Panel is supposed to review the whole effectiveness of IDA’s current development strategy. Although this could perhaps be a welcome exercise, we find it hard, if not again presumptuous, to believe that it could be done in just 48 hours in March, unless, of course, the Panel is simply called on to apply freely any preconceptions of its own.
Frankly, when I thought about a panel on this matter, I never visualized a team able to reconstruct, or even evaluate, a methodology and a ranking that have been developed over many years; for that purpose we already have other, more appropriate procedures that we are already paying for and that work full-time.
No, I thought we were talking of a Panel that could advise us on how to design and communicate and control the interpretations of the rankings, so as to maximize their potential benefits and minimize the risks, for all, of this extremely hazardous activity.
The questions for such a panel are almost infinite; for instance, how do we make sure that the ratings are not used for any wrong purpose, which could even conspire against our development mission? Is the ranking a proprietary good to be controlled and marketed? Do we allow any media to report on the ratings out of context, or perhaps even redesign their presentation, for instance by cutting the axis and thus seeming to have the WB endorsing erroneous interpretations? What are the market implications of putting the whole credibility of the WB behind a rating? Will the other raters only follow us? Will this only reinforce the cyclical nature of capital flows creating “the Mother of all systemic risks”? If it does not work, can we pull out?
With respect to the list of the named “experts”: not knowing them, I can neither endorse nor object. Nonetheless, I sincerely hope that the list does not include any ranking fanatics, but rather the well-intentioned, healthy ranking skeptics who, aware of the need for rankings, are also conscious of the risks. In other words, I hope the panel does not end up being a panel of “formula builders” but a panel of people who know why, when, and how to use the ranking formulas.
Some follow up comments
It is symptomatic of the many difficulties with rating that the request by the Panel for more analytic work in relation to the weighting system was answered by the Bank with a very basic equal weighting, backed by some correlation analysis. Equal weighting could very well be the best answer, at least in simplicity, but it is also a very normal and transparent way of admitting to not-having-a-clue. In the area of the CPIA ratings, we are indeed walking on very loose sand, and we need to be very careful, come disclosure time.
When thinking about all the suggested CPIA criteria, I would have many fewer problems with fully disclosing the exact information on each of these fifteen criteria, than with having them all stirred into a murky and totally nontransparent cocktail.
I saw that some Panel members noted that once the disclosures occur, “The Bank should closely monitor any potentially adverse impacts on borrowers, such as misinterpretation of ratings by financial markets, any impact on foreign direct investment, and/or the abuse of ratings for political gains.” This is all well said, but in today’s world of rapid and active communication, those are the things you analyze before you communicate, not after. We wonder whether now is not the appropriate time to leave out the economists and call in some experts on communication.
Let us never forget that the rankings, unfortunately, say almost as much about the one doing the ranking as they do about the one being ranked.
Just as an example of rankings gone haywire, let me refer to the Globalization Index published some months ago by A. T. Kearney and Foreign Policy. Their ranking method assigns globalization points to countries by measuring the number of internet users, hosts, and secure servers, without even bothering about what content is transmitted over them and similar media. Who is more global: a family in a rich country with ten televisions for ten local sitcoms, or a family in a poor country with only one TV, who mostly have to watch foreign programming, and many of whose members are working abroad?
Let me again illustrate three risks that I believe have not been sufficiently considered.
· When discussing Argentina we heard comments as to how it changed (almost overnight) from being a Golden Poster Boy into an Ugly Duckling. This raises the question of what could have happened to the reputation of the Bank if our ratings had officially indicated very good results over a long period and then, suddenly, something went wrong. Would creditors sue us? Would credit rating agencies sue us?
· There is always a risk present when good ratings are thrown in the face of crude realities of poverty, since the hope of food for today is not easily appeased by the promise of food in a year or a decade.
· The risk that bad ratings could in fact be turned against the interests of furthering development, for instance when bad ratings are brought forward by bad governments as evidence of their ability to defend their “true” sovereignty.
In the very little which our Panel of Experts (mercifully not eminent persons) mentions about disclosure, it urges the preparation of adequate information to help the public interpret the ratings. This recommendation stands, at least in terms of transparency, in stark contrast with the other recommendation “that the write-ups that accompany the ratings should not be disclosed—as this might discourage candid assessment by staff.” We do not believe that in this respect you should be able to have your cake and eat it too; if you are prepared to disclose a rating, you must be willing to disclose fully how you got it.
About our own accountability
Of course I agree with the recommendation that the disclosures of any results should always include a statement indicating that the ratings are the product of “staff judgment.” That said, and given the importance of checks and balances, and accountability, we would like to know a little about the foreseen consequences for staff. This is no minor issue, as their judgments, if wrong, and even if right, could foreseeably bring down governments and also stoke anti–World Bank sentiments. I need to bring this up, as the debate reminded me of the note I wrote about the US GAO Report on the IMF.
What if they rank us?
Finally, as we see in the documents on the IDA disclosure policy that this exercise generates a normal distribution curve where we can point out the best and the worst performers, I cannot but reflect on the fact that within the Bank, for its own internal evaluation purposes, it seems impossible to gain acceptance for this sort of useful ranking tool. In fact, in most internal evaluations that are presented to the Board, we have not even reached name disclosure by quintiles and have been basically limited to a binary grading (satisfactory or not), mostly without really even knowing who belongs to either of those groups. Just think about our reactions if some NGO were to start ranking the performance of our own country teams and disclosing the results on the web, with three-decimal precision, arguing that, given the utmost importance of the WB’s poverty-fighting mission, this should be quite helpful.