When Dave Evans at the World Bank invited me to participate in a “smackdown” between evaluators and operations experts who worked on Community Driven Development (CDD), I was skeptical. One of the great advantages of working on randomized evaluations is the close and cooperative way in which evaluators and implementers work together (see my previous blog). The idea of deliberately highlighting the differences between the two groups seemed wrong. But Dave convinced me that a format set up as a mock fight would attract more interest, and he was right: the two sides sat in the middle of a standing-room-only crowd, which added to the atmosphere of a boxing ring.
Team Operations was Daniel Owen, with whom I had worked on a randomized impact evaluation of CDD in Sierra Leone, and Susan Wong, who is an accomplished evaluator of CDD as well as a manager of CDD programs in Asia and Africa. On the research side (“Team Impact Evaluation”) were Biju Rao, Macartan Humphreys, and myself. Biju’s summary of the evidence on participation in development exposed the lack of rigorous evidence on the impact of CDD and caused a stir at the Bank when its first draft circulated in 2004/5. To their credit, people in operations, like Dan Owen, responded by supporting a number of rigorous impact evaluations of CDD, including three randomized evaluations in Sierra Leone, Liberia, and the Democratic Republic of Congo (Macartan is a coauthor on the latter two studies). Other evaluations, like those by Susan Wong, tested alternative approaches to CDD. As a result, we now know a lot more about CDD, much of which was discussed at the smackdown.
I would highlight a few points which emerged:
Delivering the goods: Macartan and I noted how the Sierra Leone and DRC programs we evaluated were effective in bringing public goods to poor communities despite poorly functioning postwar government systems.
Compared to what? Susan pointed out that it was important to compare CDD to the next best alternative. When the Bank went into dysfunctional postwar environments, CDD was often the only way to get money to communities to help them rebuild quickly. Even outside postwar environments, there were often no functioning government structures at the very local level. If donors wanted to work at this level, they would inevitably have to include some institution-building component of the type often found (in varying intensities) in CDD projects.
No spillovers: Macartan and I emphasized that in our studies, while the programs themselves had participation from women and minorities, CDD was not successful in making local decision-making processes more open and transparent: the inclusive decision making stayed within the project. Dan argued that with more time and better measures of decision making, the effects would spill over. My view is that we have developed some good measures of participatory decision making, with the Sierra Leone and DRC studies good examples of this (see Chapter 5 of Running Randomized Evaluations).
What we don’t know: One important gap in the evidence is the extent to which the emphasis on participatory decision making within the project is the reason these projects were successful in delivering the goods.
The audience was asked to vote at the beginning and end of the discussion on two questions: i) how can operations better foster CDD (based on evaluation evidence); and ii) how can research be improved to make it more useful for operations? The results of the votes are below.