RCTs provide independence or objectivity through methodology
At the recent launch event of Running Randomized Evaluations, hosted by DIME, a questioner asked about my comment that because randomized evaluations provide an independent methodology, they give evaluators the freedom to work hand in hand with implementing partners. This observation came from my experience as a member of the Independent Advisory Committee on Development Impact for the UK government’s Department for International Development (DFID). Much of the committee’s discussion focused on increasing the independence of evaluation in line with the international standards of the time. The concern was that if DFID (or any other agency) paid for an evaluation of its own projects, there was a conflict of interest: evaluators would provide glowing reports on projects in order to win future contracts for other evaluations. Complex rules seek to avoid these conflicts: in most agencies, heads of evaluation do not report to those in charge of operations, and individual evaluations are contracted separately from implementation contracts. But this independence comes at the cost of isolating evaluation from the heart of operations.
In contrast, randomized evaluations require very close working relationships between evaluators and implementers, from the design of the project through implementation. Despite this closeness, it is possible for randomized evaluations to provide independent or objective results (I would argue that objectivity is what we are really after). This is because, for the most part, the results of a randomized evaluation are what they are. We set the experiment up, we monitor that the protocol is complied with, we collect the data, and usually only when we see the final data can we tell whether the program worked. At this final stage there is relatively little flexibility for the evaluator to run the analysis in different ways to generate the outcome they want to see. There are, of course, exceptions: in particular, when an experiment has a large number of outcome measures and it is not clear which ones are the most important, we may want to strengthen objectivity by specifying in advance how the data will be analyzed (see Chapter 8 of Running Randomized Evaluations). It is also important to bolster independence by committing in advance to publish the results. But compared with much other evaluation work carried out by development agencies, randomized evaluations produce results that are harder to manipulate and thus reasonably objective. This allows for the kind of close partnership between evaluator and implementer that can be incredibly productive for both sides and that helps ensure evaluations are useful and used.
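To make the multiple-outcomes point concrete, here is a minimal sketch (not from the original post; the sample sizes, the number of outcomes, and the use of a Bonferroni correction are illustrative assumptions). It simulates a two-arm trial in which the program truly has no effect on any of twenty outcomes: without a pre-specified primary outcome or a correction for multiple testing, a naive analysis will typically flag a spurious "significant" result or two.

```python
"""Illustrative sketch: why pre-specifying analysis matters when a
randomized evaluation has many outcome measures. All numbers here are
hypothetical; this is not an analysis from any real evaluation."""

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

n_per_arm = 500   # hypothetical sample size per arm
n_outcomes = 20   # many outcome measures, none truly affected

# Simulate a null program: treatment and control are drawn from the
# same distribution for every outcome.
treatment = rng.normal(size=(n_per_arm, n_outcomes))
control = rng.normal(size=(n_per_arm, n_outcomes))

# One t-test per outcome, as a flexible evaluator might run them.
p_values = np.array([
    stats.ttest_ind(treatment[:, j], control[:, j]).pvalue
    for j in range(n_outcomes)
])

alpha = 0.05
naive_hits = int((p_values < alpha).sum())
bonferroni_hits = int((p_values < alpha / n_outcomes).sum())

print(f"'Significant' outcomes at p<{alpha}, no correction: {naive_hits}")
print(f"Significant outcomes after Bonferroni correction:  {bonferroni_hits}")

# With 20 null outcomes we expect roughly one spurious "hit" at the 5%
# level; committing in advance to a primary outcome, or to a multiple-
# testing correction, removes this degree of freedom from the evaluator.
```

A pre-analysis plan of the sort discussed in Chapter 8 removes exactly this degree of freedom: the evaluator commits to the primary outcomes and the analysis before seeing the data.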
As a footnote for those interested in the agency mechanics of all this, DFID now evaluates a number of its programs with randomized evaluations, which are usually commissioned by operations teams but are often classed as “internal” evaluations and thus not necessarily “independent”. Also, more recent international guidance notes (e.g., from NONIE) take a more nuanced approach to independence.