The complex ethics of randomized evaluations
When I gave a lecture to the faculty of Fourah Bay College in Freetown, Sierra Leone, this past February, the first question I got was about ethics, but I was not asked whether randomized evaluations are ethical, as I often am when I present in the US or Europe. Instead, a chemistry professor was concerned that the validity of his study on iodine deficiency was being undermined by his inability to get parental permission for urine samples from schoolchildren. Iodine deficiency was potentially a major problem in this community, and understanding the problem was essential to solving it. The children themselves were happy to provide urine samples, and there was no health risk to them, but tracking down their parents and getting consent was very hard, and he risked ending up with a biased sample of respondents. Was it possible to reduce the reliance on parental consent for collecting these samples?
I suggested that he get parental consent in all these cases: people are very sensitive about the collection of bodily fluids, and while there was no risk of physical harm, parents would likely be upset to find out that their child’s urine had been collected without their consent. So we brainstormed ways to achieve higher consent rates at low cost.
But the story illustrates a number of points:
i) Many of the important ethical challenges that those doing RCTs face are not specific to RCTs, but rather apply to anyone collecting data on the ground. They are about how to ask for consent (i.e., determining when oral consent is sufficient and when written consent is needed) or how to store data in a secure way. Alderman, Das, and Rao (2013) have a nice paper about the practical ethical challenges of fieldwork that is part of a series commissioned for The Oxford Handbook of Professional Economic Ethics, edited by George DeMartino and Deirdre McCloskey.
ii) The perception of the ethics of randomized evaluations is often very different in the US and Europe than it is in developing countries. The difference is even starker if we move from a setting like Fourah Bay College to the communities where we work. The poor are used to scarcity. They understand that there are often not enough resources for everyone to benefit from a new initiative. They are used to an NGO building a well in one village but not in another. The idea that the allocation might be based on a lottery in which each community has an equal chance is often regarded as a major improvement on the more typical approach of resources going to communities near the road or to those with connections (a small sketch of such an equal-chance lottery follows this list).
iii) There are complicated trade-offs involved, with real costs and real benefits, a complexity that is often missed in the debate about the ethics of randomized evaluations.
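To make the lottery idea in point ii) concrete, here is a minimal sketch, in Python, of how an evaluator might assign communities to a program by giving each an equal chance rather than allocating by proximity or connections. The community names and the fraction treated are hypothetical and purely illustrative.

```python
import random

# Hypothetical list of eligible communities; the names are illustrative only.
communities = ["Village A", "Village B", "Village C", "Village D",
               "Village E", "Village F", "Village G", "Village H"]

random.seed(2014)            # fix the seed so the draw is reproducible and auditable
random.shuffle(communities)  # every community has an equal chance of every position

n_treated = len(communities) // 2     # e.g., the budget covers half the communities
treatment = communities[:n_treated]   # communities that receive the program first
comparison = communities[n_treated:]  # comparison communities

print("Treatment:", treatment)
print("Comparison:", comparison)
```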
The real ethical issues involved in randomized evaluations are rarely about whether an RCT leads to “denying” someone access to a program (an argument put forward by, for example, Casey Mulligan and ably answered by Jessica Goldberg).
Nor does it make sense to lump all the different types of randomized evaluations into one category. A lottery around an eligibility cutoff raises different issues than a study in which services are phased in randomly, for example. A recent debate on the Development Impact blog made this point (see responses here, here, and here), and in Chapter 4 and in the lecture notes for teaching the ethics of randomized evaluations we systematically go through the ethical issues associated with different forms of randomized evaluation. The notes also provide practical tips for ensuring compliance with IRB requirements, an area where graduate students and new researchers receive little guidance (although I am pleased to report that Deanna Ford from EPoD recently ran a session training graduate students from Harvard and MIT on these issues).
But some of the toughest issues around the ethics of randomized evaluations (and other empirical work) go beyond the current debate, and the answers are not straightforward. Shawn Powers and I have written a much longer piece that goes into these issues in some detail and will form another chapter in The Oxford Handbook of Professional Economic Ethics. Some of the issues we discuss that have no easy answers include:
i) What is the line between practice and research in economic and social research (a question the Belmont Report explicitly refused to address)? Clearly an evaluator cannot always be held responsible for the ethics of the program they evaluate (think of Josh Angrist, who studied the impact of the Vietnam War by comparing draft lottery winners and losers: he could not have sought ethical approval for the draft lottery; the war and the draft were not “research,” only the analysis was). At the other extreme, when a researcher designs a new vaccine and tests it, the risks of the vaccine itself, and not just the risks of taking a survey, should be reviewed. In other words, the program (the vaccine) is considered research. But there is a lot of grey area between these two extremes. What if the program would have gone ahead without the evaluation, but the researcher gives advice to the implementer to help them improve it? Does the entire program then fall under the jurisdiction of the researcher’s IRB? The line between practice and research has important practical implications. In particular, if the program itself is considered research, then informed consent needs to be collected from everyone who participates in the program, not just those on whom data are collected. This is a particular issue in clustered randomized evaluations, where usually only a fraction of those in the program have their data collected. While “community approval” for a clustered randomized trial can be gained at a community meeting, it is not equivalent to the individual consent given as part of data collection. (Some in the medical profession seem to have gotten around the difficult issues in clustered randomized evaluations by designating the medical provider as the “subject” of the study, since the provider is the unit of randomization, so that informed consent is needed only from the provider; because the provider has an ethical obligation to do the best for their patients, the researcher need not worry about ethics with respect to the patient.)
ii) What happens when a western IRB has a different view of ethics than the government or ethics board in the country where the study takes place? Whose position prevails? Specifically, imagine a researcher at a US university or the World Bank is advising a government on a national survey run by its statistical agency, and the researcher will have access to the data from the survey and will use these data in their research. Can the US university tell the government how to run its survey? Can it tell the government how to store the data, or only how the US researcher stores their copy? (We have all seen government data held in insecure places.) Imagine the government plans to collect data on drug stocks and theft of stocks from clinics. The US university wants there to be a consent form under which clinic staff can refuse to answer. The government says that these are government employees and the drugs are government property, so all clinic staff must answer the survey as part of their employment duties. Should the US-based researcher refuse to participate in such a case?
iii) What data can be made public, and when and how is it possible to share data between researchers? Names, addresses, and phone numbers need to be stripped out before data are made public, but what other information could be used to identify people? If we strip out village names and GPS coordinates, the public data lose much of their value to other researchers: it is not possible, for example, to check whether researchers addressed spillover issues appropriately. If other researchers promise to keep the data confidential and follow their own IRB standards, can we share the village name with them? Or does sharing violate our agreement with the people we survey? My colleague at J-PAL, Marc Shotland, is currently working with MIT’s IRB to think through how informed consent language can be adapted so that GPS data can be shared between researchers more easily while preserving confidentiality.
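As a rough illustration of the trade-off described in point iii), here is a minimal sketch, in Python with pandas, of how direct identifiers might be dropped and GPS coordinates coarsened before a survey dataset is released. The column names and values are hypothetical, and coarsening on its own is not a guarantee of anonymity; it simply makes re-identification harder while keeping some spatial information that other researchers could use.

```python
import pandas as pd

# Hypothetical raw survey data; the column names and values are illustrative only.
raw = pd.DataFrame({
    "name":      ["Respondent 1", "Respondent 2"],
    "phone":     ["076-000-0001", "076-000-0002"],
    "village":   ["Village A", "Village B"],
    "gps_lat":   [8.4842, 8.4901],
    "gps_lon":   [-13.2300, -13.2267],
    "iodine_ok": [1, 0],
})

# 1. Drop direct identifiers before any public release.
public = raw.drop(columns=["name", "phone", "village"])

# 2. Coarsen GPS coordinates (rounding to 0.1 degree, roughly 11 km) so that
#    locations are approximate rather than exact.
public["gps_lat"] = public["gps_lat"].round(1)
public["gps_lon"] = public["gps_lon"].round(1)

print(public)
```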
These are only some of the practical challenges that researchers face every day when conducting RCTs and other empirical work that involves data collection. We hope that this contribution will move the debate on the ethics of randomized evaluations and other empirical work onto these issues, which do not necessarily have easy answers and where discussion could generate some light rather than just heat.
Further reading:
- Lecture notes on ethics
- Rachel Glennerster and Shawn Powers's chapter from The Oxford Handbook of Professional Economic Ethics