Paul Farmer on what an ethical research agenda looks like

I like Dr Farmer’s argument for less emphasis on randomized controlled trials, and for more emphasis on “operations research” and observational studies. I think the arguments apply equally to the legal empowerment movement.

Would love to hear your thoughts.



Apologies for coming late to the game, but this is an issue that (you know) I've been interested in for quite a while. At the risk of being branded a Randomista, I worry that this sort of argument is (or can be) used to justify less-than-rigorous studies when more rigorous ones can be done. I feel like Dr Farmer's arguments against randomized trials are really arguments against poorly done randomized trials. Yes, if you're investigating the impact of a program on a particular community, it doesn't make much sense to run a randomized study on a population that doesn't look much like your target community; in that sense, it would indeed lose context and power.

But my experience has been that, in the vast majority of program trials, there are far fewer units of "program" available than the available "population" could absorb. In that context, it is almost always possible to say: "We think program X will impact metrics A, B and C in population Y, and we have the resources to provide Z units of the program during this trial. So let's randomly assign those Z units within population Y, select a similar control group from Y (or at least one large enough for statistical power), and then measure A, B and C in both the treatment and control groups to see what impact the program really has."

I'm a bit cynical, but I feel like a good bit of the push-back against randomized trials is that (a) they can be intellectually hard to design, and (b) they make it more difficult to retrospectively justify a program that didn't actually achieve its objective. They also tie implementers' hands (in a good way, if good research is the goal) so that the program can't be constantly tweaked during the trial; constant tweaking ends up telling us not how effective the program really is, but how effective the program-tweakers are. And the tweaked version isn't what will be made available to others as the program is ramped up (or expanded outside the original treatment area). History is littered with development programs that seemed effective under some observational or case-study analysis (or, more cynically, could be argued to be), but that turned out not to be.
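To make that randomization design concrete, here's a minimal sketch in Python. Everything in it is hypothetical (the population size, the metric names A/B/C, the value of Z_UNITS); it just shows the mechanics of randomly assigning scarce program slots within a population and comparing group means on the chosen metrics.

```python
import random
import statistics

# Purely illustrative: Z available program units randomly assigned
# within population Y, with everyone else forming the control group.
# All names and numbers here are made up for the example.

random.seed(42)  # reproducible assignment

# Population Y: each person carries baseline values of metrics A, B, C.
population_Y = [
    {"id": i,
     "A": random.gauss(50, 10),
     "B": random.gauss(100, 20),
     "C": random.gauss(0.5, 0.1)}
    for i in range(1000)
]

Z_UNITS = 200  # we can only deliver Z units of "program" in this trial

# Random assignment of the scarce program slots.
treated_ids = set(random.sample([p["id"] for p in population_Y], Z_UNITS))
treatment = [p for p in population_Y if p["id"] in treated_ids]
control = [p for p in population_Y if p["id"] not in treated_ids]

def mean_metric(group, metric):
    return statistics.mean(p[metric] for p in group)

# In a real trial you would re-measure A, B and C after the program runs;
# here we only check that randomization balanced the baseline means.
for metric in ("A", "B", "C"):
    diff = mean_metric(treatment, metric) - mean_metric(control, metric)
    print(f"Metric {metric}: treatment minus control (baseline) = {diff:+.3f}")
```

The point of the random draw is that, before the program runs, the treatment and control groups differ only by chance, so any later divergence in A, B and C can be credibly attributed to the program rather than to who happened to be selected.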

That being said, I think there's definitely a place for case-study analysis in tandem with randomized trials: it can help inform our understanding of mechanisms of effect and give us answers about how we might improve our programs, why they might not be working as expected, or how generalizable they might be. And if the academic community has come down so hard against case studies that the results can't get published, that is obviously a bad thing. But I don't think they're an alternative to rigorous randomized studies for determining the true impact of a program, particularly if the objective is to decide whether significant amounts of limited resources should be applied to a (potentially) effective pilot program.
