What research is needed to build the field of legal empowerment?

Last year Namati published a review of all the existing impact evidence on legal empowerment that we could find – the paper covers 199 program evaluations, academic articles, program documents, and case studies on civil society-driven legal empowerment programs.

Because we coded all of these studies – by LE approach, types of impact, issue areas, etc. – a big part of the paper describes which aspects of LE have been studied most and show the strongest evidence of impact, as well as where the evidence is relatively thin.

There isn’t much research on legal empowerment programs that engaged directly with private firms. Few pieces of evidence covered LE programs in “repressive regimes.” While quite a few studies fell into the issue area of land/natural resource rights, little evidence related to the environment or showed environmental outcomes. There were 45 studies covering results of paralegal or citizen advice bureau interventions – more evidence than we expected – but much of it focuses only on the resolution of the immediate case without exploring other potential impacts.

Areas where we recommend further research are noted throughout the paper, but pages 42 to 44 summarize some of the key research gaps (where little research has been done to date) as well as many other questions about LE impact that our methodology couldn’t address.

Which of these research gaps do you see as a priority for the field of legal empowerment to fill?

Does your organization have recent impact evidence or ongoing evaluations that contribute to building the evidence base on legal empowerment in these or other areas?

Lots of impact evidence is already available in Namati’s Resource Database - please help us build the collection by uploading your own evaluations and impact-focused case studies!


Two questions that the Community Land Protection program is grappling with relate to scale:

First, how much territory can one paralegal (or Community Land Mobilizer, in the case of our program) effectively cover? What population? Is there some type of nested structure or hierarchy that would work more effectively – such as having a regional paralegal who supports a network of community-based paralegals?

And second – at least for programs working on land or resource justice issues – what is the most effective level or scale at which to work? For example, in the Community Land Protection Program we want our work to cover as large an area as possible in order to protect the most land; on the other hand, as the land area increases, the population that needs to be meaningfully engaged also increases. How do we determine the ‘optimal scale’? At the same time, research on resource management, institutional analysis, and common-pool resources seems to indicate that the most effective level for different types of resource management rules and institutions varies with the nature of the resources and ecosystems in question. How can we factor considerations like this into the design of a paralegal-based program when choosing the scale at which to work?


Resurrecting an old thread: is the appendix for the study “What Do We Know about Legal Empowerment? Mapping the Evidence” available with the article coding included? I would love to isolate the articles by impact evaluation type for some internal learning. Thanks!


Hi @anon82490444! Thanks for your question.

We haven’t shared the coding publicly before (only the list of evidence here) - but if you have particular impact evaluation types of interest I’d be happy to pull a list together of the articles for each.

Let me know if that would be useful!

Hi Laura,

Thank you so much! I don’t want to trouble you, but I am particularly interested in examples of quantitative impact evaluation for legal empowerment projects.

We want to do an impact evaluation of some of our own projects, but I want to make sure that a qualitative approach wouldn’t just be taking the easy way out. As it stands, I have yet to see any examples of legal empowerment evaluations that do quantitative work well, with enough external validity to make the exercise worthwhile. I would even accuse some of the lower-quality evaluations out there of p-hacking!

Of course I understand the temptation, given funders’ (and the world’s) obsession with quantitative data these days.

Still, it would be good to see whether there are any strong examples of a useful quantitative impact assessment! I tried to pull up the paper’s studies when relevant, but it took me a while going one by one. And as the methodology notes, some of the ‘evidence pieces’ are a bit more loosely defined and