Myth Busters M&E Edition


(Mia Schmid-Velasco) #1

There was a popular television show in the US a few years back called Myth Busters where the show’s hosts used elements of the scientific method to test the validity of rumors, myths, and news stories. Unfortunately, to my knowledge the show never attempted to bust some common myths about monitoring and evaluation — I can’t imagine why not, it’s such a thrilling topic! :laughing:




Therefore, in this post I'm going to bust 3 common myths about M&E in part 1 of Myth Busters M&E Edition. I came up with so many myths that a part 2 will be posted sometime next week.

Without further ado, here are 3 common myths about M&E:


Myth 1: M&E = Data

There’s a common belief that monitoring and evaluation is the same as data. As I referred to in a previous post, M&E is core to program strategy. It is so much more than data.

M&E helps improve progress towards and achievement of results by extracting, from past and ongoing activities, relevant information that can subsequently be used as the basis for programmatic fine-tuning, reorientation, and planning. Without M&E, it would be impossible to judge if work was going in the right direction, whether progress and success could be claimed, and how future efforts might be improved.

Data is a tool used in the M&E process described above, but it's not the entirety of it. Data are the specific quantitative and qualitative information or facts that are collected and analyzed to help us understand our impact.

The problem with equating M&E with data is that people often jump to thinking about the tools needed to collect information before systematically developing a plan that outlines program goals, intended outcomes, and the key strategic questions to guide our efforts. Data will not tell you much if you don't know what you need to know.

Although I just debunked the myth that M&E is the same as data, the next two myths focus exclusively on data itself because I have found that data collection is often fraught with misconceptions about what is considered important and valid.



Myth 2: More data is better

Nutritionists often recommend a diet of 'everything in moderation'. The same principle applies to data collection: more data is not necessarily better.

[Comic: "Too much data"]

The comic above depicts a common mentality — collect it just in case we need it. This approach is actually detrimental to our work. An excess of data often leads to:

  • confusion about what data to look at and how we’re using that information
  • wasted time and resources collecting information that is never or rarely used

We should not be collecting data that we do not use. Our data collection should be guided by information needs that really matter right now, rather than data collection for its own sake.

There is a movement towards data minimalism that is focused on purposeful data collection to facilitate decision-making. Rather than spending our time collecting hundreds of pieces of data, our time is better spent at the front-end to:

  1. Reflect on what we really need to know — this requires that we put on our investigative hats and ask ourselves tough questions. Do we really need to know this? How will we use it? What resources are required to collect it?
  2. Use the CART Principles — I love these principles developed as part of the Goldilocks: Right-Fit M&E initiative. Make every effort to ensure your data collection is:
  • Credible – Collect high quality data and analyze them accurately.
  • Actionable – Commit to act on the data you collect.
  • Responsible – Ensure the benefits of data collection outweigh the costs.
  • Transportable – Collect data that generate knowledge for other programs.



Myth 3: Quantitative data is more valid than Qualitative data

There is a prevailing belief that numbers are more important, valuable, and useful than narrative information. Quantitative and Qualitative data are different, but one is not more important or more credible than the other.

First, let’s look at how they are different:

  • Quantitative data are often represented as numbers and help to answer questions such as, “How much…?”, “How many…?”, and “How frequent…?”. Quantitative data collection is best used for understanding what is happening in a program.
  • Qualitative data are usually in the form of text or narrative and help to answer questions such as, “Why are some participants more active in the program?” and “How have participants’ understanding of the law changed since the program started?”. Qualitative data collection methods are more appropriate for understanding people’s attitudes, behaviors, beliefs, opinions, experiences, and priorities.

Rather than thinking you need to choose between either quantitative or qualitative data, it is actually quite powerful to use numbers and narrative together as described here. Collecting both quantitative and qualitative data bolsters our understanding of our impact, helping us answer our questions to know what happened, why it happened, and how it happened.

@ws_learning


(Helena Chongo) #2

Hi Mia, this is very helpful information.

Thanks for sharing with the team.

Helena


(Mia Schmid-Velasco) #3

As promised, I’m back with Part 2 of Myth Busters M&E Edition. Here are 3 more common myths about M&E:

Myth 4: E can be done without M (AKA E is more important than M)

There is a reason why it’s Monitoring and Evaluation, and not the other way around. In my experience, too much emphasis is put on Evaluation and too little is put on Monitoring. In order to do an evaluation well, you need to have also been monitoring your project or program well.

So what is the difference between monitoring and evaluation? And, why do we need both?

I find these two definitions of monitoring and evaluation to be helpful:

Monitoring is the ongoing tracking you do to understand the extent to which your program is operating consistent with how you intended it to.

Evaluation is a systematic inquiry that is done at key moments to inform decision-making and improve programs. Systematic implies that the evaluation is a thoughtful process of asking critical questions, collecting appropriate information, and then analyzing and interpreting the information for a specific use and purpose.

For more M&E definitions, see this glossary.

Monitoring and evaluation are synergistic. Monitoring information is a necessary but not sufficient input to conduct an evaluation. While monitoring information can be collected and used for ongoing management purposes, it can also help to identify potential problems or issues requiring more detailed investigation via an evaluation. More on this here.

To summarize, Monitoring is ongoing and focuses on what is happening, while Evaluation is conducted at specific points in time to assess how well the program performed and what difference it made. Here’s a nice two-pager on the differences between M and E.


Myth 5: M&E is the same as Research

I often hear people refer to monitoring and evaluation activities as research and vice versa. M&E and Research are similar in some ways, but are also quite different. I want to dispel the myth that M&E is the same as Research to reduce confusion and to get us all on the same page when it comes to what we do (and do not do) in M&E.

Research is concerned with creating generalizable knowledge for a field, while Evaluation is concerned with producing specific, applied knowledge about the effectiveness of a particular program or model.

Evaluation and Research use similar methods for collecting data, but their purposes and intended uses are quite distinct and we should be cautious not to confuse or conflate the two.

I like the way Michael Quinn Patton (a leader in the evaluation field) describes the difference between evaluation and research:

Evaluation generates improvements, judgments, and actionable learning about programs. Research generates knowledge about how the world works and why it works as it does.

He created great evaluation flash cards, with card #5 focused on evaluation vs. research. Check out the flash cards here. The flash cards include this helpful table outlining some key differences:

[Table: Research vs. Evaluation]


So let’s dispense with referring to our work as ‘research’ once and for all — unless it meets the criteria listed above.


Myth 6: Research is more important than M&E

Now that we know how Research and Evaluation are different, I also want to challenge the belief that research is more important. I find this belief not only inaccurate but also damaging to the perceived value of M&E.

M&E is absolutely necessary for organizations to understand if their program is being implemented as planned and to know what impact it is having. Research is not necessary for all organizations — we can draw on work done by universities, think tanks, and research institutions to inform what we do.

In my experience, people tend to hold research up on a pedestal as something we should be striving for, while treating M&E as a burden to be avoided. Collecting real-time data that enables you to monitor how well your program is doing can be quite powerful. If used meaningfully, what we do under the umbrella of “M&E” has the potential to create lasting impacts on the way programs engage with data, think critically about improvements, and ultimately use that information to make a greater difference in the world. I’d say that’s something we should all be striving for.

@ws_learning


(David Arach) #4

So interesting. For sure, this is a myth buster!!!

David


(Tobias Eigen) #5

Yes! Lists are fun to read and help to focus the mind. :slight_smile: Which one is your favorite, @davidarach?

Personally, I like Myth 4 best - it’s a reminder to build learning into projects from the outset.


