But, too often, we try to collect data that isn't there (yet) - or overlook valuable insights from stakeholders that can guide the project towards successful achievement of results.
There Are Four Different Levels Of Information We Collect From End Users Throughout A Project's Life-Cycle.
And other stakeholders - those indirectly affected - can give us insights, too.
Of course, if a project has just started, it's obviously way too early to measure impact - so what can we reasonably expect at different points, and how can we build up a consistent picture of what is happening on the ground and what changes are being realised?
The four levels of information we can look for are:
- Reactions and Feelings
- Learning and Knowledge
- Change in Behaviour
- Results and Impact
Some of these emerge early in the project. Others take longer to appear. But knowing what to expect, and building on the levels as we start to see change, gives us a better, more complete understanding of the project's strengths, weaknesses and benefits.
Effective evaluation needs to address all four levels. However, at earlier stages in the project we may not expect to be able to evaluate the higher levels easily.
Level 1: Reactions And Feelings

End users have to feel that their needs and interests have not only been considered, but are actually reflected in the project. Their main experience of the project - especially in the early stages - is of the project's inputs and activities.
While positive reactions from end users don't necessarily mean your interventions are having any impact, we still need to evaluate whether the work we are doing is appreciated and understood.
Positive reactions can indicate that the plan has considered end users’ immediate needs and has been communicated clearly.
Looking at this level is very important when considering the appropriateness of the methods we are using in the field. End users’ satisfaction with service delivery is essential for project success. This can be investigated even at the earliest stages of a project.
Measuring stakeholder satisfaction can be embedded into our ongoing Monitoring and Evaluation - same target group, same frequency. In a good M&E plan, collecting data on reactions and feelings is integrated with our stakeholder management plan (communicating with our end users) and with service delivery itself. In short, we aim to do all three at once: delivering project services, measuring satisfaction and communicating.
Focus Group Discussions and community meetings clearly work well for this.
With larger target populations, a survey may be an option. But bear in mind that surveys need careful design, don't build relationships and seldom capture the full picture. Selecting key informants for interview can also give us more depth of understanding.
The case study method is also something that can be initiated early on and, if done consistently, will lead to a lot of depth of information.
Level 2: Learning And Knowledge

Gains in learning and knowledge more or less reflect our first level of results - the project outputs which, together, are expected to bring us to the outcome.
Depending on your project, these could cover things such as:
- Understanding of improved hygiene practices
- Awareness of how to prevent STDs
- A change in attitude towards girls' education
- Knowledge of family planning methods and available services
- Knowledge of new agricultural techniques
These are all things that have to take place within the target community before we can see the next level - change in behaviour.
Again, we want to get this information directly from the source - the end users and key stakeholders themselves.
FGD is a great way to explore opinions and knowledge in a group setting; and, again, a survey could be of value. (When measuring attitudes, scaled reactions to statements such as 'Girls should stay at home and help their mothers' where respondents score from 0 - 10 can be useful - as we watch values shift in response to the project. If you're wondering what this is called, it's a Likert Scale, by the way.)
Interviews, if conducted by a skilled interviewer, can tell us a lot - and if you initiated the case study method you can get a second level of information from your selected cases.
Level 3: Change In Behaviour

Learning is one thing - applying is another. And while attitudes may change, it doesn't automatically follow that behaviours and practices will change.
We all know how hard it is to change behaviour - from unused gym memberships to early marriage - change is always hard, however beneficial.
That, of course, is why we have assumptions. And why projects aim not just to create demand (interest in family planning, willingness to send daughters to school) and supply (provision of family planning services, girl-friendly schools) but also remove obstacles - often those of cost or access.
And if we've done all those things - and we are talking now about a much later stage in the project - then, and only then, can we expect to see changes in practice such as:
- Improved hygiene practices in the IDP camp
- Increased use of condoms by commercial sex workers
- More girls enrolled in secondary schools
- More use of family planning services
- New crops planted, new planting methods applied
What we are really measuring here is the project's outcome - what it agreed to deliver in terms of change by its end. And we don't have to wait till the project is completed for this - there are milestones along the way that will tell us that we are progressing towards this key result.
But without positive reactions and feelings we won't see learning. And without learning, we won't see change in behaviour. So remember we are collecting data on more and more levels each time.
What methods can we use here? Certainly the earlier methods should be used (simply for consistency - after all, we are gathering data on all the levels) at this point, so we want to continue to compare things such as reactions and feelings over time. (A project's relevance can change as end users' aspirations and expectations change - nothing is static!)
And self-reporting on behaviour can be useful in sensitive areas we can't directly observe. But in many cases, observation is key - and there is no excuse for not actually observing farmers at work - buying seed, planting, managing, harvesting ...
In other cases, there may be secondary data available - family planning clinic records can tell us about use of services; school registers on girls' attendance, completion and grades.
(So, if you are going to use any extra tools at the outcome level, you'd better remember to have a baseline from the same target population at project inception, too!)
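That comparison against a baseline can be sketched in a few lines (the indicator name and figures below are invented for illustration):

```python
# Hypothetical sketch: comparing an outcome indicator against its baseline.
# The indicator and the figures are invented for illustration only.

def percent_change(baseline, current):
    """Relative change from the baseline value, as a percentage."""
    return (current - baseline) / baseline * 100

girls_enrolled_baseline = 120   # from school registers at project inception
girls_enrolled_midterm = 162    # from the same registers at a milestone

change = percent_change(girls_enrolled_baseline, girls_enrolled_midterm)
print(f"Girls enrolled: {change:+.0f}% vs baseline")
```

Without that inception figure, the milestone number on its own tells us nothing about change - which is exactly why the baseline has to come first.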
Level 4: Results And Impact

Looking for this level of information examines the change we are striving for as a result of changed behaviour. Has mortality decreased? Is maternal health improved? Are we seeing improved livelihoods and better nutrition? Or did things stay just the same, despite our successes at output and outcome level? What's really changed?
Certainly, secondary data can give us figures on some things such as public health. But it's still the stakeholders who know the real story - and can tell you whether it was all worth it.
At this point we are looking at all the levels - reactions and feelings, learning, behaviour and impact. Again, a range of tools may be needed, and this is where the case study method (if you included that in your M&E toolbox) can give us real depth of understanding.
At impact level, MSC (Most Significant Change) is another excellent evaluation tool that can allow us to step outside the comfort of our preset indicators and really listen to stakeholders and their wisdom - letting them tell us what impacts (positive or negative, intended or unintended) the project had for their community.
So, when planning for data collection always take the pulse - listen to communities, their reactions and feelings. Over time, start to measure the results - learning, change in behaviour and, eventually, impact / effectiveness.