
TOPIC 3: PROCESS OF MONITORING AND EVALUATION

Sub-Topics
 Difference between Monitoring and Evaluation
 Monitoring and Evaluation Methods
 Monitoring and Evaluation Tools
 Internal Monitoring and Evaluation
 Participatory Monitoring and Evaluation
 Qualities of a good Evaluator
 Response to Monitoring and Evaluation Results
 Communication of Monitoring and Evaluation Results
DIFFERENCE BETWEEN MONITORING AND EVALUATION

MONITORING AND EVALUATION METHODS
1. Monitoring and Evaluation Systems
Uses performance indicators to measure progress, particularly actual results against expected results.
2. Extant Reports and Documents
Existing documentation, including quantitative and descriptive information about the initiative, its outputs and outcomes, such as documentation from capacity development activities, donor reports and other evidence.
3. Questionnaires
Provides a standardized approach to obtaining information on a wide range of topics from a large number or diversity of stakeholders (usually employing sampling techniques), in order to obtain information on their attitudes, beliefs, opinions, perceptions, level of satisfaction, etc. concerning the operations, inputs, outputs and contextual factors of a project/programme.
4. Interviews
Solicit person-to-person responses to predetermined questions designed to obtain in-depth information about a person's impressions or experiences, or to learn more about their answers to questionnaires or surveys.
5. On-Site Observation
Entails use of a detailed observation form to record accurate information on-site about how a programme operates (ongoing activities, processes, discussions, social interactions and observable results as directly observed during the course of an initiative).
6. Group Interviews (FGDs)
A small group (6 to 8 people) is interviewed together to explore in-depth stakeholder opinions, similar or divergent points of view, or judgements about a development initiative or policy, as well as information about their behaviours, understanding and perceptions of an initiative, or to collect information on tangible and non-tangible changes resulting from an initiative.
7. Key Informants
Qualitative in-depth interviews, often one-on-one, with a wide range of stakeholders who have first-hand knowledge about the initiative operations and context. These community experts can provide particular knowledge and understanding of problems and recommend solutions
8. Expert Panels
A peer review, or reference group, composed of external experts who provide input on technical or other substantive topics covered by the evaluation.
9. Case Studies
Involve comprehensive examination through cross-comparison of cases to obtain in-depth information, with the goal of fully understanding the operational dynamics, activities, outputs, outcomes and interactions of a development project or programme.
MONITORING AND EVALUATION TOOLS
Techniques for monitoring and evaluating a project vary from one situation to another. Whichever technique is used, the main aim is to measure the efficiency and effectiveness of the system and the competencies of the individuals overseeing it, and to ensure conformity to project goals, objectives and proposals. The decision on which tools to use depends on the kind of information being gathered.
The information is classified into:
 Quantitative data
 Qualitative data
Quantitative data
Quantitative information is obtained by counting or measuring.
The data is usually presented in whole numbers, fractions, decimal points, percentages, ratios, proportions etc.
Examples:
How many students passed the Module II examinations?
What percentage of the class passed?
What was the ratio of men to women?
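The three example questions above can be answered with simple arithmetic; a minimal sketch, using made-up class figures (the counts below are illustrative assumptions, not data from these notes):

```python
# Illustrative class figures (assumed numbers, not data from the notes)
total_students = 40
passed = 30
men, women = 16, 24

pass_percentage = passed / total_students * 100  # percentage of the class that passed
ratio_men_to_women = men / women                 # ratio of men to women

print(f"Students passed: {passed} of {total_students}")
print(f"Percentage passed: {pass_percentage:.0f}%")
print(f"Ratio of men to women: {men}:{women} ({ratio_men_to_women:.2f})")
```

Note how each answer is a whole number, percentage or ratio, matching the forms of quantitative data listed above.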
Qualitative data
Qualitative data tells us how people feel about a situation, how things are done and how people behave.
The data is obtained by observing and interpreting.
Some of the tools used in data collection include;
 Questionnaires
 Interviewer's guide
 Observation guide
Questionnaires
 A questionnaire is a tool used for gathering information from individuals or organizations
 They can be used to measure opinions, attitudes, behaviour and perceptions on specific topics
 Well-designed questionnaires are essential for collecting valid and reliable data
 Questionnaire items refer to each question that appears in a questionnaire
Format of a questionnaire item
 The items can either be open ended or close ended
 Open ended items are used to gather qualitative data and the respondents are asked to respond using their own words
 The information gathered is usually bulky and requires to be categorized in terms of patterns
 Closed-ended items require the respondents to select an answer from a list of provided alternatives
 The options could take the form of a simple list or a scale (Strongly Agree, Agree, Disagree, Strongly Disagree)
Wording and order of items in a questionnaire
 It should be easy to read and understand
 Each item should explore one piece of information only
 All items should be related to the others on the questionnaire
 Items should be free from bias
 The information asked for should be precise and solicit accurate answers
How to design a questionnaire
1. Research questionnaire items
2. Identify the topics or areas you want to research in relation to the objectives of the project
 Find out if others have undertaken the same research before, i.e. evaluation research. Note that projects and evaluation assignments require a lot of reading. When undertaking an evaluation the client provides:
 Project plan – contains the project objectives, the scope and the results framework. The results framework contains the project indicators, set targets, baseline, timeline and budget
 Project operations manual – contains details of all activities of the project
 Project proposals – legal framework
 Implementation status reports

3. Determine the format of the items
Identify the items to be closed-ended and develop response options or scales for them
Identify the items to be open-ended and decide how to analyse the responses
4. Test and refine the questionnaire (piloting)
One way of testing is to administer the questionnaire, create a mock data set based on it, and then revise the items
Interview Schedule / Guide
Refers to a set of questions that the interviewer intends to ask when interviewing
Used to standardize the interview so that interviewers can ask the same questions in the same manner to all the respondents
There are two types of interview schedules/guides
 Personal
 Telephone interviews
They can be categorized as either
1. Structured and unstructured interviews
2. Focused and non-directive interviews
Structured Interviews
These make use of a set of pre-determined questions and a highly standardized technique of recording responses.
Unstructured Interviews
The interviewer is allowed freedom to ask supplementary questions or to omit some questions. This flexibility results in a lack of comparability of one interview with another, and analysis of the responses is more difficult and time consuming.
Focused Interviews
These are meant to focus attention on a given experience of the respondent and its effects. The main task of the interviewer is to confine the respondent to the topic. Such interviews are mainly unstructured.
Non-directive Interviews
The interviewer's role is simply to encourage the respondent to talk about the topic with a minimum of direct questions. The interviewer often acts as a catalyst to a comprehensive expression of the respondent's feelings and beliefs.
Advantages of interviews
1) They provide in-depth data which is not possible to get using questionnaires
2) Unlike with questionnaires, the interviewer can clarify the questions, resulting in more relevant responses
3) They are more flexible because the interviewer can adapt to the situation
4) Sensitive and personal information can be extracted from the respondent through honest and personal interaction
5) Unlike with questionnaires, the interviewer can get more information by asking probing questions
6) Interviews yield higher response rates since it is difficult for a respondent to completely refuse to answer a question
Disadvantages of interviews
1) They are more expensive since the interviewer has to travel to meet the respondents
2) Interviewing requires a high level of skill, including communication and interpersonal skills
3) Responses may be influenced by the respondent's reaction to the interviewer.
Observation Guide
A list of questions that guide on what to observe
Tools used when evaluating a project
i. Evaluation plan – outlines how the project should be evaluated and may include a tracking system for the implementation of evaluation follow-up
ii. Project evaluation information sheet – a report or questionnaire presenting the project evaluation with the evaluator's ratings
iii. Annual project report – assessment of a project during a given year by the target group, project management, government and donors
iv. Terminal report – prepared by the implementing organizations and includes lessons learnt
v. Field visit report – involves visiting all project sites and reporting immediately after the visit
vi. Minutes of annual review – an annual meeting which generates annual reports to assess annual outputs (results)
vii. Project status report – helps one understand the current status: performance, schedule, costs, hold-ups and deviations from the original schedule
viii. Project schedule chart – indicates the time schedule for implementation of the project. From this one can see that any delay leads to ultimate loss.
ix. Project financial status report – the evaluation team is able to tell whether or not the project is being implemented within the budget.
INTERNAL MONITORING AND EVALUATION
Internal evaluation (self-evaluation) is evaluation in which people within a program sponsor, conduct and control the evaluation.
Advantages of Internal Evaluation
• Knows the implementing organization, its programme and operations
• Understands and can interpret the behaviour and attitudes of members of the organization
• May possess important informal information
• Known to staff, so less threat of anxiety, conflict or disruption
• More easily accepts and promotes the use of evaluation results
• Less costly
• Doesn't require time-consuming recruitment negotiations
• Contributes to strengthening national evaluation capacity
Disadvantages of Internal Evaluation
• May lack objectivity and thus reduce the credibility of findings
• Tends to accept the position of the organization
• Usually too busy to participate fully
• Part of the authority structure and may be constrained by organizational role
• May not be sufficiently knowledgeable or experienced to design and implement an evaluation
• May not have special subject-matter expertise
External evaluation is evaluation in which someone from beyond the program acts as the evaluator and controls the evaluation.
Advantages of External Evaluation
 May be more objective and find it easier to formulate recommendations
 May be free from organizational bias
 May offer new perspectives and additional insights
 May have greater evaluation skills and expertise in conducting an evaluation
 May provide greater technical expertise
 Able to dedicate him/herself full time to the evaluation
 Can serve as an arbitrator or facilitator between parties
 Can bring the organization into contact with additional technical resources
Disadvantages of External Evaluation
• May tend to produce overly theoretical evaluation results
• May be perceived as an adversary, arousing unnecessary anxiety
• May be costly
• Requires more time for contract negotiations, orientation and monitoring
PARTICIPATORY MONITORING AND EVALUATION
An ongoing and regular process which actively involves stakeholders in all the stages of collecting, analyzing and using information on an intervention, with a view to assessing the processes and results and making recommendations (providing information for decision-making). It is a process that supports the implementation of a development project/programme by grassroots communities and stakeholders, and it strengthens appropriation, mutual responsibility, transparency and knowledge of the interrelations between the results, implementation factors and the environment.
Advantages of Participatory Monitoring and Evaluation
▪ Empowers beneficiaries to analyze and act on their own situation (as "active participants" rather than "passive recipients")
▪ Builds local capacity to manage, own, and sustain the project. People are likely to accept and internalize findings and recommendations that they provide.
▪ Builds collaboration and consensus at different levels—between beneficiaries, local staff and partners, and senior management
▪ Reinforces beneficiary accountability, preventing one perspective from dominating the M&E process
▪ Saves money and time in data collection compared with the cost of using project staff or hiring outside support
▪ Provides timely and relevant information directly from the field for management decision making to execute corrective actions
Potential disadvantages of Participatory Monitoring and Evaluation
▪ Requires more time and cost to train and manage local staff and community members
▪ Requires skilled facilitators to ensure that everyone understands the process and is equally involved
▪ Can jeopardize the quality of collected data due to local politics. Data analysis and decision making can be dominated by the more powerful voices in the community (related to gender, ethnic or religious factors).
▪ Demands the genuine commitment of local people and the support of donors, since the project may not use the traditional indicators or formats for reporting findings
Stages of the PME process
1. Decide to set up the system
2. Identify the Actors
3. Define expectations and objectives
4. Identify the criteria and indicators
5. Choose information collection methods and tools
6. Collect and analyze information
7. Implement actions for change
Who are the actors?
The actors are those who have an influence on, or are affected by, the project or programme concerned by the participatory monitoring and evaluation.
The identification and analysis of actors is an important phase in the establishment of the PME system.
The PME is set up to meet the concerns of these actors, particularly those directly affected by the intervention of the project or programme (direct and indirect beneficiaries). Some actors are more visible because of the positions they occupy and the roles they play in the community or project. But there are also less visible actors who generally belong to so-called vulnerable groups and are most affected by the activities undertaken. These groups, which constitute the primary beneficiaries of the project, should play a central role in the design and management of the PME system. It is therefore important to have an appropriate approach and tools to identify them and examine their interests and what they expect from the project, the type of influence they can exert on the project activities and the arrangements to be made.
How to conduct the actors’ analysis?
There are several tools for that purpose. They include:
The power-interest grid:
It is used to make a simple mapping of actors, taking into consideration the interest that each of them could have in the PME system to be set up, as well as the influence (positive or negative) that he/she could have on the system. To apply it, one should first identify all the project actors and second, prepare a typology placing each actor in one of the four spaces of the grid, corresponding to his/her interest and the importance of his/her influence. Of course, such a classification should be justified. At the end of the classification, one should examine the actions to be undertaken for each of the four categories of actors identified for the success of the PME system to be set up.
High Power, High Interest – important actors/project beneficiaries; maintain them
High Power, Low Interest – keep them informed
Low Power, Low Interest – people or groups living in the area but with no interest in the project
Low Power, High Interest – set up the level of information and training needed; need for negotiating capacity
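The quadrant mapping above can be captured in a small lookup table; a sketch assuming simple "high"/"low" ratings for each actor (the actor names and ratings below are hypothetical examples, not from the notes):

```python
def classify_actor(power: str, interest: str) -> str:
    """Map an actor's power/interest ratings to the corresponding grid quadrant."""
    grid = {
        ("high", "high"): "Important actor/project beneficiary; maintain them",
        ("high", "low"): "Keep them informed",
        ("low", "low"): "Lives in the area but has no interest in the project",
        ("low", "high"): "Set up information and training; build negotiating capacity",
    }
    return grid[(power.lower(), interest.lower())]

# Hypothetical actors and ratings, purely for illustration
actors = {
    "Donor agency": ("high", "low"),
    "Beneficiary farmers": ("low", "high"),
}
for name, (power, interest) in actors.items():
    print(f"{name}: {classify_actor(power, interest)}")
```

As the grid suggests, each classification should still be justified and reviewed with the actors themselves rather than applied mechanically.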
The Actors’ Analysis Grid:
It facilitates the identification of the different actors, their interest in the system, their influence on the system, and the actions to be taken to improve their participation. As can be observed, this grid makes it possible to produce the same type of information as that generated by the power-interest grid, the only difference being the way the actors are classified.
The two grids are also complementary, since the actors' analysis grid can be used as a backup for organizing the information generated by the reflection which accompanies the preparation of the power-interest grid. But, for reasons of simplicity, one can choose to use only one of them. One should not lose sight of the fact that this process is, first and foremost, meant for the local populations and that, as a result, one should avoid using a variety of tools which would complicate the process even more.

QUALITIES OF A GOOD EVALUATOR
 Detail-oriented –with good mastery of the knowledge on the project
 Strives for accuracy in his/her work
 Thorough and persistent, following through on issues that seem to develop slowly or have some dead-ends
 Inquisitive, curious about how and why things work/operate the way they do
 Organized mentally (but his/her desk could be cluttered!)
 Strong planning skills to guide analyses and to decide on tasks and priorities
 Quick learner of complex issues
 Quick on their feet in interviews and meetings
 Intuitive sense of which issues are important to explore
 Sees the “big picture”
 Works well with people in a team setting; relates well with agency/program staff and gains their confidence
 Works well under pressure and/or with tight time deadlines
 Analytical approach to issues and creative (in considering analytical approaches and solutions to problems)
 Flexibility to adapt to changing situations
RESPONSE TO MONITORING AND EVALUATION RESULTS
One of the most direct ways of using knowledge gained from monitoring and evaluation is to inform ongoing and future planning and programming. Lessons from evaluations of programmes, projects and initiatives, and the management responses to them, should be available when new outcomes are being formulated or when projects or programmes are identified, designed and appraised. When revising or developing new programmes, projects, policies, strategies and other initiatives, organisations should call a consultative meeting with key partners and stakeholders to review and share evaluative knowledge in a systematic and substantive manner. Users of monitoring and evaluation results include:
 Donors
 Government
 Target groups
 External evaluators
 Internal evaluators
 Project staff
 Beneficiary communities
COMMUNICATION OF MONITORING AND EVALUATION RESULT
The data collected is used to generate monitoring and evaluation reports, and the reports can be disseminated via newsletters, websites, seminars or press releases.
Why disseminate Monitoring and Evaluation reports?
M&E results help improve your program interventions. Using M&E results keeps you and your staff in a "learning mode" as you gain understanding about how and why your program is working. M&E results also help you make decisions about the best use of resources. For example, outcome and impact evaluations may provide further insight on certain risk and protective factors, thus shaping your future efforts. As staff use results to reflect on the program's implementation and make necessary improvements, they are more likely to feel supported by the M&E process.
M&E results strengthen your program institutionally. M&E results can help stakeholders and the community understand what the program is doing, how well it is meeting its objectives and whether there are ways that progress can be improved. Sharing results can help ensure social, financial and political support, and help your program establish or strengthen the network of individuals and organizations with similar goals of working with young people. By publicizing positive results, you give public recognition to stakeholders and volunteers who have worked to make the program a success, and you may attract new volunteers.
M&E results can be used to advocate for additional resources. Disseminating M&E results can raise awareness of your program among the general public and help build positive perceptions about young people and youth programs. M&E results often shape donors' decisions about how many resources to allocate to youth programs. Results can also be used to lobby for policy or legislative changes that relate to youth by pointing out unmet needs or barriers to program success.
M&E results should also lead to decisions about changes in program implementation. Periodic staff meetings devoted to discussing M&E results can engage staff in collectively making program adjustments. If you identify problems early in implementation, you can respond promptly by modifying your program strategy, reassigning staff or shifting financial resources to improve the chances of meeting your program goals and objectives. If you used a participatory evaluation approach, you should ensure that participants are involved in reviewing results and determining how to use them.
M&E results contribute to the global understanding of "what works."
By sharing M&E results, you allow others to learn from your experience. The dissemination of M&E results (both those that show how your program is working and those that find that some strategies are not having the intended impact) contributes to our global understanding of what works and what doesn't work.
Dissemination should ensure that the information reaches the target audience, which includes the funding organization, the project manager and staff, the board of directors, partner organizations, interested community groups, the general public, beneficiaries and other stakeholders (researchers, consultants and professional agencies).
M&E results can help you design new or follow-on activities.
Programs often begin on a small scale in order to test their feasibility. Evaluation results document the strengths, limitations, successes or failures of these initial efforts and allow program planners to make objective decisions about which elements of a program to continue, modify, expand or discontinue. Elements that are not very successful but show promise can be modified for improvement. Successful elements can be expanded by:
➤ increasing their scale or scope,
➤ changing the administrative structure or staff patterns,
➤ expanding the audience and/or targeting new audiences, or
➤ spinning off separate programs
Factors considered when preparing Monitoring and Evaluation Reports
 Medium of communication
 Language barriers
 Size of the report
 Sponsors or donor requirements
 Recipients, e.g. board members and donor agencies
 Budget
Importance of Monitoring and Evaluation Reports
• To improve project/program performance, e.g. lessons learnt are used as a basis for improving subsequent projects
• For policy development – policy planners and makers make use of M&E reports for decision making
• Well-done evaluations recommend policy changes
• Advocacy to increase funding for the project
• M&E is a demonstration of the accountability and transparency of an institution, which improves its image and therefore attracts funding from donors
• Used for the development of new projects and in the next planning phase
• Provides baselines for future projects
• M&E results enhance project sustainability
• M&E results and reports are used to make evidence-based organizational decisions
