M&E Studio

AI for M&E, Built for Practitioners

© 2026 Logic Lab LLC. All rights reserved.

M&E Reference Library


Your comprehensive guide to monitoring, evaluation, accountability, and learning

144 entries

Methods
In-Depth Guide

Contribution Analysis

A structured approach to building a credible case for how and why a programme contributed to observed outcomes, without requiring experimental attribution.

31 indicators
Methods
In-Depth Guide

Developmental Evaluation

An evaluation approach designed for complex, adaptive programmes in which goals and processes are emergent, and the evaluator works alongside the programme team as an embedded learning partner.

16 indicators
Methods
In-Depth Guide

Impact Evaluation

A rigorous evaluation approach that measures the causal effect of a programme on outcomes by comparing what happened with what would have happened in its absence.

52 indicators
Frameworks
In-Depth Guide

Logframe / Logical Framework

A structured matrix that summarizes a project's design, linking activities to expected results through a clear hierarchy of objectives with indicators, verification sources, and assumptions.

23 indicators
Methods
In-Depth Guide

Most Significant Change

A participatory qualitative monitoring approach that systematically collects and selects stories of change to identify and share the most significant outcomes of a programme.

23 indicators
Methods
In-Depth Guide

Outcome Harvesting

A retrospective evaluation approach that identifies, verifies, and analyses outcomes that have occurred, then determines whether and how the programme contributed to them.

23 indicators
Frameworks
In-Depth Guide

Outcome Mapping

A participatory planning and monitoring approach that tracks behaviour changes in the people, groups, and organisations a programme works with directly, rather than long-term development outcomes.

22 indicators
Methods
In-Depth Guide

Participatory Evaluation

An evaluation approach that actively involves stakeholders and beneficiaries throughout all stages, from design through use of findings, ensuring local ownership and relevance.

23 indicators
Methods
In-Depth Guide

Process Tracing

A within-case method for causal inference that tests whether the causal mechanisms predicted by a theory of change actually operated in a specific case, using systematic evidence to evaluate causal claims.

12 indicators
Methods
In-Depth Guide

Quasi-Experimental Design

A family of evaluation designs that estimate causal programme effects without random assignment, using statistical methods to construct credible comparison groups.

38 indicators
Methods
In-Depth Guide

Realist Evaluation

An evaluation approach that asks what works, for whom, in what circumstances, and why, by identifying the mechanisms through which programmes produce outcomes in specific contexts.

18 indicators
Frameworks
In-Depth Guide

Results Framework

A structured collection of indicators organized by results level that tracks programme performance across a portfolio, focusing on what changed rather than what was delivered.

38 indicators
Frameworks
In-Depth Guide

Results-Based Management

A management approach that focuses organisational decisions, resources, and accountability on achieving defined results, using evidence from monitoring and evaluation.

44 indicators
Frameworks
In-Depth Guide

Theory of Change

A structured explanation of how and why a set of activities is expected to lead to desired outcomes, mapping the causal logic from inputs to impact.

47 indicators
Methods
In-Depth Guide

Utilization-Focused Evaluation

An evaluation approach in which every design decision is driven by the needs of the primary intended users: the specific people who will actually use the findings to make concrete decisions.

24 indicators
Cross-Cutting
Overview

Accountability Mechanisms

The systems, processes, and structures that enable organisations to answer to stakeholders, including communities, donors, and partners, for their performance, decisions, and use of resources.

41 indicators
Planning
Overview

Adaptive Management

A management approach that uses continuous learning from monitoring and evaluation data to adjust programme strategies and activities in response to changing evidence or context.

29 indicators
Data Collection
Overview

Baseline Design

A structured approach to collecting initial condition data that directly informs project decisions, minimizes burden, and enables valid comparison with endline measurements.

18 indicators
Learning
Overview

Capacity Building for M&E

The process of strengthening the knowledge, skills, systems, and resources that organisations and individuals need to design, implement, and use monitoring and evaluation effectively.

17 indicators
Evaluation
Overview

Cost-Effectiveness Analysis

A systematic approach to comparing the costs and outcomes of alternative interventions to identify which delivers the best value for money in achieving specific objectives.

23 indicators
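The core calculation is a ratio of cost to outcome units, computed per alternative and then compared. A minimal Python sketch, using entirely hypothetical figures and intervention names:

```python
# Cost-effectiveness ratio: cost per unit of outcome (lower is better).
# All figures and intervention names below are hypothetical.

def cost_effectiveness(cost: float, outcome_units: float) -> float:
    """Cost per unit of outcome achieved."""
    return cost / outcome_units

interventions = {
    "community_health_workers": {"cost": 120_000, "outcome": 800},
    "mobile_clinics": {"cost": 200_000, "outcome": 1_100},
}

for name, d in interventions.items():
    ratio = cost_effectiveness(d["cost"], d["outcome"])
    print(f"{name}: {ratio:.2f} per unit of outcome")
```

On these invented numbers, community health workers deliver the outcome at 150.00 per unit versus about 181.82 for mobile clinics; the comparison is only as good as the outcome measure the two options share.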
Data Collection
Overview

Data Collection Burden

The total time, effort, and resources required from respondents and implementers to complete data collection activities, balanced against data quality needs and programme capacity.

12 indicators
Data Collection
Overview

Data Management

The systematic processes for collecting, storing, securing, and maintaining data quality throughout the data lifecycle to ensure information is accurate, accessible, and usable for decision-making.

12 indicators
Data Collection
Overview

Data Quality Assurance

A systematic process for verifying that collected data meets five quality dimensions (Validity, Integrity, Precision, Reliability, and Timeliness), ensuring data is fit for decision-making.

18 indicators
Learning
Overview

Data Visualization for M&E

The strategic use of charts, dashboards, and infographics to communicate monitoring data to diverse stakeholders, transforming raw numbers into actionable insights for decision-making.

23 indicators
Indicators
Overview

Disaggregation

The breakdown of aggregate data by sub-group characteristics, such as sex, age, location, or vulnerability status, to reveal inequities and differences in programme reach and outcomes.

47 indicators
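In code, disaggregation is a group-by on a sub-group characteristic followed by a per-group statistic. A minimal Python sketch with invented records (the field names are assumptions, not a reporting standard):

```python
from collections import defaultdict

# Hypothetical participant records; real data would come from your MIS.
records = [
    {"sex": "F", "location": "rural", "completed": True},
    {"sex": "M", "location": "rural", "completed": True},
    {"sex": "F", "location": "urban", "completed": False},
    {"sex": "M", "location": "urban", "completed": True},
    {"sex": "F", "location": "rural", "completed": False},
]

def disaggregate(records, key):
    """Completion rate broken down by one sub-group characteristic."""
    groups = defaultdict(list)
    for r in records:
        groups[r[key]].append(r["completed"])
    return {g: sum(v) / len(v) for g, v in groups.items()}

by_sex = disaggregate(records, "sex")
by_location = disaggregate(records, "location")
```

In this toy data the aggregate completion rate (3/5) hides a large gap between the female and male sub-groups, which is exactly the inequity disaggregation is meant to surface.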
Cross-Cutting
Overview

Do No Harm

The foundational M&E principle that programme and evaluation activities must not expose participants, communities, or programme staff to physical, psychological, social, or economic harm, and must actively identify and mitigate harm risks before they occur.

14 indicators
Cross-Cutting
Overview

Ethics in M&E

The principles and standards that guide the ethical conduct of monitoring and evaluation, protecting the rights and dignity of participants, ensuring honest reporting, and managing power responsibly.

28 indicators
Evaluation
Overview

Evaluation Criteria (DAC)

The OECD-DAC framework provides five standard criteria (relevance, efficiency, effectiveness, impact, and sustainability) for systematically assessing the merit and value of development interventions.

12 indicators
Evaluation
Overview

Evaluation Matrix

A structured mapping document that links each evaluation question to its data sources, collection methods, indicators, and analysis approach; it serves as the operational blueprint for executing an evaluation.

8 indicators
Evaluation
Overview

Evaluation Terms of Reference

A formal document that defines the scope, objectives, methodology, and requirements for an evaluation, serving as the primary contract between the commissioning organization and the evaluation team.

12 indicators
Data Collection
Overview

Focus Group Discussions

A qualitative data collection method that brings together 6-10 participants to discuss a specific topic, generating rich insights through group interaction and shared experiences.

Cross-Cutting
Overview

Gender-Responsive M&E

An approach to monitoring and evaluation that systematically examines how programmes affect women, men, girls, and boys differently, and ensures that M&E processes themselves do not reinforce gender inequalities.

19 indicators
Indicators
Overview

Indicator Selection & Development

The systematic process of choosing and refining performance indicators that are specific, measurable, achievable, relevant, and time-bound to track programme progress effectively.

34 indicators
Data Collection
Overview

Key Informant Interviews

In-depth, semi-structured interviews with individuals selected for their specific knowledge, experience, or perspectives relevant to the evaluation questions.

19 indicators
Learning
Overview

Knowledge Management for M&E

The systematic process of capturing, organising, and applying lessons, evidence, and insights from M&E across programmes and over time to improve organisational decision-making.

21 indicators
Learning
Overview

Learning Agendas

A structured set of priority learning questions that guide systematic inquiry throughout programme implementation, turning monitoring data into actionable knowledge for decision-making.

12 indicators
Planning
Overview

M&E Plans

A detailed operational document that translates your logframe and theory of change into actionable M&E requirements, specifying what data to collect, when, from whom, and how it will be used.

23 indicators
Planning
Overview

M&E System Design

A structured approach to building the organizational infrastructure, processes, and capacities needed to collect, analyze, and use M&E data for decision-making throughout a programme's life.

23 indicators
Evaluation
Overview

Mixed Methods Evaluation

An evaluation approach that systematically combines quantitative and qualitative data to provide a more complete understanding of programme effects, mechanisms, and context.

23 indicators
Planning
Overview

Needs Assessment

A systematic process for identifying and analyzing gaps between current conditions and desired outcomes, establishing the evidence base for programme design and indicator selection.

23 indicators
Data Collection
Overview

Observation Methods

A systematic approach to collecting data by directly watching and recording behaviours, interactions, and processes as they occur in natural settings.

12 indicators
Indicators
Overview

Proxy Indicators

Indirect measures used when direct measurement of the intended outcome is impossible, impractical, or too costly, requiring careful validation to ensure they accurately represent the target construct.

12 indicators
Learning
Overview

Reporting Best Practices

The principles and practices for producing evaluation and monitoring reports that are clear, credible, actionable, and tailored to their intended audiences.

23 indicators
Evaluation
Overview

Rubric-Based Assessment

A structured evaluation approach using predefined criteria and performance levels to systematically assess programmes, projects, or interventions against established standards.

12 indicators
Data Collection
Overview

Sampling Methods

Systematic approaches for selecting a subset of a population to represent the whole, balancing statistical validity with practical constraints.

12 indicators
Indicators
Overview

SMART Indicators

A quality framework for designing indicators that are Specific, Measurable, Achievable, Relevant, and Time-bound, ensuring they provide reliable, actionable data for decision-making.

23 indicators
Planning
Overview

Stakeholder Analysis

A structured process for identifying all parties with an interest in a programme, mapping their roles, influence, and information needs, and informing how M&E should engage them.

17 indicators
Data Collection
Overview

Survey Design

The process of designing structured questionnaires and survey protocols to collect reliable, valid, and actionable data from a defined population.

34 indicators
Indicators
Overview

Target Setting

The process of establishing specific, time-bound performance benchmarks against which programme progress and achievement will be measured.

32 indicators
Cross-Cutting
Overview

Value for Money

The optimal balance of cost, quality, and outcomes: achieving the best results for the resources invested, assessed through the 4Es (economy, efficiency, effectiveness, and equity).

23 indicators
Cross-Cutting
Quick Reference

Accountability

The responsibility to be transparent, report, and respond to stakeholders about programme performance and decisions.

Evaluation
Quick Reference

Accountability Evaluation

An evaluation focused on assessing whether a programme is meeting its obligations to stakeholders, including donors, beneficiaries, and regulatory bodies.

Frameworks
Quick Reference

Activity

What a programme DOES with its inputs to produce outputs; the direct work or services delivered.

Learning
Quick Reference

After-Action Review

A structured, time-bound reflection process conducted immediately after a specific activity or milestone to capture what was planned, what actually happened, why the two differed, and what should change.

5 indicators
Planning
Quick Reference

Assumptions

Conditions outside programme control that must hold true for the programme to succeed as planned.

Methods
Quick Reference

Attribution vs Contribution

The distinction between proving a programme directly caused outcomes (attribution) versus building a credible case that it contributed to outcomes alongside other factors (contribution).

Evaluation
Quick Reference

Audit Evaluation

An evaluation focused on assessing financial probity, internal controls, and compliance with financial regulations and procurement standards.

Evaluation
Quick Reference

Audit vs Evaluation

Audits examine financial and regulatory compliance; evaluations assess programme effectiveness and impact.

Indicators
Quick Reference

Baseline

Initial conditions data collected at the start of a project to establish a reference point for measuring change and setting indicator targets.

18 indicators
Indicators
Quick Reference

Benchmark

A reference point or standard value used to measure progress, typically derived from historical data, industry standards, or comparable programmes.

12 indicators
Cross-Cutting
Quick Reference

Beneficiary

A person, household, or organisation that receives direct benefits from a programme's activities or outputs.

Methods
Quick Reference

Beneficiary Feedback

Systematic collection and use of input from programme beneficiaries about their experiences, needs, and priorities to improve accountability and programme relevance.

12 indicators
Methods
Quick Reference

Bias

Systematic error in data collection, analysis, or interpretation that distorts results and threatens the validity of M&E findings.

12 indicators
Learning
Quick Reference

Capacity Strengthening

The process of developing skills, systems, and relationships that enable individuals and organizations to achieve their development goals sustainably.

12 indicators
Methods
Quick Reference

Causal Inference

The process of determining whether an intervention caused observed outcomes by establishing a credible counterfactual and ruling out alternative explanations.

18 indicators
Data Collection
Quick Reference

Census vs Sample

The choice between measuring every unit in a population (census) versus selecting a subset (sample) determines cost, precision, and what inferences you can make about your programme.

Learning
Quick Reference

CLA (Collaborating, Learning, and Adapting)

USAID framework for integrating collaboration, learning, and adaptation into programme design and management.

Data Collection
Quick Reference

Cluster Sampling

A sampling method that divides the population into clusters and randomly selects entire clusters rather than individuals.

Cross-Cutting
Quick Reference

Communication Strategies

Intentional approaches to sharing M&E findings and programme information with stakeholders to influence decisions, build accountability, and promote learning.

Evaluation
Quick Reference

Compliance Evaluation

An evaluation focused on assessing whether a programme adheres to legal, regulatory, donor, and organizational requirements and standards.

12 indicators
Evaluation
Quick Reference

Compliance Monitoring

Tracking whether a programme is implemented according to agreed standards, policies, and legal requirements.

Indicators
Quick Reference

Composite Indicator

Combines multiple individual indicators into a single index or score, enabling measurement of multidimensional concepts that cannot be captured by a single metric.
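A common construction is min-max normalisation of each indicator followed by a weighted sum. A Python sketch with invented district data; the weights are assumed policy choices, not a standard:

```python
def min_max_normalise(values):
    """Rescale one indicator to [0, 1] so different units are comparable."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical district scores on three indicators (different units).
districts = ["A", "B", "C"]
water_access_pct = [40, 70, 95]
school_enrol_pct = [55, 80, 60]
clinics_per_10k = [0.5, 1.2, 0.9]

weights = [0.4, 0.4, 0.2]  # assumed policy weights; must sum to 1
normalised = [min_max_normalise(x)
              for x in (water_access_pct, school_enrol_pct, clinics_per_10k)]

# Composite index: weighted sum of the normalised indicators per district.
index = {
    d: sum(w * norm[i] for w, norm in zip(weights, normalised))
    for i, d in enumerate(districts)
}
```

The weighting step is where most of the methodological debate lives: changing the assumed weights can reorder districts, so weights should be justified and tested for sensitivity.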

Methods
Quick Reference

Confounding Variables

Extraneous variables that correlate with both the intervention and the outcome, creating spurious associations that threaten causal inference in evaluation.

8 indicators
Methods
Quick Reference

Content Analysis

A systematic approach to analysing communication content, identifying patterns, themes, and biases in text, audio, or video data through structured coding.

Learning
Quick Reference

Continuous Improvement

A systematic, ongoing approach to enhancing programme performance through iterative learning, feedback, and adaptation.

12 indicators
Methods
Quick Reference

Counterfactual

The comparison between what happened and what would have happened in the absence of an intervention; the fundamental basis for establishing causal attribution in impact evaluation.

12 indicators
Indicators
Quick Reference

Custom vs Standard Indicators

The choice between donor-provided standard indicators and programme-specific custom indicators, balancing compliance requirements with contextual relevance.

12 indicators
Learning
Quick Reference

Dashboard

A visual display of key monitoring indicators enabling rapid assessment of programme performance at a glance.

Learning
Quick Reference

Dissemination

Active, intentional process of sharing M&E findings with relevant audiences to promote understanding, learning, and evidence use.

Learning
Quick Reference

Donor Reporting

The process of systematically communicating programme progress, results, and financial information to funding organizations according to their specific requirements and timelines.

12 indicators
Planning
Quick Reference

Donor Requirements

M&E obligations specified in grant agreements and donor policies that shape system design and reporting.

Methods
Quick Reference

Empowerment Evaluation

A self-evaluation approach where programme participants systematically assess their own work to improve programmes and secure future ownership.

12 indicators
Indicators
Quick Reference

Endline

A final data collection point at programme completion that measures achieved outcomes against baseline and target values.

12 indicators
Evaluation
Quick Reference

Evaluability Assessment

A preliminary review of whether a programme is sufficiently mature and documented to be meaningfully evaluated.

Evaluation
Quick Reference

Evaluation Questions

The overarching questions an evaluation will investigate, distinct from survey or interview questions.

Methods
Quick Reference

Evidence Synthesis

The systematic process of identifying, selecting, and integrating findings from multiple studies to inform programme design, evaluation, and decision-making.

Learning
Quick Reference

Evidence-Based Decision Making

Using M&E evidence to inform programme, management, and policy decisions rather than intuition or habit.

Evaluation
Quick Reference

Ex-Ante vs Ex-Post Evaluation

The temporal dimension of evaluation: ex-ante evaluation occurs before implementation to inform design, while ex-post evaluation occurs after completion to assess outcomes and lessons.

8 indicators
Learning
Quick Reference

Feedback Loop

A structured process for collecting, analysing, and acting on information to improve programme performance and outcomes.

Evaluation
Quick Reference

Formative vs Summative Evaluation

Formative evaluation improves programmes during implementation; summative evaluation judges their overall merit after completion.

12 indicators
Methods
Quick Reference

Impact

Long-term, higher-level effects attributable to, or contributed to by, a programme; broader change beyond individual outcomes.

Cross-Cutting
Quick Reference

Impact Stories

Narrative accounts that illustrate how a programme has influenced the lives of beneficiaries, combining quantitative outcomes with qualitative human experience.

Evaluation
Quick Reference

Inception Report

The first formal deliverable from an evaluation team, detailing refined methodology before primary data collection.

Indicators
Quick Reference

Indicator

A specific, observable, measurable variable that tracks progress toward an outcome or output.

Learning
Quick Reference

Indicator Reporting

The systematic collection, compilation, and presentation of indicator data to track programme performance and communicate results to stakeholders and donors.

Frameworks
Quick Reference

Input

Resources invested in a programme (money, staff, materials, time) that enable activities to happen.

Frameworks
Quick Reference

Intervention Logic

The causal chain connecting programme activities to intended outcomes, showing how and why a set of interventions is expected to lead to desired change.

Learning
Quick Reference

Knowledge Sharing

The deliberate practice of capturing, organizing, and distributing insights, lessons, and best practices across teams and organizations to improve programme performance and avoid repeating mistakes.

12 indicators
Learning
Quick Reference

Learning

The systematic process of gathering evidence, reflecting on it, and using it to improve programme strategy and implementation.

Learning
Quick Reference

Learning Cycles

Structured, recurring periods of reflection and adaptation where programme teams review data, draw lessons, and adjust implementation accordingly.

12 indicators
Learning
Quick Reference

Lessons Learned

Documented insights from programmes identifying what worked, what did not work, and why, with actionable specificity.

Methods
Quick Reference

Literature Review

A systematic, critical synthesis of existing research on a specific topic, identifying what is known, gaps in knowledge, and evidence for programme design.

Methods
Quick Reference

LQAS

Lot Quality Assurance Sampling is a rapid decision-making method that classifies programmes or areas as pass/fail against a threshold, commonly used for health programme monitoring.

12 indicators
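The classification logic itself is tiny; the statistical work lives in choosing the sample size n and decision rule d from published LQAS tables. A sketch where n = 19 and d = 13 are illustrative values, not a universal standard:

```python
def lqas_classify(successes: int, decision_rule: int) -> str:
    """Pass if successes in the sample exceed the decision rule;
    the rule (d) is taken from published LQAS tables for a given
    sample size and coverage target."""
    return "pass" if successes > decision_rule else "fail"

# Hypothetical: 19 caregivers sampled per area, decision rule d = 13.
successes_by_area = {"north": 16, "south": 11}
results = {area: lqas_classify(s, 13)
           for area, s in successes_by_area.items()}
```

Here the invented "north" area passes (16 > 13) and "south" fails (11 ≤ 13), flagging where supervision or corrective action should focus.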
Planning
Quick Reference

M&E Budget

The portion of a programme budget dedicated to monitoring, evaluation, and learning activities.

Planning
Quick Reference

M&E Framework

The structured document specifying what will be measured, how, by whom, and how often.

Evaluation
Quick Reference

Meta-Evaluation

The systematic evaluation of an evaluation's quality, assessing whether it met professional standards and produced credible, useful findings.

8 indicators
Data Collection
Quick Reference

Midline

A data collection point conducted midway through a programme to assess trajectory and enable adaptive decisions.

Indicators
Quick Reference

Milestone

A significant intermediate checkpoint or event that signals progress toward a target, used to track whether a programme is on schedule to achieve its intended outcomes.

12 indicators
Cross-Cutting
Quick Reference

Monitoring vs Evaluation

Monitoring is the continuous, systematic tracking of programme activities and outputs; evaluation is the periodic, in-depth assessment of outcomes, impact, and causal attribution.

Learning
Quick Reference

Narrative Reporting

Qualitative, story-based reporting that contextualizes quantitative indicators with explanations of what happened, why it happened, and what it means for programme learning and decision-making.

12 indicators
Learning
Quick Reference

Organisational Learning

The systematic process by which an organisation captures, analyses, and applies lessons from experience to improve programme performance and decision-making.

12 indicators
Frameworks
Quick Reference

Outcome

Changes in behaviour, knowledge, skills, or conditions resulting from programme outputs, experienced by beneficiaries.

Methods
Quick Reference

Outcome-Level Analysis

The systematic examination of outcomes to determine whether a programme achieved its intended results, distinguishing between expected and unexpected outcomes, and assessing the significance and sustainability of changes observed.

12 indicators
Frameworks
Quick Reference

Output

Direct, tangible products of programme activities; what the programme produces, not what beneficiaries gain.

Methods
Quick Reference

Participatory M&E

An approach to monitoring and evaluation that actively involves stakeholders, especially beneficiaries, at every stage, from design through to using findings for decision-making.

12 indicators
Learning
Quick Reference

Performance Dashboards

Visual management interfaces that display key performance indicators in real-time, enabling programme teams and stakeholders to monitor progress, identify issues, and make data-driven decisions.

12 indicators
Evaluation
Quick Reference

Performance Evaluation

An assessment of how well a programme or organisation is achieving its intended results and operating efficiently against established standards and targets.

12 indicators
Planning
Quick Reference

Performance Management

The systematic use of monitoring data, evaluation findings, and feedback to guide programme decisions, improve results, and ensure accountability to stakeholders.

12 indicators
Methods
Quick Reference

Primary vs Secondary Data

Primary data is collected firsthand for a specific purpose; secondary data is existing data repurposed for new analysis. Each has distinct trade-offs in cost, timeliness, and relevance.

Evaluation
Quick Reference

Process Evaluation

Assessment of how a programme is implemented, whether activities are delivered as planned and to intended quality standards.

Frameworks
Quick Reference

Programme Theory

The explicit articulation of how a programme is expected to produce change.

Learning
Quick Reference

Progress Report

A periodic document submitted by programmes to donors detailing implementation progress, indicator performance, and key issues.

Data Collection
Quick Reference

Purposive Sampling

A non-probability sampling approach where researchers deliberately select participants based on specific characteristics or knowledge relevant to the research objectives.

Data Collection
Quick Reference

Qualitative Data

Non-numerical information captured through words, images, or observations that reveals the how and why behind programme outcomes, providing depth and context to quantitative findings.

12 indicators
Data Collection
Quick Reference

Quantitative Data

Numerical data collected through structured measurement, enabling statistical analysis, generalization, and objective comparison across programmes and contexts.

12 indicators
Methods
Quick Reference

Random Sampling

A probability sampling method where every member of the population has an equal, known chance of selection, enabling statistical inference to the broader population.

12 indicators
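A simple random draw from a sampling frame is a one-liner in most languages. A Python sketch with a hypothetical frame of 500 registered households:

```python
import random

# Hypothetical sampling frame: 500 registered households.
frame = [f"HH-{i:04d}" for i in range(1, 501)]

rng = random.Random(42)  # fixed seed makes the draw reproducible and auditable
sample = rng.sample(frame, k=50)  # each household has an equal 50/500 chance
```

Fixing the seed matters in practice: it lets a reviewer regenerate exactly the same sample from the same frame, which supports the audit trail for the survey.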
Methods
Quick Reference

Randomised Controlled Trial

An experimental evaluation design that randomly assigns participants to treatment and control groups to establish causal attribution between an intervention and observed outcomes.

12 indicators
Methods
Quick Reference

Rapid Assessment

A condensed data collection approach designed to generate actionable insights quickly, typically using streamlined qualitative and quantitative methods in time-constrained contexts.

8 indicators
Evaluation
Quick Reference

Real-Time Evaluation

An evaluation approach conducted during programme implementation to provide immediate feedback for adaptive management and mid-course corrections.

8 indicators
Planning
Quick Reference

Real-Time Monitoring

The continuous collection and analysis of data during programme implementation to enable rapid detection of issues and timely corrective action.

Learning
Quick Reference

Reflection Sessions

Structured gatherings where programme teams and stakeholders pause to examine what happened, why it happened, and what should change as a result.

8 indicators
Methods
Quick Reference

Reliability

The consistency and repeatability of a measurement, whether the same tool produces stable results across repeated applications, different raters, or different time periods.

12 indicators
Frameworks
Quick Reference

Results Chain

The sequential hierarchy of change from activities through outputs, outcomes, and impact that shows how a programme is expected to create change.

12 indicators
Planning
Quick Reference

Risks and Risk Mitigation

External factors that could prevent programme success and their planned mitigation strategies.

Planning
Quick Reference

Scope of Work

A document specifying what an evaluator or consultant will deliver, within what timeframe, budget, and constraints.

Methods
Quick Reference

SROI (Social Return on Investment)

Evaluation framework that assigns monetary values to social outcomes to calculate return on investment.
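At its core the calculation discounts the monetised outcome values and divides by the investment. A sketch where the discount rate and value stream are assumptions for illustration, not prescribed SROI parameters:

```python
def present_value(annual_values, discount_rate):
    """Discount a stream of monetised annual outcome values to today."""
    return sum(v / (1 + discount_rate) ** t
               for t, v in enumerate(annual_values, start=1))

def sroi_ratio(annual_values, investment, discount_rate=0.035):
    """Monetised social value created per unit invested."""
    return present_value(annual_values, discount_rate) / investment
```

An SROI of 3.0 reads as "three units of social value per unit invested"; the hard part is not this arithmetic but defensibly monetising the outcomes in the first place.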

Methods
Quick Reference

Statistical Significance

A statistical measure indicating whether observed results are likely due to a real effect rather than random chance, typically assessed using p-values and hypothesis testing.
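One assumption-light way to compute a p-value is a permutation test: shuffle the group labels many times and count how often chance alone produces a difference as large as the one observed. A pure-Python sketch with invented endline scores:

```python
import random
from statistics import mean

def permutation_p_value(treatment, control, n_permutations=10_000, seed=0):
    """Two-sided permutation test on the difference in group means."""
    rng = random.Random(seed)
    observed = abs(mean(treatment) - mean(control))
    pooled = list(treatment) + list(control)
    n_t = len(treatment)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # random relabelling under the null hypothesis
        if abs(mean(pooled[:n_t]) - mean(pooled[n_t:])) >= observed:
            extreme += 1
    return extreme / n_permutations

# Hypothetical endline scores: treatment vs comparison group.
p = permutation_p_value([8, 9, 9, 10, 8], [4, 5, 5, 6, 4])
```

With these clearly separated toy groups the p-value comes out well below 0.05; a small p says the difference is unlikely to be chance, not that it is large or programmatically meaningful.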

Cross-Cutting
Quick Reference

Storytelling for Impact

The strategic use of narrative to make M&E findings memorable, actionable, and influential for decision-makers and stakeholders.

Evaluation
Quick Reference

Sustainability Evaluation

Assessment of a programme's continued benefits and functionality after external funding has ended, examining whether outcomes persist and systems remain operational.

12 indicators
Evaluation
Quick Reference

Systematic Review

A rigorous, structured approach to identifying, appraising, and synthesizing all available evidence on a specific evaluation question using explicit, reproducible methods.

Indicators
Quick Reference

Target

The specific value an indicator is expected to reach by a defined date, quantifying what success looks like.

Methods
Quick Reference

Thematic Analysis

A systematic method for identifying, analyzing, and reporting patterns (themes) in qualitative data through coding and categorization.

Methods
Quick Reference

Triangulation

Using multiple data sources, methods, or perspectives to cross-verify findings and strengthen the validity of evaluation conclusions.

Methods
Quick Reference

Validity (Internal & External)

The degree to which an evaluation accurately demonstrates causal relationships (internal validity) and generalizes findings beyond the study context (external validity).

12 indicators
