January 15, 2016

Program Evaluation

Conducting studies to determine a program's impact, outcomes, or consistency of implementation (e.g., randomized controlled trials).


Program evaluations are periodic studies that nonprofits use to determine the effectiveness of a program. These studies are typically run by independent third parties or evaluation experts, and fall into two broad categories:

  • Implementation studies, designed to assess whether a program is being implemented as intended
  • Impact studies, designed to establish whether a program is generating its desired effects. Impact studies vary in rigor; the most rigorous use randomized controlled trials to establish causality.

Note: Program evaluation is distinct from performance measurement, which is an ongoing organizational process aimed at learning and improving. Performance measurement is also typically conducted by an organization's own staff.

How program evaluation is used

Typically, program evaluation is conducted after a program has undergone years of testing and refinement using internal approaches. Once a nonprofit has strong internal evidence that its program is consistently effective, it can hire an objective third party to conduct a program evaluation, which helps prove that the program is having the desired effects. This higher level of proof can help a nonprofit attract resources to scale its program to reach more beneficiaries. It can also help the nonprofit advocate for changes in public policy or public funding streams.

Additionally, a nonprofit can use program evaluations to test variations on its model—e.g., longer vs. shorter dosage, delivery by more vs. less skilled staff, in-person vs. virtual models.

Depending on their design, program evaluations can help nonprofits:

  • Prove that a program is producing a positive result, i.e., that it works
  • Quantify the benefits of a program to individuals or society and calculate the cost per outcome or social return on investment
  • Demonstrate which types of participants are most or least likely to benefit from a particular program
  • Isolate which elements are most or least important to a program's success
  • Establish whether programs are being consistently implemented, with fidelity to a predetermined model or standard
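The cost-per-outcome and social-return-on-investment (SROI) calculations mentioned above are simple ratios. As a back-of-the-envelope sketch in Python (all figures are hypothetical; a real evaluation would draw them from audited budgets and measured outcomes):

```python
# Hypothetical figures for illustration only.
program_cost = 500_000                  # total annual program spending ($)
successful_outcomes = 250               # e.g., participants reaching a target milestone
monetized_benefit_per_outcome = 4_000   # estimated dollar value of one outcome

# Cost per outcome: total spending divided by outcomes achieved.
cost_per_outcome = program_cost / successful_outcomes

# SROI: total monetized benefit generated per dollar invested.
total_benefit = successful_outcomes * monetized_benefit_per_outcome
sroi = total_benefit / program_cost

print(f"Cost per outcome: ${cost_per_outcome:,.0f}")  # $2,000
print(f"SROI: {sroi:.2f}")                            # 2.00
```

With these illustrative numbers, each successful outcome costs $2,000, and every dollar invested yields an estimated $2.00 in social benefit. The hard part in practice is not the arithmetic but defending the monetized benefit estimate, which is one reason rigorous evaluation design matters.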

The methodology of program evaluations

Before launching a program evaluation, nonprofits spend years investing in continuously improving their program model. They consistently track data on key program inputs (e.g., beneficiary demographics), outputs (e.g., program participation and completion), and, to the extent possible, outcomes and impacts (e.g., knowledge gained, changes in family income). They then use this data to refine and strengthen their program. Once an organization has made sufficient use of internal methods to show that its program works, it may consider commissioning a program evaluation. There are five key steps to a strong program evaluation study:

  1. Define the key questions: Define what questions the evaluation will need to answer. Will participants be compared to a control group of nonparticipants, or will two different program model variations be compared to each other? Interviewing and selecting a third-party evaluator (e.g., university researchers, individual experts, or specialist firms) can help raise and clarify key questions.
  2. Design the evaluation: Together with the third-party evaluator, design a rigorous study to answer the key questions as efficiently and affordably as possible. Different questions and program models lend themselves to different evaluation methods (e.g., randomly assigning participants to different groups or doing pre/post comparisons). Longer study duration and larger sample sizes allow higher levels of confidence in the results, but also increase expenses.
  3. Conduct the study: Conduct the evaluation study. The evaluator may collect and track all necessary data during the study period, or the nonprofit's internal systems and staff may be part of the process.
  4. Analyze the results: Analyze the data to answer the key questions and reveal additional insights. If the program evaluation showed high levels of effectiveness and impact, seek ways to build upon this success (e.g., strengthening or expanding the program, publicizing results to seek additional funding). If the results were unclear or negative, discuss potential causes and remedies (e.g., evaluation design changes, program model changes).
  5. Improve: Implement changes to strengthen both the program and the nonprofit.
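The core logic of the random-assignment design described in steps 2-4 can be sketched in a few lines of Python. This is a toy simulation, not an evaluation tool: the outcome data, sample size, and 5-point treatment effect are all invented for illustration, and a real study would add statistical significance testing.

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# Step 2 (design): randomly assign 200 enrollees to treatment or control.
# Random assignment is what lets the comparison support a causal claim.
enrollees = list(range(200))
random.shuffle(enrollees)
treatment_ids = set(enrollees[:100])

def simulated_outcome(person_id):
    # Stand-in for step 3 (data collection): everyone scores ~50 on average,
    # and we pretend the program adds ~5 points for treated participants.
    base = random.gauss(50, 10)
    return base + 5 if person_id in treatment_ids else base

outcomes = {pid: simulated_outcome(pid) for pid in range(200)}

# Step 4 (analysis): the estimated effect is the difference in group means.
treated = [outcomes[pid] for pid in treatment_ids]
control = [outcomes[pid] for pid in range(200) if pid not in treatment_ids]
effect = statistics.mean(treated) - statistics.mean(control)
print(f"Estimated program effect: {effect:.1f} points")
```

Because assignment is random, the two groups are comparable on average, so the difference in means estimates the program's causal effect. This also makes concrete the trade-off noted in step 2: with a smaller sample, random noise in the outcome scores can easily swamp a modest true effect.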

Additional resources

How to Build a Nonprofit Dashboard for Your Leadership Team

How Nonprofits Can Incorporate Equity into Their Measurement, Evaluation, and Learning

Abdul Latif Jameel Poverty Action Lab Executive Training: Evaluating Social Programs 2009 (MIT OpenCourseWare)
These online course materials include lecture notes, case studies, exercises, and videos explaining how to evaluate social programs.

Seven Deadly Sins of Impact Evaluation
This article cites seven potential pitfalls that can occur when organizations attempt to measure the impact of their program models.

State of Evaluation
Innonet's semi-annual report summarizes how nonprofit organizations are evaluating their programs, what they are doing with the results, and whether they believe they have the necessary resources and skills to implement evaluation processes.


This work is licensed under a Creative Commons Attribution 4.0 International License. Permissions beyond the scope of this license are available in our Terms and Conditions.