Outcome Evidencing: A Method for Enabling and Evaluating Program Intervention in Complex Systems

How do you monitor and evaluate programs where some of your outcomes only become clear after the project has already started? This was the question that Rodrigo Paz-Ybarnegaray and I set out to answer when we worked for the CGIAR Program on Aquatic Agricultural Systems. We needed an approach that could quickly make sense of how participants were starting to interpret and react to the program's research process and findings.

The approach we developed, called Outcome Evidencing, is an adaptation of Ricardo Wilson-Grau's Outcome Harvesting method.

Outcome Harvesting: graphic notes by @CarhartCreative on a presentation by Laura Budzyna at MIT TechCon 2016.

The American Journal of Evaluation has recently published our paper describing the approach, with the same title as this post. A post-print of the article is available here (not for widespread dissemination), or write to me for a copy.

Outcome Evidencing is built on systems thinking, ideas from program evaluation, and Michael Scriven's Modus Operandi Method, about which he wrote in 1976 that successful programs, like thieves, work in their own idiosyncratic ways. Borrowing from Snowden (2010), we assume that programs will spark or reinforce patterns of outcomes as people interpret and make sense of what the program is providing. We also assume that program outcomes will start to occur within discrete areas of change, and that if the outcome patterns are similar across those areas, then, by the modus operandi argument, this is evidence of program contribution. For example, in our AAS program, participatory action research led to increased mango production, the rehabilitation of abaca (a member of the banana family grown for fiber), and improvements in small-scale fisheries production in the Philippines. The process of carrying out participatory action research worked to build capacity to innovate in similar ways in all three areas of change (see this recent post for more on capacity to innovate).
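To make the modus operandi argument more concrete, here is a minimal Python sketch. It is not part of the published method: it simply treats each area of change as a set of outcome-pattern features and scores how much the patterns overlap. The area names echo the AAS example above, but the feature labels are hypothetical.

```python
# Illustrative sketch only: the modus operandi argument treats similar
# outcome patterns across discrete areas of change as evidence of
# program contribution. Feature labels here are hypothetical.
from itertools import combinations

areas = {
    "mango production": {"PAR cycles", "local experimentation", "peer learning"},
    "abaca rehabilitation": {"PAR cycles", "local experimentation", "new market links"},
    "small-scale fisheries": {"PAR cycles", "peer learning", "local experimentation"},
}

def jaccard(a: set, b: set) -> float:
    """Share of pattern features that two areas of change have in common."""
    return len(a & b) / len(a | b)

# High overlap across all pairs would support the claim that the program
# is working "in its own idiosyncratic way" in each area.
for (name_a, feats_a), (name_b, feats_b) in combinations(areas.items(), 2):
    print(f"{name_a} vs {name_b}: {jaccard(feats_a, feats_b):.2f}")
```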

Figure 1: The ten steps in Outcome Evidencing, from searching for and describing outcomes, through collectively designing the evaluation in a workshop, to carrying out the evaluation.

Figure 1 shows the steps in Outcome Evidencing. After agreeing on the evaluation questions, the next step is to identify the program's areas of change, something that people should be able to do two or three years after the program starts. The task then is to identify program-induced outcomes and the patterns they form. This is done by asking local change agents to identify the outcomes, cluster them, and develop diagrams showing the causal links between them. Within the diagrams there are usually critical causal claims that need verification before the program can plausibly claim contribution to the overall pattern. We call these patterns outcome trajectories, to capture the idea that they represent a stream of outcomes with the potential to lead somewhere. Once the claims are verified, the program can decide whether to stabilize and amplify an outcome trajectory, or dampen it should it threaten to have adverse consequences for marginalized groups or for women. Outcome trajectories can also be understood as theories of change that can be projected forward to help decide if and how to support a trajectory, and what level of outcome might be expected.
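As an illustration of how an outcome trajectory and its critical causal claims might be represented, here is a minimal Python sketch. The data structures and example outcomes are my own assumptions, not part of the method as published: nodes are harvested outcomes, edges are causal claims, and some edges are flagged as critical claims that need verifying before contribution can plausibly be claimed.

```python
# Minimal sketch of an outcome trajectory as a directed graph.
# All outcome names below are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class CausalClaim:
    source: str             # upstream outcome
    target: str             # downstream outcome
    critical: bool = False  # does program contribution hinge on this link?
    verified: bool = False  # has supporting evidence been found?

@dataclass
class OutcomeTrajectory:
    name: str
    outcomes: set = field(default_factory=set)
    claims: list = field(default_factory=list)

    def add_claim(self, source: str, target: str, critical: bool = False):
        self.outcomes.update({source, target})
        self.claims.append(CausalClaim(source, target, critical))

    def unverified_critical_claims(self):
        """Links that must be checked before claiming contribution."""
        return [c for c in self.claims if c.critical and not c.verified]

trajectory = OutcomeTrajectory("abaca rehabilitation")
trajectory.add_claim("PAR training delivered", "farmers run fiber trials", critical=True)
trajectory.add_claim("farmers run fiber trials", "abaca plots rehabilitated")
print(trajectory.unverified_critical_claims())
```

The list returned by `unverified_critical_claims` is, in effect, the evidence-gathering agenda for the verification step.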

The best way to identify the patterns is to ask the change agents to do it themselves in a workshop. They can also identify the key causal claims and where evidence exists to verify them.

Outcome Evidencing can be carried out as a one-off evaluation or repeated annually to revisit already-identified outcome trajectories and to identify new ones. Repeating it adds detail to the overall project theory of change, which will be useful during the final project evaluation, if one is carried out.
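Continuing the hypothetical sketch above, an annual revisit might look like this: verifying a previously flagged causal claim and extending the trajectory with a newly harvested outcome.

```python
# Hypothetical annual revisit, continuing the trajectory object from the
# earlier sketch: verify a critical claim and harvest a new outcome.
for claim in trajectory.claims:
    if claim.source == "PAR training delivered":
        claim.verified = True

trajectory.add_claim("abaca plots rehabilitated", "household fiber income rises")
print(trajectory.unverified_critical_claims())  # -> [] once the claim is verified
```

In the program itself this bookkeeping is done by change agents in workshops, of course; the sketch simply makes the structure of the argument explicit.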
