Evaluating complex interventions: principles and recommendations derived from experience

This article summarizes a paper written by John Mayne, Cynthia McDougall, Rodrigo Paz-Ybarnegaray and me, recently published in the journal Evaluation.

There is a growing recognition that programs that seek to change people’s lives are intervening in complex systems, which places a particular set of requirements on program monitoring and evaluation (M&E). Developing complexity-aware M&E systems within some organizations is difficult because they challenge traditional orthodoxy, and little has been written about the experience of doing so in a real program. The paper fills this gap by describing the development of a complexity-aware evaluation approach in the CGIAR Research Program on Aquatic Agricultural Systems (AAS). We outline the design and methods used, including trend lines, panel data, after action reviews, building and testing theories of change, outcome evidencing and realist synthesis. We identify and describe a set of design principles for developing complexity-aware M&E. These are:

  1. Stage the roll-out of the program’s M&E system. The M&E system needs to evolve with the program. The first priority is to put the monitoring and information management systems in place to meet basic accountability requirements. We found that regular after action reviews worked well to establish the practice of reflexive learning. After that, building capacity and organizational structure can be staged according to when the program needs them. For example, the ability to monitor outcomes is needed when change starts to happen on the ground. Monitoring change before outcome pathways become clear can take resources away from fostering change in the first place.
  2. See the M&E system as helping to achieve the program’s overall goals. Programs operating in complex settings require an M&E system to help them navigate that complexity. M&E efforts should be understood as a mainstay of the program’s ability to learn, identify threats and opportunities and adapt accordingly. Complexity-aware M&E should have its own theory of change to show how it contributes to program goals (see below).
  3. Support learning for adaptive management that feeds back into the annual planning cycle. The M&E system should produce insights and learning fast enough to help the program adapt to emerging opportunities, threats and unforeseen circumstances as they happen.
  4. Help the program meet its accountability requirements in a changing context. Unpredictability and emergence mean that the pathways towards program goals, and the goals themselves, may change over time. This is a challenge to meeting pre-agreed outcome targets. A solution is to shift to being accountable for learning: knowing if and how interventions are generating expected and unexpected outcomes so the program can respond appropriately to increase the likelihood of beneficial impact. A robust learning system should report on the nature of and progress along outcome pathways, deviations from prior targets, the likelihood of future impact and improvements to targets based on what is being learned. The shift in emphasis is from being accountable for pre-determined outcomes to being accountable for generating outcomes, whatever they might be, knowing about them and doing sensible things based on this knowledge.
  5. Be implementable by staff on the available budget. The M&E system must be practicable in terms of budget and the program’s ability to implement it. M&E becomes a broader responsibility. Staff need to become more comfortable with ambiguity, better able to engage with others, and more flexible and able to respond to what is being learned while documenting what is important to support future learning. This requires a skill set that takes time and effort to build, and links back to the need to stage the M&E system. Any evaluation approach of a program working across different sites will require adaptation to local context, capacities and budget, which will not necessarily be straightforward.
  6. Contribute to the development of useful impact evaluation methods. Traditional counterfactual approaches to impact evaluation are often not appropriate or feasible in complex settings, and new approaches need to be adapted or developed. New approaches should be communicated to the evaluation community so that peer review can build the credibility and visibility of practical and useful complexity-aware approaches.

Based on these principles, and the experience of developing a complexity-aware M&E system for AAS, the authors identify four recommendations for programs wishing to do the same.

Figure: graphic representation of the recommendations for complexity-aware M&E systems.
  1. Develop a theory of change (ToC) for program M&E. It will help make the case for investment in program M&E by showing how M&E is expected to contribute to program learning and goals. It will also help those involved in M&E to reflect on the relevance of their own practice (see figure above).
  2. See evaluation as contributing to a reusable body of program theory. Programs should test their ToC so as to build a stock of recyclable mid-range theories that apply to different classes of intervention. The potential is that evaluations will reuse, add to and refine the matching theory, rather than building a ToC from scratch, as normally happens now.
  3. Make the case for theories of change that capture complex dynamics. Complexity-aware evaluation has the opportunity to build empirical evidence and understanding as to how program interventions can trigger emergence and positive feedback loops in practice. In this way, the recyclable mid-range theories just mentioned can show how relatively small development investments can lead to large-scale impact. Doing this in practice involves identifying critical parts of an overall program ToC where non-linear responses are expected and then developing more detailed ‘nested’ theories of change for these.
  4. Clarify and engage with broader system expectations. Most programs operate as part of a broader system upon which they depend in part for legitimacy, funding and recognition. Choice of evaluation methods and use of evaluation findings may lead to tension between the program and the broader system, especially when the program is engaging in new and innovative approaches to deal with complex settings. Advocates of new approaches must recognize the need to explain them, prove their worth and show where they fit into the broader system.
