Tuesday 15 November 2011

Are evaluators too fixated on objectives? The case of conflict-affected states

By Chris Barnett


It is a while now since I co-facilitated a workshop outside Brussels on evaluation in conflict-affected situations. We explored alternative approaches to evaluation, building on recent experiences of multi-donor evaluations in South Sudan and the DRC. I have previously raised the question of whether theories of change have their limitations (see my earlier post on evaluation approaches). In the end, though, we spent much more time exploring the use and limitations of objectives in evaluation practice. Here is a brief summary of those discussions.

Objectives form the reference point for most mainstream (development) evaluations. Objectives underpin the OECD/DAC evaluation criteria. For instance:

  • Relevance: Are the programme objectives aligned to the situation?
  • Effectiveness: Were the programme objectives met?
In development evaluation,[1] we sometimes come across programmes that are radically re-designed at mid-term. When this happens, the evaluator faces a dilemma: do you measure success against the original objectives, and risk undervaluing a genuinely flexible and responsive programme? Or do you measure success against the re-designed objectives, and risk excusing an initially poor design? The former often seems too harsh, the latter too lenient.

In conflict-affected situations, flexibility is a must and re-design occurs more often. Many things cannot be planned in advance, and managers must respond to constraints and opportunities as they arise. Yet being responsive to a dynamic context can undermine plans made three or five years earlier. There are also other reasons why clearly planned objectives may be undermined in such fragile contexts:
  • Terms such as ‘peace’ and ‘conflict prevention’ may not be well defined. Indeed, not all stakeholders within a single programme may share the same understanding (e.g. differing civilian and military perspectives).
  • Peace may not be an explicitly-stated objective. The Utstein palette (PDF) is helpful in this regard, as it unpacks the full range of activities that may contribute to conflict prevention and peace-building. Yet for the evaluator, this all-encompassing view of peace-building activities presents a challenge: interventions may contribute to peace even when this is not a stated objective.
  • Objectives may be politically sensitive, and may be hidden or masked. For the ex post evaluator, a retrospective interpretation of such objectives opens up the real possibility of contested findings. This is especially so where the ‘evaluation space’ is itself highly contested, as was the case in the CPPB evaluation in Sri Lanka (PDF).
So, what provides an alternative point of reference?

Well, the perspectives of local people: their needs, and their understanding of what drives the conflict. The draft OECD Guidance on Evaluating Conflict Prevention and Peacebuilding Activities (currently under revision) provides an entry point for developing such an approach. It layers conflict analysis onto the traditional DAC evaluation approach, although it does not fully integrate the two. A more radical alternative would be to use the conflict analysis itself (especially an analysis by local people of what drives the conflict) as the reference point for evaluating the intervention. For instance:

  • Relevance: To what extent does the programme reflect a local understanding of the conflict?
  • Effectiveness: To what extent does the programme address the drivers of the conflict?
  • Impact: To what extent is the programme leading to reduced conflict, by building peace?
Some will argue that this shifts accountability away from programme managers. After all, managers should be accountable for their design, and therefore measured against the objectives they stated. But this suggested approach is more about making managers accountable to their constituents (their target audience), and it focuses on whether they are addressing local needs and the factors driving the conflict.


[1] By development evaluation, I am simply referring to evaluations conducted in relatively stable developing contexts, in contrast to those undertaken in fragile or conflict-affected states.