Monday, 24 October 2011

Beyond happy sheets: outcome-focused event evaluation

By Penelope Beynon

Since joining the knowledge for development sector in June last year, I have participated in no fewer than two international conferences, three regional workshops and a host of cross-organisational meetings (and sent apologies for three times as many of each). Some cost money (for international or intercity travel), all have opportunity costs (being here instead of there), and all of them cost time.

As a participant, I find there is something innately attractive and energising about being together in a room with experts and peers that just cannot be simulated through online alternatives; but as a taxpayer I can’t quite shake that uncomfortable question – was it worth it?

In my role as M&E advisor I am occasionally asked how to evaluate events. While I haven’t yet found a tried and tested method that fits every event, I thought I’d share a few things I have learnt along the way.

With a few notable exceptions (e.g. A Process and Outcomes Evaluation of the International AIDS Conference, Lalonde et al 2007), most organisers fail to evaluate their events beyond a cursory feedback form that gauges audience satisfaction (commonly referred to as a ‘happy sheet’). But, if an organiser did want to push their evaluation to a new level and address the ‘uncomfortable’ question of worth – where would they begin?

In its most simplistic form, I propose that a worthwhile event evaluation needs to gather three types of information:
  • Costs 
  • Outcomes 
  • Reasonable alternatives

The full financial cost of events is rarely included in evaluation

The table below shows a summary of some areas where events incur costs. Unsurprisingly, few organisers publish the full financial costs of their events (grey box), or even add up their own financial and time costs (grey + purple boxes) for evaluation purposes, let alone start to consider the sectoral costs of their event to participants and contributors.
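The gap between the grey box and the full table can be made concrete with a back-of-the-envelope calculation. Here is a minimal sketch in Python; the cost categories, figures and the day rate used to monetise time are all invented for illustration, not drawn from any real event:

```python
# Tallying the full cost of an event across the three groups who incur
# costs. All category names and figures below are illustrative only.

EVENT_COSTS = {
    "organiser":    {"financial": 12_000, "time_days": 40},
    "participants": {"financial": 30_000, "time_days": 150},  # travel, fees
    "contributors": {"financial": 5_000,  "time_days": 25},   # speakers, facilitators
}

DAY_RATE = 300  # assumed average daily rate for monetising time costs

def full_cost(costs: dict, day_rate: float) -> float:
    """Sum financial costs plus monetised time costs across all groups."""
    return sum(
        group["financial"] + group["time_days"] * day_rate
        for group in costs.values()
    )

organiser_only = EVENT_COSTS["organiser"]["financial"]  # the 'grey box'
print(f"Organiser financial cost only: {organiser_only}")
print(f"Full sectoral cost estimate:   {full_cost(EVENT_COSTS, DAY_RATE)}")
```

Even with rough numbers like these, the point stands: the published figure (the grey box) can be a small fraction of what the event actually cost the sector.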



Focusing on desired outcomes
Learning events may benefit all of these groups (P. Beynon, IDS)

1.    Spread your net wide when looking for outcomes

A common shortcoming of the few outcome-focused event evaluations that I have unearthed is a narrow concept of where benefits will occur, and an almost exclusive focus on participants as the subjects for evaluation. Just as there are at least three groups who can incur costs for an event, these same groups can all feasibly reap benefits (see diagram).



2.    Tailor your evaluation tools to match desired outcomes

Like all interventions, face-to-face events do not happen in isolation: they are usually part of a wider set of strategies intended (implicitly or explicitly) to contribute in some way to a programme's overall theory of change. Unfortunately, more often than not this link is not properly explored, and event objectives read like either a) a less-than-ambitious list of activities, or b) an overly ambitious set of development aspirations well beyond anything the event could possibly deliver. Work closely with organisers to flesh out their theory of change and to situate the conference objectives within the wider programme context - then you will be able to tailor your evaluation tools to match the desired outcomes. While some organisers are coming up with interesting tools and approaches for outcome-focused event evaluation (e.g. network mapping (PDF), 3-test self-assessment), which I explore along with a few of our own attempts in a forthcoming ILT Practice In-Brief paper, most still limit their data sources to attendance records and the standard ‘happy sheet’.

3.    Follow through on your follow up!

The biggest limitation of most event evaluations is a lack of meaningful follow-up. Change takes time, and unless you follow up with participants once they are back in their workplace, you will only capture intended behaviour change or the initial step towards an extended network. Be disciplined – schedule event follow-ups for 3, 6, even 12 months after the fact.
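Being disciplined about this is easier if the follow-up dates are fixed the day the event ends. A small sketch, using only the Python standard library (the event date below is illustrative):

```python
# Turn "3, 6, even 12 months" into concrete calendar dates for follow-up.
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, clamping the day when the
    target month is shorter (e.g. 31 January + 1 month -> 28/29 February)."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30,
                     31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, days_in_month))

def follow_up_schedule(event_end: date, months=(3, 6, 12)):
    """Return the dates on which to send follow-up surveys."""
    return [add_months(event_end, m) for m in months]

for due in follow_up_schedule(date(2011, 10, 24)):
    print("Follow-up due:", due.isoformat())
```

Put the resulting dates straight into the evaluation plan (and your calendar) so the follow-up survives the post-event lull.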

Is there a cheaper way to achieve the same outcomes?
Well, this really is the million-dollar question, and without a clear picture of our costs and benefits it simply cannot be answered. But once you do have this level of information for one event, you can start comparing that event with another, and maybe even progress to comparing all your face-to-face events with other strategies that use different tactics to achieve similar aims: ongoing rather than one-off events; online rather than face-to-face convening; one-to-one engagement rather than convened events...
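Once costs and outcomes are both on the table, the comparison itself is simple arithmetic. A hypothetical sketch - the strategy names and every figure are invented, and "new collaborations" merely stands in for whatever outcome measure your evaluation actually tracks:

```python
# Comparing two convening strategies on cost per outcome achieved.
# All names and numbers are hypothetical, for illustration only.

events = {
    "face-to-face conference": {"cost": 111_500, "new_collaborations": 18},
    "online seminar series":   {"cost": 9_000,   "new_collaborations": 5},
}

def cost_per_outcome(event: dict) -> float:
    """Total cost divided by the chosen outcome measure."""
    return event["cost"] / event["new_collaborations"]

# Rank strategies from cheapest to most expensive per outcome.
for name, data in sorted(events.items(), key=lambda kv: cost_per_outcome(kv[1])):
    print(f"{name}: {cost_per_outcome(data):.0f} per new collaboration")
```

The ranking is only as good as the outcome measure, of course - which is exactly why the outcome-gathering steps above have to come first.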

To conclude
As the saying goes, “If it’s worth doing at all it is worth doing properly” - so I urge organisers to go beyond ‘happy sheets’ and really scrutinise the worth of their events for their own sake and for the sake of the sector.

Friday, 14 October 2011

Early headlines from research on policy makers and ICTs: "persistent and curious enquirers" (with smartphones)

By Simon Batchelor

Just to keep you up to date on the country studies I mentioned in my first blog (in which I spoke about research we were conducting on policy makers and their use of ICTs): a lot of data is in. Some countries found it easier than others to get interviews with senior policy makers, so some countries have still to deliver their full quota.

However, we have now begun analysis and are starting to find some interesting headlines. As I write, my colleagues Jon Gregson and Manu Tyagi are presenting some headlines back to a portion of the intermediary sector in India and Nepal, and Chris Barnett presented last week in Ghana. I would like to acknowledge the work of our partners ODC Inc in Nepal and Delink Services in Ghana.

So what are some of those headlines?

We will upload the slides to Slideshare soon, but in brief here are some of the things that attracted my attention:

Policy actors have access to ICTs, and a considerable number of them have smartphones and - to my mind more importantly - know how to use them!

Image from: http://bestsmartphone2011.info/

Of course, almost all of them have access to computers, the internet and cellphones. But 52% of the sample in Ghana, 49% in Nepal and 35% in North India have smartphones. In Ghana, 25% had more than one smartphone! And of those who have a smartphone, almost all in Ghana and Nepal have explored sending emails, surfing the internet on the phone, recording a video and instant messaging. Only in North India did a significant portion of people (about 50%) have a smartphone and yet not explore these ‘features’.

What does this mean to us in the intermediary sector? It suggests that if you are developing an app to push research into the policy environment, then the baseline of smartphone use is there.

Policy actors are surfing the internet themselves – the idea that policy makers wait for an assistant to brief them seems to be diminishing.  

In all three countries, the majority of policy makers agreed with statements about their own use of ICT and surfing the internet. They described themselves as ‘a "persistent and curious" enquirer’ and noted that they ‘often "discover" other relevant information when searching’ (phrases used by the Pew Internet studies in the USA). They also agreed, to a lesser extent, with ‘I tend to get my briefings face to face officially, in meetings’. In Ghana, where the sample included a significant portion of private sector executives, a significant number actually disagreed with the idea that they got their information from ‘official briefings’.

What does this mean to us in the intermediary sector? It suggests that policy actors are looking for information themselves, and, I presume, therefore need to find it easily, in an accessible form, and I guess, quickly.  

Yes, I know that searching for information online is evolving, and that social networks now tend to push information within the network.  This changes the way those of us who are well connected get our information.  We did investigate whether the policy actors are connected to social media networks and to some extent looked at their searching behaviours, but we are not there yet in the analysis to be able to comment on it.  Watch this space.

Policy actors do have an appetite for research – or at least they say they do 

There was consistently strong agreement with the need for facts and figures, and that these need to be up to date. We explored what information they were actually looking for, and whether they trust the sources and channels for that information. Again, these details will come out as the analysis proceeds. However, there was an interesting difference between the three countries: in India there is strong trust in ‘local research’ (as opposed to international research), whereas in Ghana and Nepal respondents rate international research much higher than local research.
 
What does this mean to us in the intermediary sector? In our MK4D programme, we are working on the idea that local intermediaries understand the context of research and policy in their location, and are therefore well placed to communicate research to policy makers. However, we also work with the idea of ‘co-construction’ - working alongside our colleagues in the South. If ‘local research’ is trusted less by policy actors, that would seem to endorse the approach of co-construction, where local and international bodies work together to provide quality insights. It also suggests that our programme to support the exposure of research published in the South onto the global internet is heading in the right direction.

Anyway, those are some insights from the first week of analysis.  More to come.

Friday, 7 October 2011

Getting serious about the evidence in policy making

By Nick Perkins
 
See www.impactevaluation2011.org
Earlier this year, the International Initiative for Impact Evaluation - better known as 3ie - convened a conference in Mexico called Mind the Gap: From Evidence to Policy Impact.

I liked the idea of dedicating 3 days, dozens of presentations and hundreds of blog posts to that little ‘leap of faith’ which characterises so many theories of change about what research can do for development.


The problem we face is that the normative idea about how policy should be made - based on objective evidence - is seldom the reality; instead, policy is made through political expediency. Political expediency is understood here as a range of contextual influences on the decision-making process. When described this way, there is something inevitable about it.

Current thinking is that this expediency can be addressed through the mediation of research knowledge. This has given rise to the research mediation sector - institutions, and individuals within institutions, who seek to frame research in a way that is accessible and relevant to people working in key policy spheres.

What this reveals is a kind of contradiction at the heart of the development knowledge sector: while we call for evidence-based policy making, there is also increasing investment in the complex process that shapes decision making. A way through this may be revealed by a closer look at what research mediation actually entails.

A couple of years ago, IDS held a series of ‘influencing seminars’ which revealed how different disciplinary communities nuanced their approaches to policy influence depending on how they understood change to happen. None of them declared disdain for the value of quality evidence. Instead, they all expressed differing views of what constitutes ‘quality’ evidence and how to gain traction with those who might need it.

What emerged was a framework of four different ways of building an effective relationship between research and quality policy making.

The first is about generating as many policy options as possible. This emphasises the use of repositories to allow users to sift through the options for themselves.

The second is evidence-based and prioritises the familiar idea that the quality of the research evidence is what will best inform the quality of the decision. Systematic reviews are seen as crucial in the research mediation process here.

Third is the value-led idea of policy-making. There are many examples of this leading to bad science, but it is by far the most common type of public policy making. Networks and epistemic communities are critical to the mediation process in this case.

Finally, we have the relational model of influence, which maintains that no amount of research will influence a policymaker unless there is a relationship which reflects equity and a balance of power - where a researcher or mediator is themselves subject to some influence.

Clearly, though, none of these frames is mutually exclusive. Perhaps the point is that we can support the complex reality of policy influence, which draws on all of them, without losing sight of where we ultimately need to get to. In fact, using a little political expediency ourselves can go a long way towards crossing what is too often seen as a small gap.

Thursday, 29 September 2011

Rethinking development in an age of scarcity and uncertainty


By Emilie Wilson

Last week, I was in York attending the European Association of Development Institutes (EADI) and Development Studies Association (DSA) joint conference, whose thematic focus is the title of this blog.

Downloadable from www.ids.ac.uk
There is something in the word "rethinking" which might suggest an attitude of "it's broken, let's go back to the drawing board and start again". IDS used the term "reimagining", which has a more inspirational and creative resonance to it, when developing research around exploring and responding to crises, the results of which have been captured in the latest IDS Bulletin.

From my experience at the conference, the broad assembly of academics, activists and policymakers reflected a spectrum of approaches to "rethinking": from the radicals who want to upend the existing financial, governance and informational infrastructure on which development sits, to the innovators who modify and improve on existing systems, the philosophers who want to ask uncomfortable questions about the ethics of giving to poor countries over poor people or the politics of measurement, and the pragmatists who want solutions and answers to the problems they perceive.

The approach I try to bring to “rethinking” is one of learning: reflecting on past and current theories and practices, identifying areas of improvement and opportunities for innovation, and sharing this experience more widely. Hence the blog I wrote for the conference entitled “Let’s not throw out the baby with the bath water”.

As I say in that blog, I found the definition of knowledge by Dr Sebastiao Mendonco Ferreira fascinating. He contrasted the practice of managing knowledge as a resource with managing natural resources. This was a useful way to focus our minds on ‘knowledge’, a rather slippery word - like an intellectual game of charades or a riddle: ‘it’s intangible, non-rivalrous, non-erodible, human-made, both tacit and explicit, contained in receptacles such as human minds or embedded in machines, it’s unlimited’… would you have arrived at ‘knowledge’ after this description?

He went on to highlight the role of the internet for knowledge ecosystems: will this increasingly custom-made and intuitive ‘web environment’ help us develop epistemic cultures and communities? Sebastiao suggests we need to address our limited ability to ‘absorb’ knowledge - knowledge which is increasingly complex and sophisticated, and thus difficult to verify. This is one approach that IDS has taken through its development of a social networking platform for people working in development, Eldis Communities.

Read the rest of the original post.

You might also be interested in my colleague Yaso Kunaratnam’s post on: Rethinking the role of intermediaries in bridging policy, research and practice

Wednesday, 14 September 2011

Exploring evaluation approaches: Are there any limits to theories of change?

By Chris Barnett

I’m in Brussels co-facilitating a course on evaluation in conflict-affected countries, with Channel Research. We are exploring new and alternative approaches to evaluation, building on recent experiences of multi-donor evaluations in South Sudan and the Democratic Republic of Congo (DRC). The South Sudan and DRC evaluations are part of a suite of evaluations that sought to test the draft OECD Guidance on Evaluating Conflict Prevention and Peacebuilding Activities.

While the context is very specific, I’m hoping that the discussions will raise some interesting issues around the way we approach evaluation, and particularly how we use theories of change. The term “theory of change” is much overused at the moment, and seems to mean different things to different people. In this case it is defined as “the set of beliefs [and assumptions] about how and why an initiative will work to change the conflict” (OECD Guidance, page 35). Duncan Green, in his blog, also helpfully distinguishes a theory of change (a classic, linear intervention logic, or results chain, used as a basis for logical frameworks) from theories of change (such as a range of theories about the political economy of how and why change occurs).

Photo courtesy of Jon Bennett
Working in conflict-affected states poses many challenges for evaluation, not least the changing context, instability and insecurity. In most cases it is not feasible to set up a controlled experiment and maintain it over a reasonable period of time. Not only are there the cost and ethical issues of distributing benefits randomly, but also the sheer technical difficulty of maintaining a robust counterfactual in a context of so much change. It is not impossible, of course (e.g. IRC’s evaluation of Community Driven Reconstruction in Liberia); it is just often not appropriate or feasible.

Hence, the OECD Guidance focuses on a theory-based approach to evaluation (NB: Henry Lucas and Richard Longhurst, IDS Bulletin 41:6, provide a useful overview of different evaluation approaches). At the heart of the OECD Guidance is the need to identify theories of change, against which to evaluate performance.

But in South Sudan and DRC we found a number of limitations to this approach:

1. Firstly, we found it challenging to apply a theory of change approach at the policy or strategic level. Most donors did not articulate a transparent, evidence-based rationale for intervening – sometimes intentionally so, given the dynamic and sensitive context. This meant that reconstructing theories of change for evaluation purposes became highly interpretive and open to challenge – particularly when drawing out differences between stated and de facto policies.

2. Secondly, we found that different theories of change existed at different levels. As one moved down from the headquarters level to the capital city, and on to local government and field levels, views differed about the drivers of conflict and the theories of change needed to address them. This presented the evaluator with a dilemma – and sometimes wrongly cast them as arbiter between different perspectives and realities.

3. Thirdly, while many activities contribute to conflict prevention and peacebuilding, many were not explicit about such objectives. Again, the reconstruction of the de facto theories of change against which to assess performance becomes highly interpretive and more open to challenge.

So what do we hope to do this week? We will be exploring alternatives to such Objective- or Goal-Based evaluations, which seek to assess performance against the stated (or reconstructed) theories behind an intervention. Instead, we’ll explore some Goal-Free alternatives – where data is gathered to compare outcomes with the actual needs of the target audience, using reality as the reference point rather than a programme theory. After all, in many walks of life we do not “evaluate” performance against stated objectives: when we assess whether a car is good or not, we do not consider whether the design team fulfilled its objectives! Rather, we are interested in whether it fulfils our needs.