Friday 25 November 2011

Think before you jump (into the social media ocean)

By Emilie Wilson

In the last few months, I’ve been reading and following debates about the use and impact of social media, especially blogging, so where better to share my findings and reflections than in this blog...

First, a spate of recent research and surveys on the use and impact of social media in the development sector

The Global Development Network (GDNet) has recently published a review of the use (or not) of social media for research collaboration amongst southern academics. As a network comprising more than 8,000 researchers worldwide, GDNet should be commended for not jumping onto the social media bandwagon without first doing some homework on the relevance and appropriateness of these tools for its members.

Findings seem to show that, while there are regional and gender differences, levels of uptake amongst academics are generally low. Barriers to adoption include poor infrastructure or equipment (still), usability, time, and the perceived value or credibility of the tools, as well as a lack of institutional incentives. Sound familiar?



Image accessed from: http://pedagogy.cwrl.utexas.edu/

In the global North, the development sector (at least when it comes to aid agencies, NGOs and think tanks) is well and truly on the social media bandwagon. In the last couple of months, Devex published Top 10 Development Groups on Social Media, Vodafone’s World of Difference charity recommended Top 10 Development blogs, and the Guardian highlighted 20 blogs in its Global Development Blogosphere.

Those engaged in social media seem to range from individuals (activists, aid workers or academics), to small groups with shared interests, to institutions that have a clearly articulated ‘social media strategy’.

This has prompted some thinking around sustainability and impact.

Following the closure of established and popular blogs such as AidWatch, Duncan Green, who writes the From Poverty to Power blog, recently speculated on whether the blogging bubble was about to burst. However, when considering whether to wind down his own blog, Duncan came up with some good reasons to keep it going. Some were personal: “blogging forces you to read stuff more carefully and come to a view”; others aspirational: “blogging has turbocharged a part of the development discussion best described as the ‘ideas space’”.

The blogosphere as an epistemic community

Duncan’s personal reflections are corroborated in a recent paper by David McKenzie and Berk Özler on The Impact of Economic Blogs (PDF). This is a substantive attempt at collecting evidence around the following questions:
1. Do blogs improve dissemination of working papers or journal articles?
2. Do they raise the profile of their creators?
3. Do they cause changes in attitudes among their readers or lead to increased knowledge?

It seems that their evidence shows positive results:
  • Blogging has a significant impact on abstract views and paper downloads
  • Regular blogging is strongly and significantly associated with being more likely to be viewed as a favourite economist
  • A majority of readers have read a new economics paper as a result of a blog posting, and more policy-oriented respondents say that blogs are having an influence on how people feel about the effectiveness of particular policies
This latter finding chimes well with a recent survey conducted by SmartAid, which asked respondents, amongst other questions, why they read [development] blogs.

This was their encouraging response.

Graph accessed from: http://findwhatworks.wordpress.com/2011/09/22/blog-survey-findings-5-why-the-audience-reads-blogs/

A full breakdown can be found on Dave Algoso’s blog, Find What Works, and makes for interesting reading.

Phew! It’s all a good reason to keep blogging then!

However, in case we bloggers start to take ourselves too seriously, I wanted to share a view on blogging from the McKenzie and Özler paper which I found amusing, though I hope no one who reads this thinks it applies to Impact and Learning...

“a largely harmless outlet for extrovert cranks and cheap entertainment for procrastinating office workers” (Bell, 2006)

Postscript: do you write a blog? What do you do to measure the influence and impact of your blog?

Tuesday 15 November 2011

Are evaluators too fixated on objectives? The case of conflict-affected states

By Chris Barnett


A little while ago, I co-facilitated a workshop outside Brussels on evaluation in conflict-affected situations. We explored alternative approaches to evaluation, building on recent experiences of multi-donor evaluations in South Sudan and the DRC. I have previously raised the question of whether theories of change have their limitations (see my earlier post on evaluation approaches). In the end, we spent much more time exploring the use and limitations of objectives in evaluation practice. Here is a brief summary of those discussions.

Objectives form the reference point for most mainstream (development) evaluations, and they underpin the OECD/DAC evaluation criteria. For instance:

  • Relevance: Are the programme objectives aligned to the situation?
  • Effectiveness: Were the programme objectives met?
In development evaluation,[1] we sometimes come across programmes that are radically re-designed at mid-term. When this happens, the evaluator faces a dilemma: do you measure success against the original objectives, and risk undervaluing a genuinely flexible and responsive programme? Or do you measure success against the re-designed objectives, and risk excusing an initially poor design? The former often seems too harsh, the latter too lenient.

In conflict-affected situations, flexibility is a must and re-design occurs more often. Many things cannot be planned in advance, and managers respond to constraints and opportunities as they arise. Yet being responsive to a dynamic context can undermine plans made three or five years previously. There are also other reasons why clearly planned objectives may be undermined in such fragile contexts:
  • Terms such as ‘peace’ and ‘conflict prevention’ may not be well defined. Indeed, not all stakeholders within a single programme may share the same understanding (e.g. differing civilian and military perspectives).
  • Peace may not be an explicitly stated objective. The Utstein palette (PDF) is helpful in this regard, as it unpacks the full range of activities that may contribute to conflict prevention and peace-building. Yet for the evaluator, this all-encompassing view of peace-building activities presents a challenge: interventions may contribute to peace even when this is not stated.
  • Objectives may be politically sensitive, and may be hidden or masked. For the ex post evaluator, a retrospective interpretation of such objectives opens up the real possibility of contested findings. This is especially so where the ‘evaluation space’ is highly contested, as was the case in the CPPB evaluation in Sri Lanka (PDF).
So, what provides an alternative point of reference?

Well, the perspectives of local people: their needs, and their understanding of what drives the conflict. The draft OECD Guidance on Evaluating Conflict Prevention and Peacebuilding Activities (currently under revision) provides an entry point for developing such an approach. It layers conflict analysis onto the traditional DAC evaluation approach, although it doesn’t fully integrate the two. Instead, a more radical view would be to use the conflict analysis (especially an analysis by local people of what drives the conflict) as the reference point for evaluating the intervention. For instance:

  • Relevance: To what extent does the programme reflect a local understanding of the conflict?
  • Effectiveness: To what extent does the programme address the drivers of the conflict?
  • Impact: To what extent is the programme leading to reduced conflict, by building peace?
Some will argue that this shifts accountability away from programme managers. After all, managers should be accountable for their design, and therefore measured against the objectives they stated. But this suggested approach is more about making managers accountable to their constituents (their target audience), focusing on whether they are addressing local needs and the factors behind the conflict.


[1] By development evaluation, I am simply referring to evaluations conducted in relatively stable developing contexts, in contrast to those undertaken in fragile or conflict-affected states.

Wednesday 2 November 2011

Exploring the black box together: evaluating the impact of knowledge brokers

Cartoon by Sidney Harris (2007)
By Catherine Fisher

I love this cartoon! 

It seems to capture the idea of the "black box" that lies between the activities knowledge brokers and intermediaries undertake and the outcomes and impacts they seek to achieve. That’s not to say that they don’t achieve outcomes in the real world; rather, the pathways by which their work brings about change are difficult to unpack and evaluate.

The Knowledge Broker’s Forum (KBF) has started exploring this "black box" of how to evaluate the impact of knowledge brokers and intermediaries in an e-discussion running from 31 October until 9 November. I am (lightly) facilitating this discussion, along with Yaso Kunaratnam from IDS Knowledge Services.

If you would like to participate, you can sign up on the forum's website; it's open to anyone with an interest in this area.

Challenges in evaluating impact

We know there are many challenges to evaluating the impact of knowledge brokering. Some stem from the processes (psychological, social and political) through which knowledge and information bring about change, the contested nature of the relationship between research and better development results, and the difficulty of identifying contribution to any changes in real-world contexts. This is particularly challenging for actors that seek to convene, facilitate and connect, rather than persuade or influence.

As well as these quite high-level challenges, there are very practical issues around the lack of time and resources to dedicate to understanding impact effectively. These challenges are explored in a background paper (PDF) I prepared as food for thought for those taking part in the e-discussion.

As this is an e-discussion amongst 400+ knowledge brokers from all over the world, I am not sure yet where it will go, but I am hoping it will shed some light on the following areas:

Breadth and depth of impact and outcomes  

How far do people go to identify the ultimate outcomes of knowledge brokering work? I feel we can certainly go beyond immediate impact (e.g. personal learning) to push towards what that resulted in; however, I wonder whether it is meaningful to start looking at human development and wellbeing indicators. It will be interesting to see how far others are going.

Understanding behaviour change

If knowledge brokering is about behaviour changes that ensure greater engagement with research evidence, how are people defining those behaviour changes, and how are they measuring them? Are we too easily impressed by stories of information use when these could in fact hide some very poor decision-making behaviours?

Opportunities for standardisation of approaches and data collection

If people have come up with ways of doing this, is there any appetite for standardising approaches to enable greater comparison of data between different knowledge brokering initiatives? This would help us build a greater understanding of the contribution of knowledge brokers beyond the scope of any one broker’s evaluation.

I’ll also be interested to explore and challenge some of my assumptions – in particular that building some kind of theory or map of change is an important starting point for defining and then seeking to evaluate impact. This has been discussed previously on this blog and is a hot topic at the moment.

Our discussion will face challenges – not least that the huge variety of types of knowledge brokering, and of contexts in which it is undertaken, may mean there is not enough common interest. But I am sure there is a lot of experience in the group that can be brought to bear on these questions and that, in 10 days’ time, we will have a better idea of what is known, who is keen to explore this further, and hopefully how we could move forward to develop our understanding in this area.