Friday 16 December 2011

How can we make research communications stickier?

Guest Post: 
James Georgalakis, Communications Manager at the Institute of Development Studies (IDS)

James looks back on 2011 and reflects on the most-viewed news stories about IDS research, featured on the IDS website. Here are his thoughts on what made them popular:


Image from: http://serenelyfull.blogspot.com
Which were the "stickiest" stories on the Institute of Development Studies’ (IDS) website in 2011? Is the use of SEO (search engine optimisation) techniques cheating? And do we improve research impact by dropping in the names of royals and celebrities? Just some of the questions I try to answer in this end-of-year blog.

News or blogs on development research are highly unlikely to compete with viral YouTube sensations involving celebrities or pets that reach millions in hours. But a quick scan through the top ten most-viewed news stories on the IDS website in 2011 still tells us a lot about what makes some research communication super sticky.

In fact, some IDS news stories were so sticky they came top of the 2011 list even though they were not even posted in 2011. For example, a story about Ian Scoones’ book on Zimbabwe’s land reform, posted in November 2010, came in at number one, followed closely by Andy Sumner’s New Bottom Billion, also a 2010 story. What did these stories have that others did not? Well, of course, exciting, original research helps!

Both of these examples tick that box. Scoones’ revelation that Zimbabwe’s land reforms were not so bad after all quickly ignited a lively media debate that has just run and run. It was still running a year on with this BBC Radio 4 documentary looking closely at his claims. Sumner’s startling findings on the changing nature of poverty also ignited debate in the media and academia and stoked up the blogosphere. This is why web traffic just keeps on finding its way back to the original web stories.

However, not all of our top ten scorers from this year were about original research with counterintuitive findings.

Consider our official number one story from 2011 (the most page views of all the stories actually posted in that year). It is news of a special conference celebrating the work and ideas of IDS Research Associate, Robert Chambers, with links to all the related materials.

This is not hard news and it is not particularly surprising or controversial. Instead, what we have here is the awesome stickiness of a big (well, stellar) name in the development research community. How often is Chambers’ name googled? How big are the networks of people who have shared the link with one another and will have flocked to access this content? How quickly did news of his conference spread across the blogosphere with links back to the original content?

Robert Chambers was not the only big name to make the IDS top ten. At number two we have Kate Middleton and Prince William who, according to the IDS headline from last April, ‘got engaged in Africa’s land grab hotspot’.

Yes, that’s right! We took the convergence of a royal wedding, the happy couple’s obscure connection with land grabs and a recent IDS-hosted academic conference on land grabbing to produce something really sticky. Just think how many royal wedding fans inadvertently became informed on the land grab issue. Such are the rewards of working in research communications.

To be fair to the Future Agricultures Consortium, who organised the International Conference on Global Land Grabbing, this is a pretty sticky topic even without royal endorsement. It features in no fewer than three of our top ten spots from 2011.


The tactic of using highly topical and sticky words or names in your headline is known as search engine optimisation (SEO) – or cheating, depending on your point of view – and it works. When used well, SEO means that big search engines like Google, Yahoo! and Bing will list the relevant link(s) to your website on their first page of results, increasing the likelihood of people clicking through.
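To make the idea a little more concrete, here is a minimal sketch in Python (my own illustration, not an IDS tool) of how you might score candidate headlines against a hand-picked list of currently topical search terms. The terms, weights and headlines other than the real IDS ones are invented for the example.

TOPICAL_TERMS = {"land grab": 3, "royal wedding": 3, "kate middleton": 2,
                 "prince william": 2, "arab uprisings": 2, "horn of africa": 2}

def stickiness_score(headline: str) -> int:
    # Crude proxy for how "searchable" a headline is: sum the weights of
    # any topical terms it contains.
    text = headline.lower()
    return sum(weight for term, weight in TOPICAL_TERMS.items() if term in text)

headlines = [
    "Prince William and Kate Middleton engaged in Africa's land grab hotspot",
    "Debating the global land grab",
    "Conference proceedings now available online",
]

# Print the candidates from stickiest to least sticky
for h in sorted(headlines, key=stickiness_score, reverse=True):
    print(stickiness_score(h), "-", h)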

Of course, topicality is itself one of the greatest assets of all. Just take the Arab uprisings or the Horn of Africa crisis, which both, perhaps not surprisingly, made our top ten.

However, as any newspaper sub-editor will tell you: if all else fails you just need a great headline.

What else can explain a story about a podcast from an IDS Sussex Development Lecture coming in at number eight? Don’t get me wrong: it was a really great (and packed-out) lecture, but still – a story about a podcast!

It was titled: ‘Decline of the NGO Empire – where next for international development organisations?’ Pretty good, huh? Yes, it was one of mine. Also very nice (and not one of mine), slipping in at number ten, is: ‘Taking the scare out of scarcity - how faulty economic models keep the poor poor’. See what we did there?

Now I know the whole subject of web traffic drivers and SEO is way more complicated than this.

Our study of the IDS top ten has severe limitations. Clearly, content published early in the year has an advantage over the stories that came after, and some of our high scorers were boosted by extended periods on the IDS home page. Plus, it is about push as well as pull, and for that we have to start thinking about the role of social media, where content is positioned on the site, and a multitude of other factors.

This is another blog for another day, but my point is this: if you want lots of people to find the story about your research irresistible, consider this 2011 IDS top ten. Methodologically sound and original research is great, but a crowd-pulling name, timeliness, a big surprise and a great title all help a lot too.

If you want to know more about this stickiness business I suggest you read Made to Stick: Why Some Ideas Survive and Others Die, by Chip and Dan Heath, which should be a compulsory text for all those working in research communications.

Happy holidays!
James Georgalakis, Communications Manager, IDS

And here is that 2011 IDS website news stories top 10 in full (listed by order of unique page views to date):

1. Revolutions in development: reflecting forwards from the work of Robert Chambers
2. Prince William and Kate Middleton engaged in Africa’s land grab hotspot
3. Bellagio Initiative starts an IDS led global debate exploring the future of philanthropy and international development
4. Debating the global land grab
5. How a citizen led approach can transform aid to governance
6. The East African food crisis: beyond drought and food aid
7. Experts warn of new scramble for Africa at an international conference on land grabbing
8. Decline of the NGO empire – where next for international development organisations?
9. The people revolt: why we got it wrong for the Arab world
10. Taking the scare out of scarcity - how faulty economic models keep the poor poor

Friday 25 November 2011

Think before you jump (into the social media ocean)

By Emilie Wilson

In the last few months, I’ve been reading and following debates about the use and impact of social media, especially blogging, so where better to share my findings and reflections than in this blog...

First, a spate of recent research and surveys on use and impact of social media in the development sector

The Global Development Network (GDNet) has recently published a review of the use of social media (or not) for research collaboration amongst southern academics. As a network comprising more than 8,000 researchers worldwide, GDNet should be commended for not jumping onto the social media bandwagon without doing some homework around relevance and appropriateness for its members first.

Findings seem to show that, while there are regional and gender differences, levels of uptake amongst academics are generally low. Barriers to adoption include poor infrastructure or equipment (still), usability, time and the perceived value or credibility of the tools, as well as a lack of institutional incentives. Sound familiar?



Accessed from: http://pedagogy.cwrl.utexas.edu/
In the global North, the development sector (at least when it comes to aid agencies, NGOs and think tanks) is well and truly on the social media bandwagon. In the last couple of months, Devex published Top 10 Development Groups on Social Media, Vodafone’s World of Difference charity recommended Top 10 Development blogs, and the Guardian highlighted 20 blogs in its Global Development Blogosphere.

Those engaged in social media seem to range from individuals (activists, aid workers or academics) to small groups with shared interests to institutions who have a clearly articulated ‘social media strategy’.

This has prompted some thinking around sustainability and impact.

Following the closing down of established and popular blogs, such as AidWatch, Duncan Green, who writes the widely read From Poverty to Power blog, recently speculated on whether the blogging bubble was about to burst. However, when considering whether to wind down his own blog, Duncan came up with some good reasons to keep it going. Some were personal: “blogging forces you to read stuff more carefully and come to a view”; others aspirational: “blogging has turbocharged a part of the development discussion best described as the ‘ideas space’”.

The blogosphere as an epistemic community

Duncan’s personal reflections are corroborated in a recent paper by David McKenzie and Berk Özler on The Impact of Economic Blogs (PDF). This is a substantive attempt at collecting evidence around the following questions:
1. Do blogs improve dissemination of working papers or journal articles?
2. Do they raise the profile of their creators?
3. Do they cause changes in attitudes among their readers or lead to increased knowledge?

It seems that their evidence shows positive results:
  • Blogging has a significant impact on abstract views and paper downloads
  • Regular blogging is strongly and significantly associated with being more likely to be viewed as a favourite economist
  • A majority of readers have read a new economics paper as a result of a blog posting, and more policy-oriented respondents say that blogs are having an influence on how people feel about the effectiveness of particular policies
This latter finding chimes well with a recent survey conducted by SmartAid, which asked respondents, amongst other questions, why they read [development] blogs.

This was their encouraging response.

Graph accessed from: http://findwhatworks.wordpress.com/2011/09/22/blog-survey-findings-5-why-the-audience-reads-blogs/

A full break-down can be found on Dave Algoso’s blog, Find What Works, and makes for some interesting perusal.

Phew! It’s all a good reason to keep blogging then!

However, in case we bloggers start to take ourselves too seriously, I wanted to share a view on blogging from the McKenzie and Özler paper, which I found amusing, but I hope no one who reads this thinks it applies to Impact and Learning...

“a largely harmless outlet for extrovert cranks and cheap entertainment for procrastinating office workers” (Bell, 2006)

Post-Script: do you write a blog? What do you do to measure influence and impact of your blog?

Tuesday 15 November 2011

Are evaluators too fixated on objectives? The case of conflict-affected states

By Chris Barnett


It is a while now since I co-facilitated a workshop outside Brussels on evaluations in conflict-affected situations. We were exploring alternative approaches to evaluation, building on recent experiences of multi-donor evaluations in South Sudan and the DRC. Previously, I raised the question of whether theories of change have their limitations (see my earlier post on evaluation approaches). In the end, we spent much more time exploring the use and limitations of objectives in evaluation practice. Here is a brief summary of those discussions.

Objectives form the reference point for most mainstream (development) evaluations. Objectives underpin the OECD/DAC evaluation criteria. For instance:

  • Relevance: Are the programme objectives aligned to the situation?
  • Effectiveness: Were the programme objectives met?
In development evaluation,[1] we sometimes come across programmes that are radically re-designed during the mid-term. When this happens, the evaluator is faced with a dilemma: do you measure success against the original objectives, but risk undervaluing a genuinely flexible and responsive programme? Or, do you measure success against the re-designed objectives, but risk excusing an initially poor design?  The former often seems too harsh, the latter too lenient.

In conflict-affected situations, flexibility is a must and re-design occurs more often. Many things cannot be planned in advance, and managers respond to constraints and opportunities as they arise. Yet being responsive to a dynamic context can undermine plans made three or five years previously. There are also other reasons why clearly planned objectives may be undermined in such fragile contexts:
  • Terms such as peace and conflict prevention may not be well defined. Indeed, not all stakeholders may share the same understanding within a single programme (e.g. differing civilian and military perspectives). 
  • Peace may not be an explicitly stated objective. The Utstein palette (PDF) is helpful in this regard, as it unpacks the full range of activities that may contribute to conflict prevention and peace-building. Yet for the evaluator, this all-encompassing view of peace-building activities presents a challenge: interventions may contribute to peace even when this is not stated.
  • Objectives may be politically sensitive, and may be hidden or masked. For the ex post evaluator, a retrospective interpretation of such objectives opens up the real possibility of contested findings. This is especially so where the ‘evaluation space’ is highly contested, as was the case in the CPPB evaluation in Sri Lanka (PDF).
So, what provides an alternative point of reference?

Well, the perspectives of local people: their needs, and their understanding of what drives the conflict. The draft OECD Guidance on Evaluating Conflict Prevention and Peacebuilding Activities (currently under revision) provides an entry point for developing such an approach. It layers Conflict Analysis onto the traditional DAC evaluation approach, although it doesn’t fully integrate the two. Instead, a more radical view would be to use the conflict analysis (especially an analysis by local people of what drives the conflict) as a reference point for evaluating the intervention. For instance:

  • Relevance: To what extent does the programme reflect a local understanding of the conflict?
  • Effectiveness: To what extent does the programme address the drivers of the conflict?
  • Impact: To what extent is the programme leading to reduced conflict, by building peace?
Some will argue that this shifts accountability away from programme managers. After all, managers should be accountable for their design, and therefore measured against the objectives they stated. But this suggested approach is more about making managers accountable to their constituents (their target audience), and it focuses on whether they are addressing local needs and the factors behind the conflict.


[1] By development evaluation, I am simply referring to evaluations conducted in relatively stable developing contexts, in contrast to those undertaken in fragile or conflict-affected states.

Wednesday 2 November 2011

Exploring the black box together: evaluating the impact of knowledge brokers

Cartoon by Sidney Harris (2007)
By Catherine Fisher

I love this cartoon! 

It seems to capture the idea of the "black box" that lies between the activities knowledge brokers and intermediaries undertake and the outcomes and impacts they seek to achieve. That’s not to say that they don’t achieve outcomes in the real world, rather that the pathways by which their work brings about change are difficult to unpack and evaluate.

The Knowledge Broker’s Forum (KBF) has started exploring this "black box" of how to evaluate the impact of knowledge brokers and intermediaries in an e-discussion running from 31 October until 9 November. I am (lightly) facilitating this discussion, along with Yaso Kunaratnam from IDS Knowledge Services.

If you would like to participate, you can sign up on the forum's website; it's open to anyone with an interest in this area.

Challenges in evaluating impact

We know there are a lot of challenges to evaluating the impact of knowledge brokering. Some stem from the processes (psychological, social and political) through which knowledge and information bring about change, the contested nature of the relationship between research and better development results, and the difficulty of identifying contribution to any changes in real-world contexts. This is particularly challenging for actors that seek to convene, facilitate and connect rather than persuade or influence.

As well as these quite high level challenges, there are the very practical issues around lack of time and resources to dedicate to effectively understanding impact. These challenges are explored in a background paper (PDF) I prepared as food for thought for those taking part in the e-discussion.

As this is an e-discussion amongst 400+ knowledge brokers from all over the world, I am not yet sure where discussions will go, but I am hoping it will shed some light on the following areas:

Breadth and depth of impact and outcomes  

How far do people go to identify the ultimate outcomes of knowledge brokering work? I feel we can certainly go beyond immediate impact (e.g. personal learning) to push towards what that resulted in; however, I wonder if it is meaningful to start looking at human development and wellbeing indicators. It will be interesting to see how far others are going.

Understanding behaviour change

If knowledge brokering is about behaviour changes that ensure greater engagement with research evidence, how are people defining those behaviour changes and how are they measuring them? Are we too easily impressed with stories of information use when these could in fact hide some very poor decision-making behaviours?

Opportunities for standardisation of approaches and data collection

If people have come up with ways of doing this, is there any appetite for standardising approaches to enable greater comparison of data between different knowledge brokering initiatives? This would help us build a greater understanding of the contribution of knowledge brokers beyond the scope of any one broker’s evaluation.

I’ll also be interested to explore and challenge some of my assumptions – in particular that building some kind of theory or map of change is an important starting point for defining and then seeking to evaluate impact. This has been discussed previously on this blog and is a hot topic at the moment.

Our discussion will face challenges – not least that the huge variety of types of knowledge brokering, and of contexts in which it is undertaken, may mean there is not enough common interest. But I am sure there is a lot of experience in the group that can be brought to bear on these questions and, in 10 days’ time, we will have a better idea of what is known, who is keen to explore this further and, hopefully, how we could move forward to develop our understanding in this area.

Monday 24 October 2011

Beyond happy sheets: outcome-focused event evaluation

By Penelope Beynon

Since joining the knowledge for development sector in June last year, I have participated in no fewer than 2 international conferences, 3 regional workshops and a host of cross-organisational meetings (and sent apologies for three times as many of each). Some cost money (for international or intercity travel), all have opportunity costs (being here instead of there) and all cost time.

As a participant, I find there is something innately attractive and energising about being together in a room with experts and peers that just cannot be simulated through online alternatives; but as a taxpayer I can’t quite shake that uncomfortable question – was it worth it?

In my role as M&E advisor I am occasionally asked how to evaluate events – while I haven’t yet found a tried and tested method that fits every event, I thought I’d share a few things I have learnt along the way.

With a few notable exceptions (e.g. A Process and Outcomes Evaluation of the International AIDS Conference, Lalonde et al 2007), most organisers fail to evaluate their events beyond a cursory feedback form that gauges audience satisfaction (commonly referred to as a ‘happy sheet’). But, if an organiser did want to push their evaluation to a new level and address the ‘uncomfortable’ question of worth – where would they begin?

In its most simplistic form, I propose that a worthwhile event evaluation needs to gather three types of information:
  • Costs 
  • Outcomes 
  • Reasonable alternatives

The full financial cost of events is rarely included in evaluation

The table below summarises some areas where events incur costs. Unsurprisingly, few organisers publish the full financial costs of their events (grey box) or even add up their own financial and time costs (grey + purple boxes) for purposes of evaluation, let alone start to consider the sectoral costs of their event to participants and contributors.
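To illustrate what "adding up" the full cost might look like, here is a minimal sketch in Python (my own illustration, with invented figures and day rates; the grouping into organisers, participants and contributors follows the post) that totals financial and time costs per group and for the event as a whole.

from dataclasses import dataclass

@dataclass
class GroupCosts:
    financial: float   # direct spend: venue, travel, fees, etc. (GBP)
    days: float        # time spent, in person-days
    day_rate: float    # rough value of a person-day (GBP)

    def total(self) -> float:
        # Financial cost plus the monetised value of time
        return self.financial + self.days * self.day_rate

# Illustrative (invented) figures for the three groups named in the post
event = {
    "organisers":   GroupCosts(financial=12_000, days=40,  day_rate=250),
    "participants": GroupCosts(financial=30_000, days=120, day_rate=200),
    "contributors": GroupCosts(financial=5_000,  days=15,  day_rate=300),
}

for group, costs in event.items():
    print(f"{group:>12}: {costs.total():>10,.0f} GBP")

print(f"{'full cost':>12}: {sum(c.total() for c in event.values()):>10,.0f} GBP")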



Focusing on desired outcomes
Learning events may benefit all of these groups (P. Beynon, IDS)

1.    Spread your net wide when looking for outcomes

A common shortcoming of the outcome-focused event evaluations I have unearthed (of which there are few to begin with) is a narrow concept of where benefits will occur and an almost exclusive focus on participants as the subjects for evaluation. Just as there are at least three groups who can incur costs for an event, these same groups could feasibly accrue benefits (see diagram).



2.    Tailor your evaluation tools to match desired outcomes

Like all interventions, face-to-face events do not happen in isolation; they are usually part of a wider set of strategies intended (implicitly or explicitly) to contribute in some way to a programme's overall theory of change. Unfortunately, more often than not this link is not properly explored, and event objectives read like either a) a less-than-ambitious list of activities, or b) an overly ambitious set of development aspirations well beyond anything the event could possibly deliver. Work closely with organisers to flesh out their theory of change and to situate the conference objectives within the wider programme context - then you will be able to tailor your evaluation tools to match the desired outcomes. While some organisers are coming up with interesting tools and approaches for outcome-focused event evaluation (e.g. network mapping (PDF), 3-test self-assessment), which I explore along with a few of our own attempts in a forthcoming ILT Practice In-Brief paper, most still limit their data sources to attendance records and the standard ‘happy sheet’.

3.    Follow through on your follow up!

The biggest limitation of most event evaluations is a lack of meaningful follow-up. Change takes time, and unless you follow up with participants when they are back in their workplace you will only be able to capture intended behaviour change or the initial step towards an extended network. Be disciplined – schedule event follow-ups for 3, 6, even 12 months after the fact.

Is there a cheaper way to achieve the same outcomes?
Well, this really is the million-dollar question, and without a clear picture of our costs and benefits it just cannot be answered. But when you do have this level of information for one event, you will be able to start comparing that event with another, and maybe even progress to comparing all your face-to-face events with other strategies that use different tactics to achieve similar aims: ongoing rather than one-off events; online rather than face-to-face convening; one-to-one rather than convened events...

To conclude
As the saying goes, “If it’s worth doing at all it is worth doing properly” - so I urge organisers to go beyond ‘happy sheets’ and really scrutinise the worth of their events for their own sake and for the sake of the sector.

Friday 14 October 2011

Early headlines from research on policy makers and ICTs: "persistent and curious enquirers" (with smartphones)

By Simon Batchelor

Just to keep you up to date on the country studies I mentioned in my first blog (in which I spoke about research we were conducting on policy makers and their use of ICTs): a lot of the data is now in. Some countries found it easier to get interviews with senior policy makers than others, so some have still to deliver their full quota.

However, we have now begun the analysis and are finding some interesting headlines. As I write, my colleagues Jon Gregson and Manu Tyagi are presenting some headlines back to a portion of the intermediary sector in India and Nepal, and Chris Barnett presented last week in Ghana. I would like to acknowledge the work of our partners ODC Inc in Nepal and Delink Services in Ghana.

So what are some of those headlines?

We will upload the slideshare soon, but, in brief, here are some of the things that attracted my attention:

Policy actors have access to ICT, a considerable number of them have smartphones and, to my mind more importantly, they know how to use them!

Image from: http://bestsmartphone2011.info/

Of course, almost all of them have access to computers, the internet and cellphones. But 52% of the Ghana sample, 49% of the Nepal sample and 35% of the North India sample have smartphones. In Ghana, 25% had more than one smartphone! And of those who have a smartphone, almost all in Ghana and Nepal have explored sending emails, surfing the internet on the phone, recording video and instant messaging. Only in North India did it seem that a significant proportion of people (about 50%) had a smartphone and yet did not explore these ‘features’.

What does this mean to us in the intermediary sector? It suggests that if you are developing an app to push research into the policy environment, then the baseline of smartphone use is there.

Policy actors are surfing the internet themselves – the idea that policy makers wait for an assistant to brief them seems to be diminishing.  

In all three countries, the majority of policy makers agreed with statements about their own use of ICT and surfing the internet. They described themselves as ‘a "persistent and curious" enquirer’ and noted that they ‘often "discover" other relevant information when searching’ (phrases used by the Pew Internet studies in the USA). They also agreed, to a lesser extent, with ‘I tend to get my briefings face to face officially, in meetings’. In Ghana, where there was a significant proportion of private sector executives, a significant number actually disagreed with the idea that they got their information from ‘official briefings’.

What does this mean to us in the intermediary sector? It suggests that policy actors are looking for information themselves, and, I presume, therefore need to find it easily, in an accessible form, and I guess, quickly.  

Yes, I know that searching for information online is evolving, and that social networks now tend to push information within the network.  This changes the way those of us who are well connected get our information.  We did investigate whether the policy actors are connected to social media networks and to some extent looked at their searching behaviours, but we are not there yet in the analysis to be able to comment on it.  Watch this space.

Policy actors do have an appetite for research – or at least they say they do 

There was consistently strong agreement with the need for facts and figures, and that these need to be up to date. We explored what information they were actually looking for and whether they trust the sources and channels for that information. Again, these details will come out as the analysis proceeds. However, there was an interesting difference between the three countries. In India there is strong trust in ‘local research’ (as opposed to international research), whereas in Ghana and Nepal respondents rate international research much higher than local research.
 
What does this mean to us in the intermediary sector? In our MK4D programme, we are working on the idea that local intermediaries understand the context of research and policy in their location, and therefore have strong grounds from which to communicate research to policy makers. However, we also work with the idea of ‘co-construction’: working alongside and with our colleagues in the South. If ‘local research’ is trusted less by policy actors, then that would seem to endorse the approach of co-construction – where local and international bodies work together to provide quality insights. It also suggests that our programme to support the exposure of research published in the South on the global internet is heading in the right direction.

Anyway, those are some insights from the first week of analysis.  More to come.

Friday 7 October 2011

Getting serious about the evidence in policy making

By Nick Perkins
 
See www.impactevaluation2011.org
Earlier this year, the International Initiative for Impact Evaluation - better known as 3ie - convened a conference in Mexico called Mind the Gap: From Evidence to Policy Impact.

I liked the idea of dedicating 3 days, dozens of presentations and hundreds of blog posts to that little ‘leap of faith’ which characterises so many theories of change about what research can do for development.


The problem we are faced with is that the normative idea about how policy should be made – based on objective evidence – is seldom the reality; instead, policy is made through political expediency. Political expediency is understood here to mean a range of contextual influences on the decision-making process. When described this way, there is something inevitable about it.

Current thinking is that this expediency can be addressed through the mediation of research knowledge. This has given rise to the research mediation sector – institutions, and individuals within institutions, who seek to frame research in a way that is accessible and relevant to people working in key policy spheres.

What this reveals is a kind of contradiction at the heart of the development knowledge sector. While we call for evidence-based policy making, there is also increasing investment in the complex process that shapes decision making. A way through this may have been revealed by a closer look at what research mediation actually entails.

A couple of years ago, IDS held a series of ‘influencing seminars’ which revealed how different disciplinary communities nuanced their approaches to policy influence depending on how they understood change to happen. None of them declared disdain for the value of quality evidence. Instead, they all expressed differing views of what constitutes ‘quality’ evidence and how to gain traction with those who might need it.

What emerged was a framework of four different ways of building an effective relationship between research and quality policy making.

The first is about generating as many policy options as possible. This emphasises the use of repositories to allow users to sift through the options for themselves.

The second is evidence-based and prioritises the familiar idea that the quality of the research evidence is what will best inform the quality of the decision. Systematic reviews are seen as crucial in the research mediation process here.

Third is the value-led idea of policy-making. There are many examples of this leading to bad science, but it is by far the most common type of public policy making. Networks and epistemic communities are critical to the mediation process in this case.

Finally, we have the relational model of influence, which maintains that no amount of research will influence a policymaker if there is not a relationship which reflects equity and a balance of power – where the researcher or mediator is themselves subject to some influence.

Clearly, though, none of these frames is mutually exclusive. Perhaps the point is that we can support the complex reality of policy influence, which draws on all of these, without losing sight of where we ultimately need to get to. In fact, using a little political expediency ourselves can go a long way towards crossing what is too often seen as a small gap.

Thursday 29 September 2011

Rethinking development in an age of scarcity and uncertainty


By Emilie Wilson

Last week, I was in York attending the joint conference of the European Association of Development Research and Training Institutes (EADI) and the Development Studies Association (DSA), whose thematic focus is the title of this blog post.

Downloadable from www.ids.ac.uk
There is something in the word "rethinking" which might suggest an attitude of "it's broken, let's go back to the drawing board and start again". IDS used the term "reimagining", which has a more inspirational and creative resonance to it, when developing research around exploring and responding to crises, the results of which have been captured in the latest IDS Bulletin.

From my experience at the conference, the broad assembly of academics, activists and policymakers reflected a spectrum of approaches to "rethinking": from the radicals who want to upend the existing financial, governance and informational infrastructure on which development sits, to the innovators who modify and improve on existing systems, to the philosophers who want to ask uncomfortable questions about the ethics of giving to poor countries over poor people or the politics of measurement, and to the pragmatists who want solutions and answers to the problems they perceive.

The approach I try to bring to “rethinking” is one of learning: reflecting on past and current theories and practices, identifying areas of improvement and opportunities for innovation, and sharing this experience more widely. Hence the blog I wrote for the conference entitled “Let’s not throw out the baby with the bath water”.

As I say in that blog post, I found the definition of knowledge by Dr Sebastiao Mendonco Ferreira fascinating. He contrasted the practice of managing knowledge as a resource with managing natural resources. This was a useful way to focus our minds on ‘knowledge’, a rather slippery word. Like an intellectual game of charades or a riddle: ‘it’s intangible, non-rivalrous, non-erodible, human-made, both tacit and explicit, contained in receptacles such as human minds or embedded in machines, it’s unlimited’... would you have arrived at ‘knowledge’ after this description?

He went on to highlight the role of the internet in knowledge ecosystems: will this increasingly custom-made and intuitive ‘web environment’ help us develop epistemic cultures and communities? Sebastiao suggests we need to address our limited ability to ‘absorb’ knowledge – knowledge which is increasingly complex and sophisticated, and thus difficult to verify. One approach IDS has taken here is the development of a social networking platform for people working in development, Eldis Communities.

Read the rest of the original post.

You might also be interested in my colleague Yaso Kunaratnam’s post on: Rethinking the role of intermediaries in bridging policy, research and practice

Wednesday 14 September 2011

Exploring evaluation approaches: Are there any limits to theories of change?

By Chris Barnett

I’m in Brussels co-facilitating a course on evaluation in conflict-affected countries, with Channel Research. We are exploring new and alternative approaches to evaluation, building on recent experiences of multi-donor evaluations in South Sudan and the Democratic Republic of Congo (DRC). The South Sudan and DRC evaluations are part of a suite of evaluations that sought to test the draft OECD Guidance on Evaluating Conflict Prevention and Peacebuilding Activities.

While the context is very specific, I’m hoping that the discussions will raise some interesting issues around the way we approach evaluation and particularly how we use theories of change. The term “theory of change” is a much overused phrase at the moment, and one that seems to have different meanings to different people. In this case it is being defined as “the set of beliefs [and assumptions] about how and why an initiative will work to change the conflict” (OECD Guidance, page 35). Duncan Green, in his blog, also helpfully points out the difference between a theory of change (a classic, linear intervention logic, or results chain, used as a basis for logical frameworks) and theories of change (such as a range of theories about the political economy of how and why change occurs).

Photo courtesy of Jon Bennett
Working in conflict-affected states poses many challenges for evaluation, not least the changing context, instability and insecurity. In most cases it is not feasible to set up a controlled experiment and maintain it over a reasonable period of time. Not only are there cost and ethical issues in distributing benefits randomly, but also the sheer technical difficulty of maintaining a robust counterfactual in a context where there is so much change. It is not impossible, of course (e.g. IRC’s evaluation of Community Driven Reconstruction in Liberia); just often not appropriate or feasible.

Hence, the OECD Guidance focuses on a theory-based approach to evaluation (NB: Henry Lucas and Richard Longhurst, IDS Bulletin 41:6, provide a useful overview of different evaluation approaches). At the heart of the OECD Guidance is the need to identify theories of change, against which to evaluate performance.

But in South Sudan and DRC we found a number of limitations to this approach:

1. Firstly, we found it challenging to apply a theory of change approach to the policy or strategic level. Most donors did not articulate a transparent, evidence-based rationale for intervening – sometimes intentionally so, given the dynamic and sensitive context. This meant that reconstructing theories of change for evaluation purposes became highly interpretive and open to being challenged – particularly when drawing out differences between stated and de facto policies.

2. Secondly, we found that different theories of change existed at different levels. As one moved down from the headquarters level to the capital city, and on to local government and field levels, views differed about the drivers of conflict and the theories of change necessary to address them. This presented the evaluator with a dilemma – and sometimes wrongfully placed them as arbiter between different perspectives and realities.

3. Thirdly, while lots of activities contribute to conflict prevention and peacebuilding, many were not explicit about such objectives. Again, the reconstruction of the de facto theories of change against which to assess performance becomes highly interpretive and more open to being challenged.

So what do we hope to do this week? We will be exploring alternatives to such Objective- or Goal-Based evaluations, which seek to assess performance against the stated (or reconstructed) theories behind an intervention. Instead, we’ll explore some Goal-Free alternatives – where data is gathered to compare outcomes with the actual needs of the target audience, using reality as the reference point rather than a programme theory. After all, in many walks of life we do not “evaluate” performance against stated objectives: when we assess whether a car is good or not, we do not consider whether the design team fulfilled its objectives! Rather, we are interested in whether it fulfils our needs.

Thursday 8 September 2011

Consuming knowledge or constructing it: a response

By Catherine Fisher

In my colleague Sunder’s blog post earlier this week, entitled Consuming knowledge or constructing it: evidence from the field, he described the resistance that some of the civil servants he had met in India felt towards engaging with research knowledge.

Reading the post, I felt that the reasons given by the civil servants he met for not wanting to engage with research knowledge are extremely valid and justified, based on their understanding of research and of “research uptake” or “evidence based policy” agendas. The reasons given for not engaging with research reveal a set of assumptions about the nature of research knowledge, and knowledge more broadly, that stem from the modernist ideas about knowledge explored in the paper my colleague Emilie discussed in the first of this series of blogs.

In the paper, entitled Changing conceptions of intermediaries: challenging the modernist view of knowledge, communication and social change, one section explores four key modernist assumptions about knowledge which it goes on to critique: that there exists an objective reality, that the scientific method of enquiry is neutral, that knowledge can be stored, managed and transferred, and that communication is a linear process. These chimed with me when I thought about the assumptions that seemed to inform the responses of the civil servants Sunder interviewed. While these ideas have been widely critiqued, many of the assumptions remain, both for people involved in the world of research uptake and, on Sunder’s evidence, in the minds of the people whom they are trying to influence.

I think the civil servants seemed to be rejecting a set of assumptions about research and its role in decision making, which I paraphrase as:

1. Research knowledge is the only knowledge that is important

This idea has been championed since the Enlightenment and contains ideas about how legitimate knowledge is produced: it privileges knowledge produced through certain scientific processes and marginalises other sources of knowledge, e.g. that generated through experience, or from less powerful actors. While the underlying tenets have merit and underpin movements such as “evidence based policy”, few proponents of greater use of evidence in decision making would argue that decisions should be based solely on research-based evidence with no regard for other kinds of knowledge or for social, economic or political contexts. However, maybe this is what is conveyed!

2. Research provides a pre-determined right answer

There are very few policy contexts in which there is a clear-cut right answer, particularly in relation to social, economic and political issues, and so social science research is very rarely going to be able to provide one. As intermediaries, we are concerned with engagement with multiple sources from which an answer can be constructed. And again, I doubt many people involved in research uptake would claim that one piece of research will provide a policy answer.

However, as Carol Weiss observes in the introduction to Fred Carden’s recent book “Knowledge to Policy: Making the Most of Development Research”, for many policy makers in developing countries their experience of research is that which accompanies policy prescriptions handed out by funding institutions; thus the idea that research promotes a correct, one-size-fits-all approach could well have been reinforced by this experience.


3. Knowledge can be communicated in a linear way and consumed passively

Personally, I believe that knowledge can only be constructed in someone’s head: knowledge requires a “knower” and is the sense someone makes of information when they analyse and understand it according to their previous experience, belief systems and even what mood they are in today. Thus I would argue with the title of the post, but it’s been pointed out that my position may be a little extreme and not necessarily shared by many in the industry in which I work! However, I think that everyone who works in the knowledge industry needs to dispel the myth that knowledge can be transferred or passively consumed in ways that bear no reference to the consumer – people have knowledge and this shapes how they interpret new knowledge (research knowledge or otherwise) and what they do as a result.

So I would conclude that resistance to the use of evidence in decision making is entirely reasonable if you have a modernist understanding of research knowledge – or indeed if you have been subjected to such ideas from others, or believe that the person you are talking to shares those ideas!

For me, two areas emerge:

To what extent are the people who promote research uptake, evidence-based policy and the like propagating ideas about research and its role in change that are not useful? Do those of us involved in promoting greater use of research in decision making need to be clearer with ourselves about how we see the role of research vis-à-vis other knowledge, and in the context of political realities, and to consider what messages we are sending?

I am increasingly thinking it would be valuable if people generally (including me) were more aware of their own decision-making processes - a kind of meta-cognition that allows them to be more aware of how they process information and ideas, accepting, rejecting, interpreting and applying them in line with their previous experiences, knowledge and beliefs. In keeping with the work of IDS colleagues in the Participation, Power and Social Change Team, I suspect that this can be encouraged through greater reflective practice. Another angle on this issue is the idea of “evidence literacy”. I am working with INASP to explore this through its PERii: Evidence-Informed Policy Making programme and will be posting updates to this blog. Watch this space.