
Friday, 8 November 2013

Looking for a tool to analyse and ‘compare’ policies? Check out our lessons from conducting a QDA

By Elise Wach

As part of our team efforts to maintain a reflective practice and share learning with others, one of our latest ‘Practice Papers in Brief’ provides some insights from conducting a Qualitative Document Analysis (QDA) on policy documents for the rural water sector.

The QDA was undertaken as part of the Triple-S (Sustainable Services at Scale) initiative, for which the Impact and Learning Team (ILT) at IDS facilitates learning. 

Qualitative Document Analysis (QDA) is a research method for systematically analysing the contents of written documents.  The approach is used in political science research to facilitate impartial and consistent analysis of written policies. 

Given that Triple-S aims to change policies and practices in the rural water sector, the initiative decided to undertake a QDA on policy documents at the international level, both to understand trends and progress in the sector and to engage development partners in identifying possible changes to policies and practices that would move the sector closer to achieving ‘sustainable services at scale’. Later, we decided to expand this to ‘practice’ documents as well.

Consistent with Triple-S’s ‘theory of change’, generating discussion on these issues and catalysing change was just as much of a priority as generating reliable evidence about policy trends.

In the paper, we discuss the strengths and weaknesses of the methodology, and provide some pointers that may be helpful if you are considering using this tool.

Overall, we found that the QDA exercise provided useful information about trends and gaps in the rural water sector, helped to refine the Triple-S engagement strategy, and served as a useful platform for engagement with partner organisations.

Some of our lessons related to issues of defining our 'themes' and scoring, inclusion criteria for documents, unclear or zero scoring, and the relationships between the research team and the organisations included in the review.
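
To make the mechanics concrete, here is a minimal, hypothetical sketch in Python of the kind of bookkeeping a QDA involves: each document gets a score per theme, and unclear or zero scores are flagged for discussion. The themes, documents and 0–2 scoring scale below are invented for illustration, not the ones used in the Triple-S exercise.

```python
# Hypothetical QDA bookkeeping: reviewers score each document against a set of
# themes (0 = absent, 1 = mentioned, 2 = substantively addressed; None = unclear).
# All themes and documents below are invented for illustration.

THEMES = ["life-cycle costs", "post-construction support", "service levels"]

scores = {
    "Agency A water policy": {"life-cycle costs": 2,
                              "post-construction support": 1,
                              "service levels": None},
    "Agency B strategy":     {"life-cycle costs": 0,
                              "post-construction support": 0,
                              "service levels": 1},
}

def summarise(scores):
    """Return average score per theme and a list of unclear/zero scores to revisit."""
    totals = {theme: [] for theme in THEMES}
    to_revisit = []
    for doc, theme_scores in scores.items():
        for theme, score in theme_scores.items():
            if score is None or score == 0:
                to_revisit.append((doc, theme, score))
            if score is not None:
                totals[theme].append(score)
    averages = {t: (sum(v) / len(v) if v else None) for t, v in totals.items()}
    return averages, to_revisit

averages, to_revisit = summarise(scores)
print("Average score per theme:", averages)
print("Unclear or zero scores to discuss:", to_revisit)
```

Of course, the hard (and interesting) work lies in defining the themes, the scoring rules and the inclusion criteria, and in discussing the flagged items with the organisations concerned – which is where most of our lessons came from.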

Next week, Triple-S will be kicking off another QDA for the Ghana Workstream, to analyse government rural water policies, and will incorporate many of the lessons that we’ve learned on QDA so far.  We’ll also be conducting another round of QDA at the international level next year to analyse the ways in which the rural water sector policies have shifted over the course of the Triple-S project, and to understand what to focus on moving forward.

Elise Wach is Monitoring, Evaluation and Learning Adviser with the Impact and Learning Team at the Institute of Development Studies


Wednesday, 28 November 2012

Comparing research and oranges II: do communities want oranges or flowers?

By Simon Batchelor

In her blog, Comparing research and oranges: what can we learn from value chain analysis?, my colleague Elise Wach asks whether “producing research first and then deciding how to communicate it afterwards” is the same as “growing an orange and then deciding how and where it will be sold”. She went on to speculate whether value chain analysis can add something to our own analyses of how to strengthen the knowledge value chain.

Her piece reminded me of a video we used at a team retreat, entitled Whose Reality Counts. Produced by Praxis, based in India, it also caused us to wonder about the comparison with research production and the processes of setting a research agenda.


In case you don’t have the time to watch the 7 minute video (but please do – it's so well done!) here's a quick synopsis:

A senior office-based person sits and has a bright idea: giving flowers to a poor community. The idea is passed down the decision-making chain to farmers in the poor community, who, initially pleased, begin to plant flowers.

However, still sitting in his office, the official continues to pass flowers down the line, and we see the farmer becoming frustrated with too many flowers and not enough diversity of food, which is what the community really wants. The community cannot make their voices heard until the official goes to the community himself, expecting to see grateful villagers and a thousand flowers. That’s not what he finds on his arrival, and it is only after listening to the villagers and their needs that he gains an understanding of what they really need – their reality as described by them.

For me, the parallels are obvious. Research conducted in isolation from the realities in the field may produce insights, and these initial insights may even be appreciated by the community. However, communities have priorities and there needs to be a feedback loop to find out what those priorities are and whether our research needs to be redirected.

As Elise says in her blog: “When is audience research necessary, and when does the ‘if we build it, they will come’ assumption apply? Where is the line between research communication and advocacy? How can we create demand and to what extent should we do so?”

So, whose responsibility is it to set the research agenda?

In a recent review of plans from leading research centres, we had to ask ‘where are the boundaries for a researcher?’ If the research centres are intending to change the world in some way (their stated intention), then there needs to be engagement with the outside world during the research.

We ended up noting two types of engagement:
  • ‘A need to engage with a representative sample of the end users to ensure that new hybrids or practices fit the ‘real world’ farming systems’. 
  • ‘There are the actors at the boundary of the research who might take the research forward. At some point, research that has led to successful product development will need to go to scale’.
Isolated research may change the world slightly, but may also rapidly become too many flowers when the community needs food. However you frame it – as value chains with customer feedback and monitoring of market demand, or as participatory development with consultation and ‘mainstreaming the voices of the poor’ – research that changes the world is going to require tight feedback loops and a view that is much wider than an agenda set by sitting in an office.

Wednesday, 26 September 2012

Laying the foundations of a knowledge sharing network

By Catherine Fisher

AfricaAdapt is a knowledge sharing network on climate change adaptation in Africa. My colleague Blane Harvey and I recently published a paper that shares insights from its first phase of operation. Entitled “Behind the Scenes at a Climate Change Knowledge Sharing Network: IDS Insights from Phase One of AfricaAdapt”, it explores the dynamics of design and implementation of a knowledge sharing network in a distributed partnership, from the perspective of the former lead partner.

The paper identifies insights across a range of areas, from governance and managing financial resources to capacity building and learning, which we hope will be useful to others thinking of setting up a similar knowledge sharing network. Here I focus on one theme that emerged around building mutual understandings and the importance of inception and set-up meetings. I also share a few practical ideas that didn't make it into the paper.

Importance of exploring and constructing meaning

One theme that emerged in that paper is the importance of establishing understandings that will underpin effective collaboration at the beginning of partnerships. This includes exploring more theoretical understandings about concepts (such as knowledge) as well as practical understandings about planning and communication.  We argue that time spent on exploring understandings is important for a range of reasons, not least to help prevent the lead organisation dominating the construction of meaning within the partnership.

Using inception phases to explore understanding

Inception and set-up phases and meetings provide the opportunity to explore understandings, harnessing differences in opinion and perspective to best effect. However, inception and set-up meetings are generally very action-orientated and focus on identifying what is to be done by whom. The insights in the working paper point to the importance of also exploring why and how activities are undertaken as part of creating a knowledge sharing network.
  
Practical suggestions for inception meetings

Below are a few suggestions of questions to explore before, or at the beginning of, the process of establishing a knowledge sharing network, along with ideas of processes that could be used to explore them. I have used some of these approaches but not all, so this should not be taken as “best” or even “good” practice. Instead I hope it will be food for thought about ways of addressing some of the issues raised in the paper.

Hope you enjoy the full paper and, if you are planning to set up a knowledge sharing network in partnership, that the following ideas are useful input to any inception or set up meetings.

Questions to explore, with suggestions on process:

How do we understand key concepts?
  • Explore understandings of key concepts by individually completing the phrase “Knowledge is…” (repeating for other concepts, e.g. “knowledge sharing is...”, “communication is...”, “participation is...”). First write it down, then move around to compare with others. Reflect together as a group on similarities and differences.

What’s the purpose of this network?
  • Revisit the purpose and logic of the network, exploring questions such as: What is the problem this network is seeking to address? Who are the stakeholders? What will be different for them if this is a success? Outcome-based planning tools such as Outcome Mapping or Theory of Change approaches could help.

What are our motivations and expectations?
  • Facilitate a discussion that asks participants to share: what I hope to gain from involvement; what my organisation hopes to gain; what I/my organisation expects to contribute; and what I/my organisation expects others to contribute.

What’s the organisational context in which we will deliver this?
  • Use creative ways such as metaphor or pictures to explore organisational culture and values (e.g. if my organisation was a machine/animal/season/colour it would be...).
  • Draw organograms (from memory) of each organisation, including people outside the organisation who might affect the network. Compare with each other.

How will we actually deliver this?
  • Explore what each organisation thinks they will be contributing on an ongoing basis and how they will do it. For example, describe “A day in the life of a KSO”.

How will we make decisions?
  • Explore scenarios of different decisions, from big decisions such as adopting a new partner to small ones such as adding an item to a website/newsletter. Who would be involved?

How will we work together?
  • Identify what experience partners already have of working in partnership. One approach could be sharing stories about highs or lows of partnership working. What kinds of partnership do they have, how is this similar or different, what works and what doesn’t?
  • Consider generating principles and strategies for working together, including communication methods and etiquette.

What do we expect of the lead partner?
  • Explore what power the lead organisation has vis-à-vis the other organisations. How can this be balanced? What responsibilities does it have?

What do we do if things go wrong?
  • Build scenarios of what could go wrong. Explore different ideas of “wrong”, then discuss how each could be addressed. Relate back to the principles for working together.

How will we learn in this process?
  • Explore what approaches to learning and professional development each organisation has.
  • Look at simple models for learning such as the “experiential learning cycle” and see how they can apply to the partnership. Think about how to learn before and during the process, using methods such as After Action Reviews, peer assists etc.

How will we monitor and evaluate our work?
  • Discuss understandings of M&E (which are often very different), partners’ experience of it, and what they expect to contribute.


Catherine Fisher was Capacity Support Coordinator for the Impact and Learning Team at IDS. She left IDS at the end of September 2012 to join Amnesty International as their new International Capacity Building Co-ordinator

Wednesday, 30 May 2012

Philosopher-craftsmen: interesting times for research communications professionals

Plato - snapshot from Raphael's The School of Athens. Image from http://drishtantoism.wordpress.com/philosophers/plato/
Plato, the Greek philosopher
By Emilie Wilson

Two exciting new publications have landed on my desk today:
(1) Knowledge, policy and power in international development: a practical guide, and
(2) the latest edition of the IDS Bulletin, Action research for development and social change.

Knowledge, policy and power in international development: a practical guide, not a definitive model


The first, a book by researchers at the Overseas Development Institute (ODI), aims to be a "practical guide to understanding how knowledge, policy and power interact to promote or prevent change". However, the authors are quick to put in a disclaimer:

"...we acknowledge that, although some models provide useful analyses of some aspects of the interface between knowledge and policy, it is impossible to construct a single one size-fits-all template for understanding such a complex set of relationships".

That is not to say the authors aren’t aiming high: "this book seeks to: 
  • provide a state-of-the-art overview of current thinking about knowledge, policy and power in international development 
  • present empirical case studies that provide concrete examples of how these issues play out in reality 
  • offer practical guidance on the implications of this knowledge base” 
I’m looking forward to getting stuck in, and am particularly intrigued by their “Questions this section will help you to answer” approach to structuring some of the content. I’m also looking out for references to work by IDS Knowledge Services around knowledge intermediation (well, of course I am!).

Action research for development and social change


The second, edited by Danny Burns, who heads up the Participation, Power and Social Change team at IDS, is the latest edition of the IDS Bulletin.

IDS Bulletins come in a variety of shapes and sizes – some very theoretical, others with more practical examples. This one appears to provide a nice balance of both, and has a stellar cast of leading lights at IDS on action research and participatory approaches.

Again, there is a disclaimer "we have not sought to draw firm conclusions or a single 'theory of practice'" but then a helpful identification of recurrent themes around which to hang your reflections as you read along: power and complex power relations, learning, and action.

Both these works, I think, demonstrate what an exciting time it is to be working in the realm of research uptake, weaving analysis into practice, and giving us communications professionals space to reflect on the impact of our work.

I’m not a development practitioner, I’m a communications professional...


In my early days at IDS, when I had more enthusiasm than experience, I remember a conversation with a colleague in which I referred to us as “development practitioners” and she responded “I am not a development practitioner, I am a librarian”. She’s quite right, in many ways – a librarian with a whole heap of experience in international development.

I guess that description could apply to me too: a communications professional experienced in international development. Just as others are engineers, agronomists, doctors, project managers...experienced in international development.

That is, we should not forget, while we muse on power, complexity and social change, that we are also master craftsmen. Our understanding of communication, our craft, is based on an understanding of human behaviour. While it needs to be nuanced by people’s culture, worldview, literacy and all manner of contextual factors, we remain craftsmen who understand what to look for and how to build it in different contexts. It provides us with a lens through which to see the world.

Hopefully, with my bedside reading all set up now for the next month, the theory (and practical guidance) will percolate into my communications practice and I can aspire (grossly paraphrasing Plato) to being a ‘philosopher-communicator’...(albeit with less beard!)

Thursday, 17 May 2012

Reflections on the K* summit: beyond K-Star-wars?

By Catherine Fisher

It was only a matter of time before someone made the KStarWars joke at the K* Conference that took place at the end of April in Canada. I’m only sorry it wasn’t me!

However, the K* Conference was notable not for its battles, but for the sense of commonality that emerged among the participants and for the momentum for future action it generated. 


The K* summit aimed to connect practitioners working across the knowledge-policy-practice interfaces to advance K* theory and practice. It set out to span the different sectors, contexts and terms under which this kind of work is undertaken, for example Knowledge Mobilisation (KMb), Knowledge Sharing (KS), and Knowledge Transfer and Translation (KTT). Hence K*: an umbrella term that attempts to bypass terminology discussions.

This blog post provides links to some of the great reporting from the event, acknowledges some of the critiques that the event raised and points to the next steps for K*.    
  • The opening presentation highlighted how K* is about supporting processes of exchange and engagement across knowledge-policy-practice interfaces, not the achievement of particular outcomes. It was great to hear this point made by John Lavis, who has something of a guru status in K* in health. Other important points were about learning about context and what that means, not just saying it's important!
  • Another great metaphor came courtesy of Charles Dhewa. The importance of multiple knowledges, knowledge hierarchies and the role of K* actors in helping to facilitate interactions between those knowledges was a recurring theme; see, for example, the video by Laurens Klerkx talking about multiple knowledges and innovation brokers.
  • As David Phipps explains in this video, participants from Canada, Ghana and Argentina were able to find considerable commonalities in their work with communities. This transnational comparison may be familiar to those of us who work in international development, but it was a first for many of the Canadian participants, who are doing really interesting work, for example in government ministries or communities. I think this points to a strength of the K* movement in connecting people who might not otherwise talk.
  • The conference illustrated the range and scope of K* work. For example, Jacquie Brown of the National Implementation Research Network, who works helping communities to implement science, has learnt how this piece of work fits within the broader scope of K*. For me, seeing how different kinds of K* roles are played and how they intersect is important.

In this video, I share some of my reflections at the time: brokering in the Canadian context, including an example of brokering at the point of research commissioning; power dynamics in brokering; and the way that the informing role of knowledge brokering is getting a “bum rap” compared to more relational knowledge brokering work. I also get distracted by bangs, crashes and the emergence of breakfast!

Critiques and the importance of engaging with them

The conference has generated some robust critiques. For example, Enrique Mendizabal sparked a discussion on his blog, On Think Tanks, with a range of critiques, including whether knowledge brokers are required, how knowledge is shared, and a critique of elitist professionalisation of this field. Scroll to the bottom of his blog post to read the responses, including mine. Meanwhile, Jaap Pels argued that the nature of the debate at K* was pretty basic knowledge-sharing stuff.

I think both of these critiques raise interesting points, but I think they constitute arguments for K*, not against it. K* recognises that knowledge work is changing and proliferating, and that there is considerable experience and understanding that is not shared across the different spaces in which the role is played. It aims to bring together bodies of expertise (for example, that which Jaap Pels points to) to raise the game of all practitioners. It will hopefully provide spaces for debates and engagement with the kinds of critiques that Enrique raises.

So what next for K*?


The conference generated a range of areas for further collaborative action, and plans for where the K* initiative goes from here.

Areas for further collaborative action included:
  • Understanding impact: a group agreed to share the data collection tools they are already using. I’ll be participating in this group, building on the work of the Knowledge Brokers Forum
  • K* in developing countries: a predominantly African group explored the particular dimensions of K* work in their contexts generating a number of action points
A group of participants gathered on Saturday to work out what next for K* as a whole. Consolidation of the K* Green Paper is considered an important next step – co-organiser  Louise Shaxson will be leading this work. There are ideas of developing a more formalised network, which will be led by UNU-INWEH in the first instance.   

UNU, who have led this process so far, remain committed and aim to secure the support of UNU’s governance. The World Bank has already provided financial support. Support from such international bodies is important as it will embed the international nature of this initiative, though it is not without its risks!


So, to borrow again from Star Wars, the force is, for now, with K*. The scale and ambition of the initiative, together with some indications of funding and high-profile support, suggest it has a future. However, it faces both practical and fundamental challenges.

Practical challenges include maintaining ownership and momentum among the largely volunteer force taking it forward for now, identifying its niche, and building connections across such a fragmented field of practice.

More fundamental challenges lie in ensuring that it really can generate value that will improve knowledge-policy-practice interfaces, rather than providing a talking shop for elitist actors.   




Catherine Fisher is a member of the K* Conference International Advisory Committee.

Thursday, 3 May 2012

More on Change: systems, principles, and learning

By Elise Wach

I am back to talk about change, following on from my previous postings (Change is hard and Change is hard but not impossible) on how you change a sector. Here are some reflections from the latest IRC Triple-S learning retreat.

IRC is attempting to change the way water is provided in rural communities by:
  1. changing the way things work at the level of rural communities so that water is available to everyone indefinitely, and
  2. changing the water sector to enable this to happen. 
I think it is easy to get lost in the frameworks and theories that attempt to explain how to achieve these changes. For example, there are a number of different frameworks proposed for influencing policy and measuring that influence (for example Crewe and Young 2002, Court and Young 2003, Steven 2007, Gladwell 2000, etc.). These provide useful insights as to what might make a difference, but at the end of the day we need to remember that these are complex and unique systems that we are trying to change: so there is no ‘best practice’! 

Danny Burns’ seminar at IDS yesterday on "How Change Happens" helped remind me that while it is not necessarily labelled as such, Triple-S is essentially using a Systemic Action Research (PDF) approach: their larger (systems) view of the water sector and iterative learning processes enable them to recognise and respond to opportunities for change.

In attempting to influence policy, for example, Triple-S is not just looking at written policy documents (although this is one piece of Triple-S work). Rather, they recognise that policy change results from, and is indicated by, changes in discourse, perceptions, agendas, networks, political contexts, and institutions. And that a multitude of stakeholders are involved in those changes, including journalists, NGO workers, researchers, finance ministers, and even people who post on Twitter. They recognise that certain events (such as a change in government) can greatly accelerate or completely block policy changes.

And that the right evidence and information at the right time delivered to the right people could make a difference.  So the Triple-S approach is built on the assumption that changing policy doesn’t entail following a formula but instead recognising and responding to opportunities and trigger points.

At the rural community level, Triple-S is trying to ensure that the rural water sector takes into account a variety of factors in order to ensure that water services are provided to everyone indefinitely.  So this means looking at life-cycle costs, mechanisms for transparency and accountability, possible alternative service providers, accounting for the multiple uses of water, etc. etc.  

But does viewing these issues with a systemic lens mean that we become paralysed by the complexity? Danny Burns pointed out yesterday that the key is to focus on action rather than on consensus. To focus on the actions that different actors can take that can change the system. Or, as Bob Williams explains in his Ottawa Charter approach (doc), it will be a ‘strategically selected jigsaw of people and organisations doing what they are most effective at’ that will create lasting change, rather than Triple-S trying to change the sector on its own.

Triple-S isn’t trying to get consensus around a specific approach to achieving sustainable rural water supply, but is instead trying to get everyone on board with a basic set of principles for sustainable services, providing a range of resources and tools, and building capacities (look out for new trainings in the near future) to put those principles into action. They are leveraging existing institutions and structures, and working closely with individuals and organisations to facilitate ownership.

But getting people to wrap their heads around the concept of changing their principles is a big obstacle.  People want tools and approaches that they can go put into action, and while Triple-S is providing a range of these, success starts with viewing rural water supply completely differently: it isn’t ‘the Service Delivery Approach’ but ‘a Service Delivery Approach’. 

Another obstacle Triple-S is facing relates to the way in which evidence is perceived. There are people who say, ‘this is all fine and good in theory, but is it really possible? Can we really achieve both sustainability and scale? Where is the evidence?’ Evidence is a strong word. Today, it usually refers to a call for a ‘rigorous’ approach like a randomised control trial. But if you want to find out if services are provided forever, then how long do you have to wait for the RCT results? And here is where I cannot resist referring to the brilliant example of the limitations of RCTs – would you doubt that a parachute would make jumping out of a plane safer just because an RCT has not proven it? I think this highlights the need for the development community to reflect on what we consider to be evidence.

But I don’t think that these obstacles are insurmountable, especially given that Triple-S’ approach enables it to recognise and respond to opportunities and challenges while remaining focused. One of the Triple-S pillars is for the rest of the rural water sector to have ‘a strong learning and adaptive capacity’. I see this as a prerequisite for success in the other two pillars, and in the rural water sector in general. But achieving this is....well, complex.

Friday, 30 March 2012

What should the post-2015 MDG (on water and sanitation) look like, and how would we measure it?

By Elise Wach
IDS Bulletin 43.2

On World Water Day, I had the opportunity to attend the IDS STEPS Centre launch of the IDS Bulletin on Politics and Pathways in Water and Sanitation. The discussion focused strongly on the MDGs: it was recently announced that the target for water had already been met, but there are a lot of questions about what that really means and how it was determined that we have ‘halved the proportion of the population without sustainable access to safe drinking water’.

The panellists all lambasted the fact that the goal says nothing about equity. And while the word ‘sustainability’ is included in the target, there are serious doubts about how that is actually measured. Katharina Welle’s research, for example, revealed that neither the method used by the Ministry of Water and Energy (based on infrastructure completed) nor the method used by the JMP (based on household use of water) accurately captures the access issues that people in rural areas are grappling with.

This resonates quite strongly with the work I’m doing with the International Water and Sanitation Centre (IRC) to synthesise the learning streams for the Triple-S Initiative, which is attempting to completely transform the rural water sector. And I have been asking myself: if we wanted another round of MDGs, and if we kept the same sector-based approach (both are big ‘ifs’, and there are more), what would we want the water sector goal to look like, and how would we measure progress towards it?

I essentially worked backwards through a simplified theory of change, starting with the end goal. Based on the principles of Triple-S, I went ahead and defined the end goal to be:

Everyone has sustainable access to safe, adequate, and reliable water.

Essentially, there are five core components here: everyone, sustainable, safe, adequate, and reliable. While they are all intrinsically linked to one another, let’s attempt to look at just one aspect of this: sustainability.

How do you measure sustainability? You could go back and see if water services are still there in ten years’ time. That’s useful, and I think it should be done (and so does Water for People), but how do we know now whether water services will continue sustainably in ten years’ time?

Perhaps we’d want to think about what is needed to ensure sustainability, and try to measure that. According to Triple-S, one prerequisite for ensuring that water services last is to ensure that there is the capacity, financing, and planning for major replacements in the future (i.e. not just maintenance). And there are a variety of other direct and indirect requirements for sustainability: ownership, inclusion, accountability, transparency, government capacity, and coordination to name a few.

So a big challenge would be to agree on the requirements for sustainability.  Assuming we can overcome this (daunting) hurdle, we’d then need to assess whether these are in place.  But measuring these won’t be as straightforward as measuring the number of people who live within a certain radius of a borehole.  And there are other issues as well, such as the issue of Multiple-Use Systems, as Stef Smits discusses.

It will be, in a word, complex.   

But if the issues we’re trying to address are complex (and they are - read more about "complexity" in this ODI working paper (PDF)), then it isn’t surprising that measuring progress and achievements is complex as well. 

While the simplicity of the MDGs may have helped mobilise support for development, that simplicity comes at a cost.  As Lindsey Nelson discussed in her STEPS presentation on multi-modal discourse last week, there are consequences to basing your strategy on a bumper sticker slogan.  Something to think about as we discuss the post-2015 development agenda.

Thursday, 8 March 2012

Convening research excellence and beating the budget squeeze: 15 top-tips on managing expert e-discussions

Guest post: Adrian Bannister, Eldis Communities Co-ordinator, draws on experience from IDS  Knowledge Services to argue that online discussion events can successfully connect global thinkers without costing the earth

We all know that a demanding funding environment, where we are expected to deliver more for less, can often conflict with personal and institutional commitments to the environment and to diversity agendas.

Digital technologies have provided development institutions with many opportunities for being more effective and efficient (in a broad sense). But when it comes to the efforts to substantively engage others in co-critiquing, re-constructing and advocating for research excellence, it seems that we still rely on face-to-face gatherings.

Why aren't we making more use of the Internet for debate and discussion?

As an approach for supporting policy influence to instigate real change, online discussions seem to offer benefits that real-world events can't.

For starters, they enable us:
  • to instantly connect disparate individuals from around the globe
  • to allow participants to engage around their existing commitments
  • to provide a safe(r) and empowering space for private discussions
  • to instantaneously document the event for retrospective viewing
  • to avoid the substantial time, cost and environmental overheads of bringing individuals together

Unfortunately, generating online discussion is much harder than it might seem. Despite the plethora of new tools for commenting and contributing to the web, stimulating user generated content remains a real challenge.

It is worth remembering that while the 'like button' sets the standard for easy / ubiquitous interaction across the web, in October 2011 Facebook chose to quietly remove its discussion functions completely.

A quick Google search shows that genuinely great e-discussions are rare – instead the web is littered with numerous poorly received examples with few if any comments.


What factors help make for 'great' online debates? What can we learn from each other?

I’d like to share some lessons that IDS Knowledge Services and partners have learned in our recent experience convening experts' e-discussions with prominent researchers / actors / practitioners as participants.

During 2011 and into 2012 we have used the Eldis Communities platform to host several such events on a range of topics, including: climate change, food security, philanthropy, social movements and gender mainstreaming. The events have been commissioned by the likes of Irish Aid, Oxfam and The Rockefeller Foundation.

While each is unique, they all share several commonalities. They are co-produced with partners, are funded by / contribute to wider programmes of work and are held behind closed doors.

One event focusing on Gender and Food Security* stands out in particular for teaching us how to be highly effective at stimulating participants. In that discussion, which lasted just 48 hours, a group of around 30 individuals collectively generated nearly 100 substantive contributions across 3 threads.

Top-tips for successful experts' discussions:

Here are some 'top-tips' that we learnt from doing this and other discussions (PDF). They highlight the wide range of things an e-discussion project team can / should do to maximise the chances of success.

If you don't have time to read all 15 points, here are my personal top three:
  1. Plan it with the same attention to detail that you'd give real-world events – e-discussions, like all social events, can buzz with energy or descend into an awkward silence. The time-pressure during them is intense and attempts to revive a flagging situation can look clumsy. So set things up well in advance and you'll be less likely to need to 'get someone talking' during the event itself.
  2. Scrimping on the budget is a false economy. While a successful online discussion will cost a fraction of a real-world event, it pays to think carefully about the particular roles required and to recruit a project team as appropriate. Fund them generously (in staff time / direct costs), give them plenty of lead-in time and reserve a slot after the event for reflection.
  3. Ensure your VIPs (Very Important Participants) feel special. As they are almost certainly super busy, persuade them to give up some of their precious time by making your invitations personal (not just the greetings!), mention those who recommended them, and highlight other eminent participants who will be involved.
Lastly, do what any good host does: keep calm and carry on...


* Thanks to Susanne Turrall (convenor) and Carl Jackson (facilitator), who played critical roles in this event and helped to pull together our learning from it.

Tuesday, 7 February 2012

Monitoring and evaluation in partnerships: why learning comes first

Guest post: Andre Ling, Research Officer/Technical Assistance with the Agricultural Learning and Impacts Network (ALINe) at the Institute of Development Studies (IDS)

The Impact and Learning Team and the ALINe project members share insights and good practice during bi-monthly Learning Labs, an afternoon of learning and reflection framed by our project work and research questions. Andre shares his reflections in this blog post.

Last week's Learning Lab with the Impact and Learning Team (ILT) touched on two topics that are probably relevant to just about all actors involved in development processes: partnerships and sustainability. Both are buzzwords, frequently used and misused, open to a range of interpretations and often obscuring the hard realities that confront development practice.

This post looks specifically at 'partnership' and what approaches to monitoring and evaluation (M&E) may be most appropriate in a partnership context.


The term 'partnership' frequently glosses over the complexity of inter-organisational relationships:
  • The many reasons for which organisations find themselves in a partnership to begin with
  • The power asymmetries inherent in the common model of grantor-grantees that defines many partnerships
  • The different interests, values or ideological positions of partners
  • The specific organisational development needs of different partners


The prevalence of such factors can lead one to the conclusion that most partnerships are marriages of convenience, as those joining such ventures do so largely to serve their own interests. But, at the same time, it barely needs saying that an individual actor can do little to address the complex problems of our times.

So how do we go on together honestly? How do we make partnerships work?

And what kind of M&E makes sense in partnership contexts?



Taking stock of power asymmetries within the partnership and mapping out the divergent interests, values, worldviews, spheres of concern and needs of partner organisations may be a good place to start. Often the leading agency in a partnership (usually a grant-maker of sorts) will have goals of achieving some kind of systemic change. The other partners (frequently grantees) may be more concerned with implementing specific activities to contribute to more localised changes, and be less comfortable with confronting systemic change. Such a divergence creates a critical disjuncture, resulting in the leading agency wanting to make changes in the other partners – to make them see things the same way. Failure to engage with this rift in a sensitive manner can lead to a variety of problems, for example low trust and limited ownership.

One response is, perhaps, to establish a joint M&E framework; a system of indicators and reporting requirements that can be deployed to ensure that all organisations are 'on the same page' and are held accountable to achieving desired outcomes in a standardised way. The danger here is that compliance and standardisation – both power mechanisms – ride roughshod over what might be considered the more crucial goal: learning how to work together effectively to achieve mutually desired change.

This is not to dismiss the contribution of joint M&E frameworks but rather to put them in their rightful place as servants of learning (individual, organisational or institutional) rather than formalities implemented for their own sake. To clarify, the point is that learning encompasses a far wider set of practices and activities than what usually goes by the name of M&E and, furthermore, that regular M&E is not in itself enough for learning to take place. Just think of all the evaluation reports that have been shelved and forgotten.

An emphasis on learning prior to M&E opens the door to a potentially more diverse set of tools, techniques and practices that can be used to (a) build relationships of mutual trust; (b) reveal and question entrenched assumptions; (c) share and cultivate more systemic ways of thinking about the nature of the problems that the partnership is seeking to tackle.

This can prepare the ground for co-creating an M&E system that is both oriented toward learning and situated within a partnership culture that is supportive of learning. A significant consideration here is that this means taking the partnership itself as a unit of analysis to be monitored, evaluated and learned from and through over time.

Monday, 30 January 2012

Change is hard but not impossible with a little help from ELFs (part 2)

Guest post: We welcome back Elise Wach, Evaluation Consultant at the Institute of Development Studies (IDS)

To follow on from my post last week on how change is hard to achieve, even when you know an approach isn't working, here's an update on the Learning Retreat that the Impact and Learning Team facilitated for the IRC (International Water and Sanitation Centre) Triple-S Initiative.

We kicked off the 2-day retreat by hearing reports back on the variety of learning streams that had taken place so far and assessing progress.  This led into discussions about the approach of the initiative as well as the approach of the evaluation and learning.  Were course corrections needed in order to achieve the desired outcomes of the initiative?  Were adjustments to the evaluation and learning methodologies needed in order to better capture initiative progress?

In terms of the approach of the initiative, one of the primary concerns was related to external communication - getting the right information out to the right people in a timely manner.  IRC’s key staff immediately made commitments to improve this:
  1. Set up an organisational blog to get discussions started and put information out on the web (see Emilie Wilson’s entry about the merits of blogging).  
  2. Post more information and resources on the website, even if not yet polished.  A key phrase throughout the retreat was to ‘not let the perfect be the enemy of the perfectly good’.  Resources and information may take the form of videos, slide decks, and less formal reports.
  3. Improve the layout of the website to make it more user-friendly, and the search engine tags to make it easier to find.
Other commitments were also made to improve the approach of the initiative, including a re-visit to the Theory of Change. This will surely be discussed in detail at the next Learning Retreat, which will go into more depth on the data and its synthesis and will also include the advisory group for the initiative (scheduled for the end of April).

See more at http://www.waterservicesthatlast.org  

In terms of the evaluation and learning approaches, a few changes will be made to the timing and scope of data collection and analysis.  For example, given the disconnect between policy and practice discussed in last week’s entry, it was decided that in addition to analysing the policy documents of key agencies and organisations in the water sector, IRC and ILT will also analyse documents that might indicate that shifts in policies are being reflected in practice, such as calls for proposals and project reports.  That information will of course be posted on the Triple-S website in an effort to give these agencies a little extra nudge towards sustainable services that last.  



It was also determined that impact weighting for different outcomes and milestones might prove useful, along the lines of DFID’s revised approach to logframes (PDF) (though Triple-S is looking to move away from a linear/tabular format and towards more mind mapping and video).  
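
As a rough illustration of what impact weighting boils down to arithmetically, here is a minimal sketch with invented outcomes, weights and progress scores; it is a hedged example only, not DFID's method or the scheme Triple-S will adopt.

```python
# Hypothetical impact weighting: each outcome carries a weight reflecting its
# relative importance and a progress score between 0 and 1; overall progress
# is the weighted average. Outcomes, weights and scores are invented.

outcomes = [
    # (name, weight, progress between 0 and 1)
    ("Policy discourse reflects a service delivery approach", 0.5, 0.6),
    ("Funding practices begin to shift",                      0.3, 0.2),
    ("District-level capacity strengthened",                  0.2, 0.4),
]

total_weight = sum(weight for _, weight, _ in outcomes)
overall = sum(weight * progress for _, weight, progress in outcomes) / total_weight

for name, weight, progress in outcomes:
    print(f"{name}: weight {weight:.0%}, progress {progress:.0%}")
print(f"Overall weighted progress: {overall:.0%}")
```

The weighting simply makes explicit a judgement that is otherwise made implicitly: that progress on some outcomes matters more than progress on others.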

What was interesting to me about the retreat was the fact that none of the data discussed revealed anything incredibly new or surprising to the Triple-S team.  But for some reason, getting a group of people in a room together and setting aside time to specifically discuss progress and obstacles can be extremely effective for getting decisions made…especially if External Learning Facilitators (endearingly referred to as ELFs) are there to help the process along.

IRC has committed to starting their blog in the next week and will soon be posting more resources on their website, including the full report from the Learning Retreat (which will cover much more than I’m able to include here). As for making it easier to find the Triple-S website, you can try to Google it for yourself, but I think they’re still working on this one (unfortunately there’s a clothing company that goes by the same name!), so just in case, the "Water Services that Last" website is here.

Tuesday, 17 January 2012

Change is hard

By Elise Wach

Elise Wach is currently working with the Impact and Learning team to support the IRC International Water and Sanitation Centre through a learning process around its Triple-S Initiative.

Penelope Beynon’s blog about failure and learning brings up some interesting and very valid points about the recent attention to failure, evaluation and learning in development. While it is essential (and quite difficult) for the development community to know when their programming is unsuccessful, and to admit this, it doesn’t do any good if we don’t then learn from our failures and change our approaches so as not to repeat them.

But how does change happen?

The IRC International Water and Sanitation Centre is in the middle of a six-year initiative which is attempting to shift the rural water sector away from a one-off, infrastructure-based approach towards a Service Delivery Approach: what they term Sustainable Service Delivery at Scale (Triple-S). How to enact change is exactly what they’re trying to figure out.

For decades, most development organisations and agencies in the rural water sector (like most sectors) did not know about, or did not want to know about, their failures. They were oblivious to the fact that the majority of their boreholes fell into disrepair within five to seven years, or that many were never even used at all.  Without this knowledge, it is easy to see why development organisations and agencies charged ahead with the same unsuccessful approaches.  

However, in a recent round of interviews I conducted with key stakeholders in the sector (as part of the Impact and Learning Team's support to IRC on this initiative),  it was overwhelmingly apparent that now, everyone in the water sector knows that the standard approach to rural water supply has been ineffective and unsustainable. It is common knowledge that the sector has been failing.  

So the rural water sector has overcome that essential, but difficult first hurdle of finding out about and acknowledging failure. 

And in the most recent round of interviews, key stakeholders in the sector generally agreed that the discourse at the top – the policy-level – is starting to reflect these revelations.  But funding practices and implementation on the ground seem to have continued relatively unchanged: the same infrastructure-focused, unsuccessful approaches continue to dominate. Why?

Because change is hard.

One interviewee explained, ‘We’ve been engineered to do small-scale piecemeal interventions…so of course shifting to more of a sustainable approach at scale (vis-à-vis financial flows, regulations, norms, and standards) is going to take time.  There will be resistance to change.’

Changing approaches to realising change


To date, IRC’s Triple-S initiative has been attempting to accelerate changes within the sector through three main approaches:
  • Relationship-led (i.e. using champions to mobilise change)
  • Value-led (i.e. leveraging peer pressure and creating coalitions for change)
  • Evidence-led (i.e. providing proof that the current approaches don’t work and proof that other ones do)
The initiative has also been exploring the relationships between policy, funding and practice.

This week, IRC and the Impact and Learning team are holding a learning retreat to go over the findings from the most recent round of stakeholder interviews and other evaluation data.  Based on this, IRC may refine its Theory of Change and tweak its approach to help maximise the efficacy of the initiative moving forward. 

We’ll report back on the outcomes of that learning retreat next week.

In the meantime, a final thought. If more development actors followed a similar approach to IRC (i.e. if they thought through Theories of Change for their approaches and periodically revised them based on real-time evaluations and analysis), it’s not unrealistic to think that the way we work in ‘development’ would be quite different. That is, the phenomenon that Penelope termed as ‘reinventing broken wheels’ might not be as common. Change is hard, but not impossible, and it is certainly needed.

Wednesday, 11 January 2012

Are we reinventing broken wheels? Let’s talk about the ‘F’ word

By Penelope Beynon

A common saying goes "The only real failure in life is the failure to try."  I disagree.

I think the worst failure in life (and in knowledge brokering) is the repetition of an established mistake. That is to say, the worst failure is the failure to learn.

In recent months, I have come across an increasing number of websites, discussions and articles that almost celebrate failure, in an effort to foster a culture of sharing and learning from others’ mistakes. The Engineers Without Borders (EWB) website Admitting Failures is a good example. In their own words:

"By hiding our failures, we are condemning ourselves to repeat them and we are stifling innovation. In doing so, we are condemning ourselves to continue under-performance in the development sector.

Conversely, by admitting our failures – publicly sharing them not as shameful acts, but as important lessons – we contribute to a culture in development where failure is recognized as essential to success."

While I agree with the premise, often it is not fully realised.


Ironically, perhaps, several of the ‘failures’ admitted on the EWB website are, in fact, examples of people’s failure to learn from past mistakes – their own and those of others. That is, they are reinventing broken wheels, sometimes under the guise of 'innovation'.



Innovation is important for progress, and with innovation comes a certain level of risk. But I think these risks need to be calculated, and one of the key considerations should be a thorough investigation of whether this particular experiment is truly an innovation or whether it has already been tested elsewhere. That is, an honest commitment to learning before doing as well as learning after doing. I hear the echo of Catherine’s recent blog, where she challenges knowledge brokers to practice what they preach.

Lessons identified or lessons learnt? 


Learning is a big theme for the Impact and Learning Team at IDS  and we have recently been thinking a lot about the difference between a lesson identified and a lesson learned.

In our view, a lesson is only really 'learned' when the implications of the lesson are acted upon. Far too often we see After Action Reviews and evaluation documents that recite from their own experience ‘lessons’ that are insights long established internally and already documented in the experience of others (e.g. developing partnerships takes time, communication matters, etc.). Very seldom does anyone pick up that the worst failure here was not the failure to communicate but the failure to identify ahead of time that communication matters and to learn from others’ experiences about how to do it well.

One outstanding example of a lesson that was learned (albeit the hard way) is retold by Lieven Claessens, a researcher from the International Potato Centre (CIP), in two short videos produced by the Consultative Group on International Agricultural Research (CGIAR)'s ICT-KM programme.

In the first video, Claessens identifies the lessons by bravely telling a rather sobering story about his failure to communicate research findings in a way that people likely to be affected could understand and use for decision making. Had the findings of his 2007 research been acted on, the devastating effects of the 2010 mudslides in Eastern Uganda could have been mitigated, potentially saving the lives of hundreds of people and the livelihoods of hundreds more.  In his second video, Claessens evidences his learning by telling how he has changed his approach and commitment to communicating research to ensure he does not repeat this same mistake.

I find Claessens' story deeply moving for two reasons.

Firstly, I take my hat off to anyone who owns up to their part in a failure with such devastating consequences. Especially where that failure could as easily have been passed off to someone else.

Secondly, I find the story unique in its clarity about the link between research communication and wellbeing outcomes – or, in this case, between failure to communicate research and negative outcomes. Often that link is much less clear for knowledge brokering. In fact, just as it is difficult (if not impossible) to evidence the attribution of development outcomes to knowledge brokering work, it is equally difficult (if not impossible) to attribute negative development outcomes to failures in the same area. Perhaps this provides something of a safety net that allows us to distance ourselves from consequences, or maybe it is one of the reasons that it is apparently so hard to talk about failure in the knowledge brokering arena.

Tuesday, 3 January 2012

Buzzing about brokers: knowledge brokers reach across silos

By Catherine Fisher

Early in December I found myself in the unusual situation of being in a room full of people talking about knowledge brokering at a conference entitled "Bridging the gap between research, policy and practice: the importance of intermediaries (knowledge brokers) in producing research impact"* organised by the ESRC Policy and Research Genomics forum.

The event brought together people from UK universities, NGOs and public bodies ranging from health to education, and a sprinkling of upbeat Canadians. The development sector was well represented, with DFID the best represented of UK government departments, perhaps reflecting the emphasis placed on evidence-based policy and research impact by DFID itself and within the development sector more broadly.

It was the first time I had attended a conference of this kind in the UK so I was unsure what to expect. We know that knowledge about knowledge brokering seems to be silo-ed, not crossing between sectors. There are also differences in terms used to describe this kind of work. So as a presenter I was nervous I would be stating the obvious to a crowd who knew far more than I did. As conversation and coffee flowed, my fears were allayed: I had a lot to learn but, as I reflect below, the debates in the development sector I have been involved in are not miles away from debates elsewhere and in fact have something to add.

I presented as part of a panel exploring Knowledge Brokering in Development Contexts, alongside Kirsty Newman from INASP, Ajoy Datta from ODI and Matthew Harvey from DFID (all presentations are available on the conference webpage; our session was 3E).

Here I share 5 of my reflections from the event:

The term "knowledge brokering" encompasses a wide range of action
I was not the only person to reflect that the term "knowledge brokering" was being used differently by different people. Many people were using "knowledge brokering" to describe what I understand to be “research communication”, that is, trying to ensure a piece of research is effectively communicated so that it has impact. This is in contrast to how I understand knowledge brokering, which I see as about helping to ensure that people are able to access research when they need it and that decision-making processes are informed by a wide range of research. Put simply, it's the difference between seeking to change a policy or practice to reflect the findings of a piece of research (research impact) as opposed to seeking to change the behaviours of those in policy processes so that they draw on a wide range of research (evidence informed policy). There are of course grey areas between these extremes, for example, knowledge brokers within universities who seek to ensure that the knowledge of that university is mobilised for the community in which they are located: the Knowledge Mobilisation Unit at York University in Canada is a great example of this kind of practice that effectively sits between the extremes I have described.

Why we need labels (even if we hate talking about them)
Which brings me to my next point! People resent the term "knowledge brokering" as much as they resent talking about labels: for an interesting debate about the value of a label see the KMBeing blog. Personally, I feel that without a term to describe this kind of work we would be unable to come together to discuss it (what would you call the conference/network?!). Conversely, if we use the same term to discuss totally different things we risk confusing rather than clarifying our work. The summary of the Knowledge Brokers Forum discussion about terms and concepts is a good attempt to clarify and understand terms. I still feel that language is the main tool we have to communicate our ideas and that it matters!


Consideration of power and politics: development sector has something to add
I was a little nervous that the debate about knowledge brokering would be very advanced, and the insights I shared in my presentation would be stating the obvious. Yet this did not seem to be the case: many of the issues raised during plenary and earlier sessions were familiar (e.g. pros and cons of the policy brief as a communications tool, how to motivate researchers to communicate their work, etc). The presentations from the development sector raised two areas in particular that did not appear in other presentations I attended. Firstly, an attempt to understand politics with big and small “p”: looking at the contexts and motivations around decision-making. Secondly, a consideration of power and equity within knowledge brokering and asking “whose knowledge counts?”

What is a good knowledge broker? A fleet-footed, cheerleading, creative therapist! 

A highlight for me was the presentation by David Phipps (York Uni) and Sarah Morton (Centre for Research on Family and Relationships) exploring the qualities of a good knowledge broker (pdf). From their experience, it is someone who is fleet-footed, a cheerleader, creative, and a therapist. That is, they have soft skills or competencies rather than specific technical capacities (although they will need these too!), plus a passion for the area, tact, negotiation and commitment. Like David and Sarah, I think the soft skills of knowledge brokers are key; a paper I wrote last year entitled Five Characteristics of Effective Intermediary Organisations (PDF) explored how these soft skills can be supported and enabled at an organisational level.


Why don’t knowledge brokers practice what they preach?
As part of a devastating critique of the ESRC “Pathways to Impact” toolkit, Dr Simon Pardoe pointed out how little reference it made to evidence from social science that is relevant to the art and science of effective knowledge brokering. This observation that knowledge brokering somehow has no need to be evidence-based itself has emerged a number of times, for example in the summary of the Knowledge Brokers Forum discussion, which recognised the need for “greater linking of theory and practice”. I wonder whether the hybrid nature of the role means there are so many potential bodies of knowledge to draw on that people don’t draw on any! Sarah Morton and David Phipps talked of their practical ways of addressing this, through a “practice what you preach” Community of Practice and “learning days” respectively. They have a forthcoming paper to watch out for.

Any of these areas could be a blog posting, a paper or indeed a PhD themselves – I have just skimmed the surface of a great day. I hope the enthusiasm generated and connections formed will build towards greater understanding of the theory and practice of knowledge brokering.

Links:
Archive of tweets posted from the conference: contains some interesting thoughts and links to resources.

* The long titles of these events reflect the difficulty of describing them and the lack of shared language – check out the conference I organised in collaboration with HSRC in 2008, which laboured under the title “Locating the Power of In-between: how research brokers and intermediaries support evidence-based pro-poor policy and practice”.