
Thursday, 19 September 2013

The 4 Cs of Google Adwords – content, context, clicks and conversions

By Alan Stanley

I manage Eldis – an online platform providing free access to international development research and policy documents. We’re a global service with roughly 45% of our users in developing countries and a strong emphasis on highlighting research produced by the smaller research organisations and networks based in the so-called global “South”. We get about half a million visitors per year.

Like most other online knowledge platforms, Eldis relies heavily on Google as a source of traffic to our website (61% last year). For this we depend on getting our links into the listings on search engine results pages that appear because of their relevance to users' search terms (referred to as natural or organic search).

Recently though, with the support of a small amount of funding from the Climate and Development Knowledge Network, we’ve been exploring the use of Google Adwords (pay-per-click adverts appearing prominently on Google results pages) to help us achieve some of our marketing and promotion objectives. This short article highlights some of what we learned from this process and links to a longer draft learning paper we’ve produced which describes the process we went through in more detail. We’re not experts - we started from pretty much zero knowledge and still have many unanswered questions. So my hope is that this short article might prompt others to share their experience from which we can all learn.

Jargon alert


Working in both international development and knowledge brokering requires a certain natural tolerance (even a slight fondness) for jargon, but the world of pay-per-click advertising takes this to a whole new level. Working across a small team of three to five people, we discovered that it took a few weeks of regular meetings before we could even have a straightforward conversation with each other about what we'd been doing! In the end we put together a basic glossary of terms to help us (see the learning paper for the full list).



Matching content to context is key


We experimented with Adwords campaigns promoting three different Eldis services, running similar campaigns in 18 different countries. What clearly generated the most clicks in the most cost-effective way was marketing our country-specific content (e.g. the Bangladesh Country Profile) to audiences in that country (Bangladesh).

This might seem obvious – Google users in Bangladesh interested in climate change are most likely to be interested in information about climate change in Bangladesh – but something called "quality score" also comes into play. In determining how prominently your ads will be displayed, and what you pay for that position, Google looks at how closely the text on the web page you are promoting matches the search terms you have chosen to target and the text of the ad that will be displayed in the search results. A strong match gives a higher quality score, which boosts the prominence and reduces the cost of your ad. Our country profiles content clearly performed better in this regard.



Concentrate on conversions and not clicks


We began our Adwords campaigns with two broad objectives – firstly to boost traffic to our site (overall but also specifically from our priority countries) and secondly to increase the number of regular users (return visitors) using our services.

We soon found that generating large numbers of clicks was relatively straightforward and, with some tweaking of keywords, budget and how much we were willing to pay for each ad, it was possible to steadily reduce the cost-per-click and improve the cost efficiency of the campaigns.

Success! Well, no, because as we did this we were also looking at the behaviour of our new users when they arrived at our site, and found that the vast majority left again almost immediately and, worse, didn't appear to ever come back. In other words, we were paying to bring new users to our site and then disappoint them – hardly good value for money!

This led us to rethink our strategy. Firstly we refocused our campaigns to emphasise quality over quantity - to try to make sure the people that clicked on our ads were likely to be interested in what we were offering rather than just focusing on getting as many as possible within the limits of our budget. Secondly we focused on what we wanted the users to do on our site once they arrived – and re-worked the wording and presentation of our pages to reflect this.

We're sure we still have a long way to go with this. For example, one of our targets is to get new visitors to subscribe to our email newsletter (in the jargon this is known as a goal conversion). By adjusting our keywords and re-writing and re-organising our subscribe page we've managed to double the conversion rate. Success! Well, yes, but we're still only getting 6% of new visitors to subscribe (up from 3%!).
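The shift from counting clicks to counting conversions can be made concrete with some arithmetic. The sketch below uses entirely hypothetical figures (the campaign names and numbers are invented for illustration, not taken from the Eldis campaigns): a campaign with a cheaper cost-per-click can still be the worse buy once you divide spend by conversions rather than by clicks.

```python
# Hypothetical figures illustrating clicks vs conversions.
# Campaign A looks cheaper per click; campaign B delivers
# newsletter subscribers at a lower cost per conversion.

campaigns = {
    "A (broad keywords)": {"spend": 100.0, "clicks": 500, "conversions": 5},
    "B (focused keywords)": {"spend": 100.0, "clicks": 200, "conversions": 12},
}

for name, c in campaigns.items():
    cpc = c["spend"] / c["clicks"]               # cost per click
    cpa = c["spend"] / c["conversions"]          # cost per conversion
    rate = 100 * c["conversions"] / c["clicks"]  # conversion rate, %
    print(f"{name}: CPC £{cpc:.2f}, cost/conversion £{cpa:.2f}, rate {rate:.1f}%")
```

On these made-up numbers, campaign A costs £0.20 per click but £20 per subscriber, while campaign B costs £0.50 per click but only about £8.33 per subscriber – which is the sense in which quality beat quantity in the campaigns described above.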

Google Adwords - useful but complex and time-consuming


We’ve found Google Adwords to be a useful tool but complex and time-consuming to use effectively. It’s particularly helped us to reach new audiences in countries where, without active partners or contacts, we would have struggled to use more conventional marketing approaches. It isn’t cheap – either in the cost of advertising or the level of staff time required – but we have found it to be broadly cost-effective compared to other approaches we might use. Adwords is highly geared towards the commercial world where success is measured in sales so for a non-profit operation just engaging with it has challenged us to think very differently about our whole approach to producing web-based services – from content to target audiences. That thinking in itself has been valuable and I’m pretty sure we run a better service now as a result.

* Read more about this experience in the draft IDS Knowledge Services learning paper "Learning from Google AdWords Marketing" by Viivi Erkkilä, Fatema Rajabali and Alan Stanley. This blog was originally published on the Knowledge Brokers Forum.

Alan Stanley is a Senior Thematic Convenor at the Institute of Development Studies, and manages the Eldis programme and services. 

Wednesday, 7 August 2013

Open data and increasing the impact of research? It's a piece of cake!

By Duncan Edwards

I talk to a lot of friends and colleagues who work in research, knowledge intermediary, and development organisations about some of the open data work I've been doing in relation to research communications. Their usual response is "so it's about technology?" or "open data is about governance and transparency, right?". Well, no – it's not just about technology, and it's broader than governance and transparency.

I believe that there is real potential for open data approaches in increasing the impact of research knowledge for poverty reduction and social justice. In this post I outline how I see Open Data fitting within a theory of change of how research knowledge can influence development.

Every year thousands of datasets, reports and articles are generated about development issues. Yet much of this knowledge is kept in ‘information silos’ and remains unreachable and underused by broader development actors. Material is either not available or difficult to find online. There can be upfront fees, concerns regarding intellectual property rights, fears that institutions/practitioners don’t have the knowhow, means or time to share, or political issues within an organisation that can mean this material is not used.

What is “Open data”? What is “Linked Open Data”? 

The Open Knowledge Foundation says “a piece of content or data is open if anyone is free to use, reuse, and redistribute it — subject only, at most, to the requirement to attribute and/or share-alike.”

The Wikipedia entry for Linked Data describes it as "a method of publishing structured data so that it can be interlinked and become more useful. It builds upon standard Web technologies such as HTTP and URIs, but rather than using them to serve web pages for human readers, it extends them to share information in a way that can be read automatically by computers. This enables data from different sources to be connected and queried…. the idea is very old and is closely related to concepts including database network models, citations between scholarly articles, and controlled headings in library catalogs."

So Linked Open Data can be described as Open Data which is published in a way that can be interlinked with other datasets. Think about two datasets with country categorisation – if you publish these as linked data, you can then link related content across the different datasets for any given country.
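A minimal sketch of that country example, using invented data: two independently published datasets that share a country identifier can be joined automatically. (In real Linked Data the shared key would be a URI such as http://dbpedia.org/resource/Bangladesh rather than a bare code, and the join would typically be done with RDF tooling; plain Python dictionaries stand in for that here.)

```python
# Two hypothetical datasets published by different organisations,
# sharing a country identifier.
research_outputs = [
    {"title": "Climate adaptation in coastal regions", "country": "BD"},
    {"title": "Drought-resistant crop trials", "country": "KE"},
]

country_indicators = [
    {"country": "BD", "name": "Bangladesh", "population_m": 150},
    {"country": "KE", "name": "Kenya", "population_m": 44},
]

# Index one dataset by the shared key, then link the other against it.
by_code = {row["country"]: row for row in country_indicators}

linked = [
    {**doc, "country_name": by_code[doc["country"]]["name"]}
    for doc in research_outputs
    if doc["country"] in by_code
]

for item in linked:
    print(item["title"], "->", item["country_name"])
```

The point is that neither publisher needed to know about the other: agreeing on a shared, machine-readable identifier is what makes the interlinking possible.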

For more definitions and discussion on data see Tim Davies' post "Untangling the data debate: definitions and implications".


Why should Open Data be of interest to research producers? 

The way in which the Internet and technology have evolved means that instead of simply producing a website from which people can consume your content, you can open up your content so that others can make use of it, and link it in new and exciting ways.

There are many theories of change which look to articulate how research evidence can affect development policy and practice. The Knowledge Services department at the Institute of Development Studies (IDS) works with a theory of change which views access to, and demand for, research knowledge, along with the capacity to engage effectively with it, as critical elements to research evidence uptake and use in relation to decision-making within development. Open Data has significant potential in relation to the ‘access to’ element of this theory of change.

Contextualisation and new spaces 

When we think about access to research knowledge – we should go beyond simply having access to a research document. Instead we must look at whether research knowledge is available in a suitable format and language, and whether it has been contextualised in a way which makes sense to an audience within a given environment.



I like to use a Data cake metaphor developed by Mark Johnstone to illustrate this - if we consider research outputs to be the data/ingredients for the cake, then we organise, summarise and catalogue this (i.e. add meta-data) to ‘bake’ into our information cake. We then present this information in a way in which we feel is most useful and “palatable” to our intended audiences with the intention they will consume it and be able to make use of new knowledge. It’s in this area that Open Data approaches can really increase the potential uptake of research – if you make your information/ content open it creates the possibility that other intermediaries can easily make use of this content to contextualise and present it to their own users in a way that is more likely to be consumed.

Essentially, by opening up datasets of research materials you can reduce duplication and allow people to reuse, repurpose, and remix this content in many more spaces, thereby increasing the potential for research findings to be taken up and to influence change in the world.

While I see significant benefits in researchers making their outputs available and accessible in an open manner, we must also redress the dominance of knowledge generated in the global North. We need to continue to invest in strengthening intermediaries at local, national, and international levels to make use of research material and Open Data to influence positive change.

Duncan Edwards is the ICT Innovations Manager at the Institute of Development Studies (IDS) - you can follow him on Twitter: @duncan_ids

NOTE: an admission on Open Access – the original article this post is based on, Davies, T. and Edwards, D. (2012) 'Emerging Implications of Open and Linked Data for Knowledge Sharing in Development', IDS Bulletin 43 (5): 117-127, published in the IDS Bulletin "New Roles for Communication in Development?", is, ironically considering its subject matter, only partially open access (two free articles per issue). But you can access this article as green open access in OpenDocs: http://opendocs.ids.ac.uk/opendocs/handle/123456789/2247

Friday, 12 October 2012

Comparing research and oranges: what can we learn from value chain analysis?


By Elise Wach

A conversation with a colleague the other day about how we would communicate our research findings for a nutrition initiative struck me as remarkably similar to the conversations I held under orange trees in eastern Uganda about market research and value chain analysis a few years ago.

In Uganda, the government was promoting the cultivation of certain fruit trees based on studies that had shown which varieties were agriculturally viable.  Farmers transitioned their plots from cassava to orange trees on the assumption that there would be a market for their oranges once their trees started fruiting several years down the line. 

Obviously, to us value chain analysts, this was crazy – it was necessary to do some market research first to find out where there were opportunities for these fruits in the national, regional, or international markets, and then grow and prepare the right crops accordingly. 

What can we learn by applying value chain concepts to our research?
Image: statesymbolsusa.org
Our thinking was shaped by the countless instances of NGOs and donors promoting the production of something (whether oranges, soaps, water pumps, etc.) without doing their homework to find out if anyone might purchase it and under what conditions: whether there was an opportunity in the market for the product (e.g. will people buy the oranges to eat, or would a juicing company be interested in them?), whether the product could be improved to better meet consumer needs and preferences (e.g. are Navel oranges preferred over Valencia for juicing? What about for eating?), whether demand could be stimulated (e.g. can we promote orange juice as a healthy breakfast option to increase consumption?), etc. Without doing this research first, there is a significant risk that the oranges that farmers produce will not bring them the returns they hoped for.

So I wondered, is producing research first and then deciding how to communicate it afterwards the same as growing an orange and then deciding how and where it will be sold? 

We invest a substantial amount of time and resources into producing our research and for most of us, having our research reach other people is our primary concern.  

What does the value chain for research look like?

Our products, or 'oranges', are our research studies. Our 'market analysis' is our audience research. Our 'marketing approach' is our research uptake strategy. Our 'value chain analysis' is the research we do about 'evidence into policy' or 'knowledge into action'.

We work to strengthen the knowledge value chain. We build demand for our products by increasing the demand for research and evidence. We tailor our products to consumer needs by producing 3-page policy briefs for some and Working Papers for others. And we create or strengthen bridges between our producers and consumers (e.g. individuals such as knowledge intermediaries / knowledge brokers, or systems such as the policy support unit that IFPRI is supporting within the Ministry of Agriculture in Bangladesh). We understand that policy decisions are complex, just as markets have long been recognised as being complex (the outputs from value chain analysis, when done well, never look like actual chains, just as a theory of change never fits into log frame boxes).

Obviously, there are differences between research and oranges.  The shelf-life of research is clearly longer than the shelf-life of oranges, and research can be dusted off time and time again and used in a variety of ways, many of which we’re unable to anticipate.  But much of the impact of our research does rest on the timely communication of our findings.  While Andy Sumner’s research on the bottom billion will certainly facilitate a better historical understanding of poverty, I will venture to guess that he also hopes that this information will shape development policy so as to better tackle this issue. 

We do face many similar issues as our business-minded colleagues.  When is audience research necessary, and when does the ‘if we build it, they will come’ assumption apply?  Where is the line between research communication and advocacy?   How can we create demand and to what extent should we do so?  Do our ‘consumers’ have balanced information about the products available or did they only have access to the one that we produced (Catherine Fisher wrote an excellent blog about policy influence vs evidence informed policy)?  How much do we let the market dictate what we produce and how we produce it?   

Are there opportunities to apply lessons from our colleagues working in markets and value chains to our work on ‘evidence informed decision making’?  Should we be comparing research and oranges?

Elise Wach is a Consultant Evaluation & Learning Advisor with the Impact and Learning Team, at the Institute of Development Studies


Wednesday, 18 July 2012

Do policymakers in the South text more than talk?

By Simon Batchelor

In a news story today on the BBC website, the headline reads “Texting overtakes talking in UK, says Ofcom study”.  The article states that “While 58% of people communicated via texts on a daily basis in 2011, only 47% made a daily mobile call, [according to the UK's] communications industry regulator [Ofcom].”

As readers of this blog will know, we have been conducting our own study in 6 countries on the information ecosystem* of policy actors.  You can find our previous blogs, here and here.

While we asked participants in the study about what ICT technology they owned, how frequently they used the Internet, how they searched for information, and so on, we did not ask them specifically about their preferred form of mobile phone use (texting or talking). However, the Ofcom report which prompted the BBC news story focuses on much more than texting – in fact, it covers a very similar set of questions to our own study (indeed we used the Pew US studies to help shape our own study).

For example, the UK Ofcom report states that "39% of (UK) adults now own a smartphone, a 12% increase on 2010." How does this compare with policy actors in the South? The graph below shows that of our sample of 100 actors in each country, a similar proportion have at least one smartphone. In fact some people have more than one, since cross-network calls are often expensive enough that it is worth carrying two phones to keep call costs down. This is particularly true of, say, Ghana, where several respondents had two or three smartphones.


The UK report also states that "Tablet ownership is also on the rise, with 11% owning such a device, up from 2% last year." Interestingly, as this blog showed, tablet ownership among policy actors is at an equivalent level – currently about 15% (from the updated data set). We concluded in that blog that this ownership of technology suggests that information intermediaries seeking to get evidence in front of policy actors, in order to inform their decisions, should indeed be using these channels.

The Ofcom report also offers detail on the internet behaviour of UK households – contrasting the behaviour of 16 to 24 year olds with others. Of course, in our study we didn't have many 16–24 year olds, as we were interviewing those in leadership and management positions. However, we did note that the younger respondents tended to do more with their smartphones and tablets than older respondents – with the exception of cases where older respondents let their children play with their phones. Where this occurred, the older respondents themselves had as good a knowledge of trickier smartphone activities, such as downloading an app or uploading a video, as the younger respondents.

And finally the Ofcom report states that "Two thirds of internet users have accessed Facebook." From our preliminary data, almost 70% of our respondents have accessed internet communities. Keeping in mind that there are alternative social networks to Facebook on the world scene (hard to believe!), this again shows an equivalent figure. What is much more surprising from our data is that nearly 60% of those who have smartphones have accessed social networks from their phones. Unfortunately, the Ofcom report doesn't say how people access their social networks.

Conclusion? As we have previously noted in our work, the respondents to our survey show that technology use among policy actors in the South pretty much mirrors the average household in the USA (Pew Internet and American Life Project research). With this Ofcom report on UK behaviour, we can see that our research findings also appear to mirror technology use in the UK.

If this is so (and this is where I am speculating), then perhaps it is reasonable to assume that the details of how people surf might also hold true? For instance, Ofcom report that "With two-thirds of internet users on Facebook, it generates almost a quarter of all referred traffic to YouTube (23.7%), in contrast to Google's 32.3%. Facebook also refers traffic to other popular websites: BBC (11.2%), eBay (6.7%), Twitter (3.8%), and Wikipedia (3.6%)." If policy actors are members of social networking sites, then perhaps Facebook is driving them to sites as much as their own searching of Google. If this is the case, then it becomes all the more important that knowledge intermediaries use social networks to bring evidence to the attention of policy actors.

Thursday, 17 May 2012

Reflections on the K* summit: beyond K-Star-wars?

By Catherine Fisher

It was only a matter of time before someone made the KStarWars joke at the K* Conference that took place at the end of April in Canada. I’m only sorry it wasn’t me!

However, the K* Conference was notable not for its battles, but for the sense of commonality that emerged among the participants and for the momentum for future action it generated. 


The K* summit aimed to connect practitioners working across the knowledge-policy-practice interfaces to advance K* theory and practice, spanning the different sectors, contexts and terms under which this kind of work is undertaken – for example Knowledge Mobilisation (KMb), Knowledge Sharing (KS), and Knowledge Transfer and Translation (KTT). Hence K*: an umbrella term that attempts to bypass terminology discussions.

This blog post provides links to some of the great reporting from the event, acknowledges some of the critiques that the event raised and points to the next steps for K*.    
  • The opening presentation highlighted how K* is about supporting processes of exchange and engagement between knowledge-policy-practice interfaces, not the achievement of particular outcomes. It was great to hear this point made by John Lavis, who has something of a guru status in K* in health. Other important points were about learning about context and what that means, not just saying it's important!
  • Another great metaphor courtesy of Charles Dhewa. The importance of multiple knowledges, knowledge hierarchies and the role of K* actors in helping to facilitate interactions between those knowledges was a recurring theme – e.g. see the video of Laurens Klerxx talking about multiple knowledges and innovation brokers.
  • As David Phipps explains in this video, participants from Canada, Ghana and Argentina were able to find considerable commonalities in their work with communities. This transnational comparison may be familiar to those of us who work in international development, but it was a first for many of the Canadian participants, who are doing really interesting work, for example in government ministries or communities. I think this points to a strength of the K* movement in connecting people that might not otherwise talk.
  • The conference illustrated the range and scope of K* work. For example, Jacquie Brown of the National Implementation Research Network, who helps communities to implement science, has learnt how this piece fits within the broader scope of K*. For me, seeing how different kinds of K* roles are played and how they intersect is important.

In this video, I share some of my reflections at the time: brokering in the Canadian context, including an example of brokering at the point of research commissioning; power dynamics in brokering; and the way that the informing role of knowledge brokering gets a "bum rap" compared to more relational knowledge brokering work. I also get distracted by bangs, crashes and the emergence of breakfast!

Critiques and the importance of engaging with them

The conference has generated some robust critiques. For example, Enrique Mendizabal sparked a discussion on his blog, On Think Tanks, with a range of critiques, including whether knowledge brokers are required, how knowledge is shared, and a critique of elitist professionalisation of this field. Scroll to the bottom of his blog post to read the responses, including mine. Meanwhile, Jaap Pels argued that the nature of the debate at K* was pretty basic knowledge-sharing stuff.

I think both of these critiques raise interesting points, but I think they constitute arguments for K*, not against it. K* recognises that knowledge work is changing and proliferating, and that there is considerable experience and understanding that is not shared across the different spaces in which the role is played. It aims to bring together bodies of expertise (for example that which Jaap Pels points to) to raise the game of all practitioners. It will hopefully provide spaces for debate and engagement with the kinds of critiques that Enrique raises.

So what next for K*?


The conference generated a range of areas for further collaborative action, and plans for where the K* initiative goes from here.

Areas for further collaborative action included:
  • Understanding impact: a group agreed to share the data collection tools they are already using. I'll be participating in this group, building on the work of the Knowledge Brokers Forum
  • K* in developing countries: a predominantly African group explored the particular dimensions of K* work in their contexts generating a number of action points
A group of participants gathered on Saturday to work out what comes next for K* as a whole. Consolidation of the K* Green Paper is considered an important next step – co-organiser Louise Shaxson will be leading this work. There are ideas for developing a more formalised network, which will be led by UNU-INWEH in the first instance.

UNU, who have led this process so far, remain committed and aim to get the support of UNU governance. The World Bank has already provided financial support. Support from such international bodies is important as it will embed the international nature of this initiative – though it is not without its risks!


So to borrow again from Star Wars, the force is, for now, with K*. The scale and ambition of the initiative, together with some indications of funding and high-profile support, suggest it has a future. However, it faces both practical and fundamental challenges.

Practical challenges include maintaining ownership and momentum on behalf of the largely volunteer force taking it forward for now, identifying its niche, and building connections across such a fragmented field of practice.

More fundamental challenges lie in ensuring that it really can generate value that will improve knowledge-policy-practice interfaces, rather than providing a talking shop for elitist actors.   




Catherine Fisher is a member of the K* Conference International Advisory Committee.

Thursday, 26 April 2012

Policy influence or evidence-informed policy: what is the difference?

By Catherine Fisher

“We all want a culture of evidence informed policy making, don’t we?” asked Dr Ruth Nyokabi Musila from African Institute for Development Policy (AFIDEP) at the opening of her presentation at the International Conference on Evidence Informed Policy.

It was a commitment to this ideal that had united over 50 researchers from 4 continents, brought together in Ile Ife, Nigeria, earlier this year. I was attending under the auspices of the IDS Mobilising Knowledge for Development Programme (MK4D) and had been invited to present and chair a session.

Policy influence is not the same as evidence informed policy
 
Throughout the conference I was struck by a blurring between the (admittedly closely related) concepts of research having policy influence and evidence informed policy. The difference seems pretty obvious to me but I sometimes struggle to explain it.  

Let me try this…  
  • Effective research communication (which aims to influence policy) is indicated by change in policy/process/discourse based on the research findings you are communicating.

  • Effective evidence informed policy is demonstrated by a culture (systems, processes, attitudes and behaviours) that mean that people in decision making processes regularly engage with research from a wide range of sources when formulating, implementing, reviewing policy.

And to illustrate this difference, here are two examples from the conference:


Firstly, Kakaire Ayub Kirunda shared his learning on how to influence policy. He observed that "while members of parliament might be an ultimate target, they hardly have time and it is their clerks and assistants who do the lion's share of their research..."

He added that, in a conversation with Ugandan MP Honourable Obua Denis Hamson, who also chairs the Science and Technology Committee of Parliament, about how he would want researchers to approach him with evidence, the MP suggested: "Probably the easiest way is to first give me a brief summary of your research findings. We can start from there."


Ah yes, the ubiquitous policy brief. IDS' Impact and Learning Team recently conducted some research around the effectiveness of these as a communication tool, but that is for another blog.

By contrast, an example of supporting evidence informed policy was brilliantly illustrated by Jorge Barreto. He described the creation of an "Evidence Centre" in Piripiri, a town in a poor region of Brazil. The Centre promoted the use of health evidence locally to improve municipal decision-making processes.

Over a beer the night before, Jorge had told me that infant mortality rates in Piripiri were far lower than in other similar towns, his colleague added “20 babies survive a year because of these local policies”.  

Jorge’s presentation concluded that “current efforts to improve local government’s capacity to use research evidence to define problems, find tested interventions, assessing the quality of global and local evidence and translating evidence to key stakeholders are worth continuing. This is our little contribution towards addressing the knowledge to action gap.” Not so little for those children who survive and their families.


I feel it is worth maintaining the distinction between policy influence and evidence informed policy as the activities you undertake to influence policy with research will be different to those you might undertake if you wish to bring about a culture of evidence informed policy.

Such as...Research communication versus knowledge brokering


Two areas of activity which seek to either influence policy and/or support evidence informed policy are research communication (sometimes referred to as research uptake) and knowledge brokering (sometimes referred to as knowledge mobilisation). These distinct activities also often get confused (see my earlier post Buzzing about brokers).

Since we work closely with both IDS Knowledge Services, which is engaged in knowledge brokering activities, and the IDS Communications Team, which focuses on supporting IDS research, this is something we decided to explore in more depth at an Impact and Learning team ‘learning lab’, a reflective practice tool we’ve been using to create a space for shared learning.

Here are some notes from the lab, which focused on "desired outcomes": 

"Research Communication and Knowledge Brokering get confused because while they start from different places (one piece of evidence versus many pieces of evidence) they use similar methods and communication tools (e.g. policy briefs). However, they can be untangled again when you look at the outcomes they are trying to achieve:
  • Desired outcomes of ‘Research Communicators’ relate to a change in a specific/thematic policy or practice i.e. you know RC activities have succeeded if a specific policy decision is made
  • Desired outcomes of ‘Knowledge Brokers’ relate to a change in the information-seeking and decision-making behaviour of policy/practice actors i.e. you know KB activities have succeeded if decision makers consider a diverse range of evidence to inform their decisions

Importantly, power matters: in Research Communication, the relationship between the researcher (or research institution) and decision maker makes a difference to whether the decision maker gets to hear about a specific piece of evidence (e.g. informal encounters, ‘Beer Buddies’), whereas knowledge brokers, such as the IDS Knowledge Services, can work to equalise that power imbalance for less powerful researchers (or research institutions). For example, the British Library for Development Studies’ work around improving access to research published in the global South."

I will explore how ‘politics’ comes to play on these two strands of research uptake activity in my next blog. Meanwhile, you can follow me on Twitter @CatherineF_IDS; I'm currently at the K* Conference in Hamilton, Canada.

Wednesday, 11 January 2012

Are we reinventing broken wheels? Let’s talk about the ‘F’ word

By Penelope Beynon

A common saying goes "The only real failure in life is the failure to try."  I disagree.

I think the worst failure in life (and in knowledge brokering) is the repetition of an established mistake. That is to say, the worst failure is the failure to learn.

In recent months, I have come across an increasing number of websites, discussions and articles that almost celebrate failure, in an effort to foster a culture of sharing and learning from others’ mistakes. The Engineers Without Borders (EWB) website Admitting Failures is a good example. In their own words:

"By hiding our failures, we are condemning ourselves to repeat them and we are stifling innovation. In doing so, we are condemning ourselves to continue under-performance in the development sector.

Conversely, by admitting our failures – publicly sharing them not as shameful acts, but as important lessons – we contribute to a culture in development where failure is recognized as essential to success."

While I agree with the premise, oftentimes it is not fully realised.
Image from: http://st-anley.blogspot.com


Ironically, perhaps, several of the ‘failures’ admitted on the EWB website are, in fact, examples of people’s failure to learn from past mistakes – their own and those of others. That is, they are reinventing broken wheels, sometimes under the guise of 'innovation'.



Innovation is important for progress, and with innovation comes a certain level of risk. But I think these risks need to be calculated, and one of the key considerations should be a thorough investigation of whether a particular experiment is truly an innovation or whether it has already been tested elsewhere. That is, an honest commitment to learning before doing as well as learning after doing. I hear the echo of Catherine’s recent blog, where she challenges knowledge brokers to practice what they preach.

Lessons identified or lessons learnt? 


Learning is a big theme for the Impact and Learning Team at IDS, and we have recently been thinking a lot about the difference between a lesson identified and a lesson learned.

In our view, a lesson is only really 'learned' when the implications of the lesson are acted upon. Far too often we see After Action Reviews and evaluation documents that recite from their own experience ‘lessons’ that are insights long established internally and already documented in the experience of others (e.g. developing partnerships takes time, communication matters, etc.). Very seldom does anyone pick up that the worst failure here was not the failure to communicate but the failure to identify ahead of time that communication matters and to learn from others’ experiences about how to do it well.

One outstanding example of a lesson that was learned (albeit the hard way) is retold by Lieven Claessens, a researcher from the International Potato Centre (CIP), in two short videos produced by the Consultative Group on International Agricultural Research (CGIAR)'s ICT-KM programme.

In the first video, Claessens identifies the lessons by bravely telling a rather sobering story about his failure to communicate research findings in a way that people likely to be affected could understand and use for decision making. Had the findings of his 2007 research been acted on, the devastating effects of the 2010 mudslides in Eastern Uganda could have been mitigated, potentially saving the lives of hundreds of people and the livelihoods of hundreds more.  In his second video, Claessens evidences his learning by telling how he has changed his approach and commitment to communicating research to ensure he does not repeat this same mistake.

I find Claessens' story deeply moving for two reasons.

Firstly, I take my hat off to anyone who owns up to their part in a failure with such devastating consequences, especially where that failure could as easily have been passed off to someone else.

Secondly, I find the story unique in its clarity about the link between research communication and wellbeing outcomes, or, in this case, between the failure to communicate research and negative outcomes. Often that link is much less clear for knowledge brokering. In fact, just as it is difficult (if not impossible) to evidence the attribution of development outcomes to knowledge brokering work, it is equally difficult (if not impossible) to attribute negative development outcomes to failure in the same area. Perhaps this provides something of a safety net that allows us to distance ourselves from consequences, or maybe it is one of the reasons that it is apparently so hard to talk about failure in the knowledge brokering arena.

Tuesday, 3 January 2012

Buzzing about brokers: knowledge brokers reach across silos

By Catherine Fisher

Early in December I found myself in the unusual situation of being in a room full of people talking about knowledge brokering at a conference entitled "Bridging the gap between research, policy and practice: the importance of intermediaries (knowledge brokers) in producing research impact"* organised by the ESRC Genomics Policy and Research Forum.

The event brought together people from UK universities, NGOs, public bodies ranging from health to education and a sprinkling of upbeat Canadians. The development sector was well represented, with DFID the best represented of UK government departments, perhaps reflecting the emphasis placed on evidence-based policy and research impact by DFID itself and within the development sector more broadly.

It was the first time I had attended a conference of this kind in the UK so I was unsure what to expect. We know that knowledge about knowledge brokering seems to be silo-ed, not crossing between sectors. There are also differences in terms used to describe this kind of work. So as a presenter I was nervous I would be stating the obvious to a crowd who knew far more than I did. As conversation and coffee flowed, my fears were allayed: I had a lot to learn but, as I reflect below, the debates in the development sector I have been involved in are not miles away from debates elsewhere and in fact have something to add.

I presented as part of a panel exploring Knowledge Brokering in Development Contexts, alongside Kirsty Newman from INASP, Ajoy Datta from ODI and Matthew Harvey from DFID (all presentations are available on the conference webpage; our session was 3E).

Here I share 5 of my reflections from the event:

The term "knowledge brokering" encompasses a wide range of action
I was not the only person to reflect that the term "knowledge brokering" was being used differently by different people. Many people were using "knowledge brokering" to describe what I understand to be “research communication”, that is, trying to ensure a piece of research is effectively communicated so that it has impact. This is in contrast to how I understand knowledge brokering, which I see as about helping to ensure that people are able to access research when they need it and that decision-making processes are informed by a wide range of research. Put simply, it's the difference between seeking to change a policy or practice to reflect the findings of a piece of research (research impact) as opposed to seeking to change the behaviours of those in policy processes so that they draw on a wide range of research (evidence informed policy). There are of course grey areas between these extremes, for example, knowledge brokers within universities who seek to ensure that the knowledge of that university is mobilised for the community in which they are located: the Knowledge Mobilisation Unit at York University in Canada is a great example of this kind of practice that effectively sits between the extremes I have described.

Why we need labels (even if we hate talking about them)
Which brings me to my next point! People resent the term "knowledge brokering" as much as they resent talking about labels: for an interesting debate about the value of a label, see the KMBeing blog. Personally, I feel that without a term to describe this kind of work we would be unable to come together to discuss it (what would you call the conference/network?!). Conversely, if we use the same term to discuss totally different things we risk confusing rather than clarifying our work. The summary of the Knowledge Brokers Forum discussion about terms and concepts is a good attempt to clarify and understand terms. I still feel that language is the main tool we have to communicate our ideas and that it matters!


Consideration of power and politics: development sector has something to add
I was a little nervous that the debate about knowledge brokering would be very advanced, and that the insights I shared in my presentation would be stating the obvious. Yet this did not seem to be the case: many of the issues raised during plenary and earlier sessions were familiar (e.g. the pros and cons of the policy brief as a communications tool, how to motivate researchers to communicate their work, etc). The presentations from the development sector raised two areas in particular that did not appear in other presentations I attended. Firstly, an attempt to understand politics with a big and small “p”: looking at the contexts and motivations around decision-making. Secondly, a consideration of power and equity within knowledge brokering, asking “whose knowledge counts?”

What is a good knowledge broker? A fleet-footed, cheerleading, creative therapist! 
Image credit: Mick Duncan

A highlight for me was the presentation by David Phipps (York Uni) and Sarah Morton (Centre for Research on Family and Relationships) exploring the qualities of a good knowledge broker (pdf). From their experience it is someone who is fleet-footed, a cheerleader, creative, and a therapist. That is, they have soft skills or competencies rather than specific technical capacities (although they will need these too!) plus a passion for the area, tact, negotiation and commitment. Like David and Sarah, I think the soft skills of knowledge brokers are key; a paper I wrote last year entitled Five Characteristics of Effective Intermediary Organisations (PDF) explored how these soft skills can be supported and enabled at an organisational level.


Why don’t knowledge brokers practice what they preach?
As part of a devastating critique of the ESRC “Pathways to Impact” toolkit, Dr Simon Pardoe pointed out how little reference it made to evidence from social science that is relevant to the art and science of effective knowledge brokering. This observation, that knowledge brokering somehow has no need to be evidence-based itself, has emerged a number of times, for example, in the summary of the Knowledge Brokers Forum discussion, which recognised the need for “greater linking of theory and practice”. I wonder whether the hybrid nature of the role means there are so many potential bodies of knowledge to draw on that people don’t draw on any! Sarah Morton and David Phipps talked of their practical ways of addressing this through a “practice what you preach” Community of Practice and “learning days” respectively. They have a forthcoming paper to watch out for.

Any of these areas could be a blog posting, a paper or indeed a PhD themselves – I have just skimmed the surface of a great day. I hope the enthusiasm generated and connections formed will build towards greater understanding of the theory and practice of knowledge brokering.

Links:
Archive of tweets posted from the conference: contains some interesting thoughts and links to resources.

* The long titles of these events reflect the difficulty of describing them and the lack of shared language – check out the conference I organised in collaboration with HSRC in 2008, which laboured under the title “Locating the Power of In-between: how research brokers and intermediaries support evidence-based pro-poor policy and practice".

Wednesday, 2 November 2011

Exploring the black box together: evaluating the impact of knowledge brokers

Cartoon by Sidney Harris (2007)
By Catherine Fisher

I love this cartoon! 

It seems to capture the idea of the "black box" that lies between the activities knowledge brokers and intermediaries undertake and the outcomes and impacts they seek to achieve. That’s not to say that they don’t achieve outcomes in the real world; rather, the pathways by which their work brings about change are difficult to unpack and evaluate.

The Knowledge Broker’s Forum (KBF) has started exploring this "black box" of how to evaluate the impact of knowledge brokers and intermediaries in an e-discussion running from 31 October until 9 November. I am (lightly) facilitating this discussion, along with Yaso Kunaratnam from IDS Knowledge Services.

If you would like to participate, you can sign up on the forum's website; it's open to anyone with an interest in this area.

Challenges in evaluating impact

We know there are a lot of challenges to evaluating the impact of knowledge brokering. Some challenges stem from the processes (psychological, social and political) through which knowledge and information bring about change, the contested nature of the relationship between research and better development results, and the difficulty of identifying contribution to any changes in real-world contexts. This is particularly challenging for actors that seek to convene, facilitate and connect rather than persuade or influence.

As well as these quite high level challenges, there are the very practical issues around lack of time and resources to dedicate to effectively understanding impact. These challenges are explored in a background paper (PDF) I prepared as food for thought for those taking part in the e-discussion.

With 400+ knowledge brokers from all over the world taking part in the e-discussion, I am not sure yet where it will go, but I am hoping that it will shed some light on the following areas:

Breadth and depth of impact and outcomes  

How far do people go to identify the ultimate outcomes of knowledge brokering work? I feel we can certainly go beyond immediate impact (e.g. personal learning) to push towards what that resulted in; however, I wonder if it is meaningful to start looking at human development and wellbeing indicators. It will be interesting to see how far others are going.

Understanding behaviour change

If knowledge brokering is about behaviour changes that ensure greater engagement with research evidence, how are people defining those behaviour changes and how are they measuring them? Are we too easily impressed with stories of information use when these could in fact hide some very poor decision-making behaviours?

Opportunities for standardisation of approaches and data collection

If people have come up with ways of doing this, is there any appetite for standardising approaches to enable greater comparison of data between different knowledge brokering initiatives? This would help us build a greater understanding of the contribution of knowledge brokers beyond the scope of any one broker’s evaluation.

I’ll also be interested to explore and challenge some of my assumptions – in particular that building some kind of theory or map of change is an important starting point for defining and then seeking to evaluate impact. This has been discussed previously on this blog and is a hot topic at the moment.

Our discussion will face challenges – not least that the huge variety of types of knowledge brokering, and of contexts in which it is undertaken, may mean there is not enough common interest. But I am sure that there is a lot of experience in the group that can be brought to bear on these questions and, in 10 days' time, we will have a better idea of what is known, who is keen to explore this further and, hopefully, how we could move forward to develop our understanding in this area.