By Tessa Lewin
What does validity mean in an environment where bloggers and journalists are often viewed as more credible, useful or accessible sources than researchers? How are the roles of researchers and research communicators changing?
This landscape has been undergoing a significant shift in recent years.
The emergence of new technologies has been accompanied by other shifts in the politics and business of development knowledge: the understanding of what constitutes ‘expert knowledge’, a growing emphasis on process over product in research, and new understandings of what drives social change and policy influence.
With the rise of participatory and co-constructed communications have come suggestions that the rigour and ‘hard evidence’ needed to influence policy has been neglected. As some have turned back to grassroots forms of communication such as community radio, they face ambivalence from others struggling to see what is new or innovative about such ‘archaic’ approaches.
Alongside colleagues Blane Harvey and Susie Page, I have written for and edited the latest edition of the IDS Bulletin journal, entitled New Roles for Communication in Development?
We wanted to explore these interesting changes by drawing on the experiences of practitioners, theorists and community intermediaries from a wide range of disciplines.
We came from a range of disciplines and experiences ourselves - I'm Communications Manager for the Pathways of Women's Empowerment research programme, Blane is a Research Fellow in the Climate Change team at IDS and has recently been working on Climate Airwaves, a community radio project, whereas Susie Page was manager of the Impact and Learning team, focused on 'how communicating research brings about change'.
The Bulletin's articles reflect the overlaps and disconnects within different fields (particularly on how new technologies, approaches and configurations of research communication are influencing the practice of development) and sit, at various points, in tension or consensus with one another. They reflect the unresolved nature of the politics and practice of research communication – and begin to map a complex picture of this arena.
We outline our thinking on this in more detail in the Bulletin's Introduction: Is development research communication coming of age? (PDF)
Over the next few months we will be inviting the contributors to this Bulletin to write a series of blog pieces, outlining and reflecting on their articles in the Bulletin.
Watch this space….
Tessa Lewin is Research Officer in the Participation, Power and Social Change research team at the Institute of Development Studies. She's also Communications Manager for the Pathways of Women's Empowerment research programme consortium.
Read the full blog series.
Friday, 12 October 2012
Comparing research and oranges: what can we learn from value chain analysis?
By Elise Wach
A conversation with a colleague the other day about how we would communicate our research findings for a nutrition initiative struck me as remarkably similar to the conversations I held under orange trees in eastern Uganda about market research and value chain analysis a few years ago.
In Uganda, the government was promoting the cultivation of certain fruit trees based on studies that had shown which varieties were agriculturally viable. Farmers transitioned their plots from cassava to orange trees on the assumption that there would be a market for their oranges once their trees started fruiting several years down the line.
Obviously, to us value chain analysts, this was crazy – it was necessary to do some market research first to find out where there were opportunities for these fruits in the national, regional, or international markets, and then grow and prepare the right crops accordingly.
What can we learn by applying value chain concepts to our research? Image: statesymbolsusa.org
Our thinking was shaped by the countless instances of NGOs and donors promoting the production of something (whether oranges, soaps, water pumps, etc.) without doing their homework to find out if anyone might purchase them and under what conditions: whether there was an opportunity in the market for the product (e.g. will people buy the oranges to eat, or would a juicing company be interested in them?), whether the product could be improved to better meet consumer needs and preferences (e.g. are Navel oranges preferred over Valencia for juicing? What about for eating?), whether demand could be stimulated (e.g. can we promote orange juice as a healthy breakfast option to increase consumption?), and so on. Without doing this research first, there is a significant risk that the oranges farmers produce will not bring them the returns they hoped for.
So I wondered: is producing research first and then deciding how to communicate it afterwards the same as growing an orange and then deciding how and where it will be sold?
We invest a substantial amount of time and resources into producing our research and, for most of us, having our research reach other people is our primary concern.
What does the value chain for research look like?
Our products, or ‘oranges’, are our research studies. Our ‘market analysis’ is our ‘audience research’. Our ‘marketing approach’ is our ‘research uptake strategy’. Our ‘value chain analysis’ is the research we do about ‘evidence into policy’ or ‘knowledge into action’.
We work to strengthen the knowledge value chain. We build demand for our products by increasing the demand for research and evidence. We tailor our products to consumer needs, producing 3-page policy briefs for some and Working Papers for others. And we create or strengthen bridges between our producers and consumers (e.g. individuals such as knowledge intermediaries / knowledge brokers, or systems such as the policy support unit that IFPRI is supporting within the Ministry of Agriculture in Bangladesh). We understand that policy decisions are complex, just as markets have long been recognised as being complex (the outputs from value chain analysis, when done well, never look like actual chains, just as a theory of change never fits into log frame boxes).
Obviously, there are differences between research and oranges. The shelf-life of research is clearly longer than the shelf-life of oranges, and research can be dusted off time and time again and used in a variety of ways, many of which we’re unable to anticipate. But much of the impact of our research does rest on the timely communication of our findings. While Andy Sumner’s research on the bottom billion will certainly facilitate a better historical understanding of poverty, I will venture to guess that he also hopes that this information will shape development policy so as to better tackle this issue.
We face many of the same issues as our business-minded colleagues. When is audience research necessary, and when does the ‘if we build it, they will come’ assumption apply? Where is the line between research communication and advocacy? How can we create demand, and to what extent should we do so? Do our ‘consumers’ have balanced information about the products available, or do they only have access to the one that we produced? (Catherine Fisher wrote an excellent blog about policy influence vs evidence-informed policy.) How much do we let the market dictate what we produce and how we produce it?
Are there opportunities to apply lessons from our colleagues working in markets and value chains to our work on ‘evidence-informed decision making’? Should we be comparing research and oranges?
Elise Wach is a Consultant Evaluation & Learning Advisor with the Impact and Learning Team, at the Institute of Development Studies
Wednesday, 26 September 2012
Laying the foundations of a knowledge sharing network
By Catherine Fisher

One theme that emerged in the working paper is the importance of establishing understandings that will underpin effective collaboration at the beginning of partnerships. This includes exploring more theoretical understandings about concepts (such as knowledge) as well as practical understandings about planning and communication. We argue that time spent on exploring understandings is important for a range of reasons, not least to help prevent the lead organisation dominating the construction of meaning within the partnership.
Using inception phases to explore understanding
Inception and set-up phases and meetings provide the opportunity to explore understandings, harnessing differences in opinion and perspective to best effect. However, inception and set-up meetings are generally very action-orientated and focus on identifying what is to be done by whom. The insights in the working paper point to the importance of also exploring why and how activities are undertaken as part of creating a knowledge sharing network.
Practical suggestions for inception meetings
The table below provides a few suggestions of questions to explore before establishing (or at the beginning of establishing) a knowledge sharing network, and ideas for processes that could be used to explore them. I have used some but not all of these approaches, and this should not be taken as “best” or even “good” practice. Instead I hope it will be food for thought about ways of addressing some of the issues raised in the paper.
I hope you enjoy the full paper and, if you are planning to set up a knowledge sharing network in partnership, that the following ideas are a useful input to any inception or set-up meetings.
| Questions to explore | Suggestions on process |
| --- | --- |
| How do we understand key concepts? | Explore understandings of key concepts by individually completing the phrase “Knowledge is…” (repeating for other concepts, e.g. “knowledge sharing is…”, “communication is…”, “participation is…”). First by writing it down, then moving around to compare with others. Reflect together as a group on similarities and differences. |
| What’s the purpose of this network? | Revisit the purpose and logic of the network, exploring questions such as: What is the problem this network is seeking to address? Who are the stakeholders? What will be different for them if this is a success? Outcome-based planning tools such as Outcome Mapping or Theory of Change approaches could help. |
| What are our motivations and expectations? | Facilitate a discussion that asks participants to share: what I hope to gain from involvement; what my organisation hopes to gain; what I/my organisation expects to contribute; what I/my organisation expects others to contribute. |
| What’s the organisational context in which we will deliver this? | Use creative ways such as metaphor or pictures to explore organisational culture and values (e.g. “if my organisation was a machine/animal/season/colour it would be…”). Draw organograms (from memory) of each organisation, including people outside the organisation who might affect the network. Compare with each other. |
| How will we actually deliver this? | Explore what each organisation thinks it will be contributing on an ongoing basis and how it will do it. For example, describe “a day in the life of a KSO”. |
| How will we make decisions? | Explore scenarios of different decisions, from big decisions such as adopting a new partner to small ones such as adding an item to a website/newsletter. Who would be involved? |
| How will we work together? | Identify what existing experience partners already have of working in partnership. One approach could be sharing stories about highs or lows of partnership working. What kinds of partnership do they have, how is this similar or different, what works and what doesn’t? Consider generating principles and strategies for working together, including communication methods and etiquette. |
| What do we expect of the lead partner? | Explore what power the lead organisation has vis-à-vis the other organisations. How can this be balanced? What responsibilities does it have? |
| What do we do if things go wrong? | Build scenarios of what could go wrong. Explore different ideas of “wrong”, then discuss how each could be addressed. Relate to principles for working together. |
| How will we learn in this process? | Explore what approaches to learning and professional development each organisation has. Look at simple models for learning, such as the “experiential learning cycle”, and see how they can apply to the partnership. Think about how to learn before and during the process, looking at methods such as After Action Reviews, peer assists, etc. |
| How will we monitor and evaluate our work? | Discuss understandings of M&E (which are often very different), partners’ experience of it, and what they expect to contribute. |
Catherine Fisher was Capacity Support Coordinator for the Impact and Learning Team at IDS. She left IDS at the end of September 2012 to join Amnesty International as their new International Capacity Building Co-ordinator
Thursday, 17 May 2012
Reflections on the K* summit: beyond K-Star-wars?
By Catherine Fisher
It was only a matter of time before someone made the KStarWars joke at the K* Conference that took place at the end of April in Canada. I’m only sorry it wasn’t me!
However, the K* Conference was notable not for its battles, but for the sense of commonality that emerged among the participants and for the momentum for future action it generated.
The K* summit aimed to connect practitioners working across the knowledge-policy-practice interfaces to advance K* theory and practice. Its aim was to span the different sectors and contexts and different terms under which this kind of work is undertaken, for example Knowledge Mobilisation (KMb), Knowledge Sharing (KS), Knowledge Transfer and Translation (KTT). Hence K*: an umbrella term that attempts to bypass terminology discussions.
This blog post provides links to some of the great reporting from the event, acknowledges some of the critiques that the event raised and points to the next steps for K*.
- K* about process not outcome (blog link)
Another great metaphor courtesy of Charles Dhewa. The importance of multiple knowledges, knowledge hierarchies and the role of K* actors in helping to facilitate interactions between those knowledges was a recurring theme. E.g. see video by Laurens Klerxx talking about multiple knowledges and innovation brokers.
- Transnational comparison of K* practices (video link)
The conference illustrated the range and scope of K* work. For example, Jacquie Brown of the National Implementation Research Network, who works helping communities to implement science, has learnt how this work fits within the broader scope of K*. For me, seeing how different kinds of K* roles are played, and how they intersect, is important.
In this video, I share some of my reflections at the time: brokering in the Canadian context, including an example of brokering at the point of research commissioning; power dynamics in brokering; and the way that the informing role of knowledge brokering gets a “bum rap” compared to more relational knowledge brokering work. I also get distracted by bangs, crashes and the emergence of breakfast!
Critiques and the importance of engaging with them
The conference has generated some robust critiques. For example, Enrique Mendizabal sparked a discussion on his blog, On Think Tanks, with a range of critiques, including whether knowledge brokers are required, how knowledge is shared, and a critique of the elitist professionalisation of this field. Scroll to the bottom of his blog post to read the responses, including mine. Meanwhile, Jaap Pels argued that the nature of the debate at K* was pretty basic knowledge-sharing stuff.

I think both of these critiques raise interesting points, but I think they constitute arguments for K*, not against it. K* recognises that knowledge work is changing and proliferating, and that there is considerable experience and understanding that is not shared across the different spaces in which the role is played. It aims to bring together bodies of expertise (for example, that which Jaap Pels points to) to raise the game of all practitioners. It will hopefully provide spaces for debate and engagement with the kinds of critiques that Enrique raises.
So what next for K*?
The conference generated a range of areas for further collaborative action, and plans for where the K* initiative goes from here.
Areas for further collaborative action included:
- Understanding impact: a group agreed to share the data collection tools they are already using. I’ll be participating in this group, building on the work of the Knowledge Brokers Forum
- K* in developing countries: a predominantly African group explored the particular dimensions of K* work in their contexts generating a number of action points
UNU, who have led this process so far, remain committed and aim to secure the support of UNU’s governance. The World Bank has already provided financial support. Support from such international bodies is important, as it will embed the international nature of this initiative, though it is not without its risks!

Practical challenges include maintaining ownership and momentum on behalf of the largely volunteer force taking it forward for now, identifying its niche and building connections around such a fragmented field of practice.
More fundamental challenges lie in ensuring that it really can generate value that will improve knowledge-policy-practice interfaces, rather than providing a talking shop for elitist actors.
Catherine Fisher is a member of the K* Conference International Advisory Committee.
Thursday, 3 May 2012
More on Change: systems, principles, and learning
By Elise Wach
I am back to talk about change, following on from my previous postings (Change is hard and Change is hard but not impossible) on how you change a sector. Here are some reflections from the latest IRC Triple-S learning retreat.
IRC is attempting to change the way water is provided in rural communities by:
- changing the way things work at the level of rural communities so that water is available to everyone indefinitely, and
- changing the water sector to enable this to happen.
I think it is easy to get lost in the frameworks and theories that attempt to explain how to achieve these changes. For example, there are a number of different frameworks proposed for influencing policy and measuring that influence (for example Crewe and Young 2002, Court and Young 2003, Steven 2007, Gladwell 2000, etc.). These provide useful insights as to what might make a difference, but at the end of the day we need to remember that these are complex and unique systems that we are trying to change: so there is no ‘best practice’!
Danny Burns’ seminar at IDS yesterday on "How Change Happens" helped remind me that while it is not necessarily labelled as such, Triple-S is essentially using a Systemic Action Research (PDF) approach: their larger (systems) view of the water sector and iterative learning processes enable them to recognise and respond to opportunities for change.
Image from: http://www.rallytorestoresanity.com
In attempting to influence policy, for example, Triple-S is not just looking at written policy documents (although this is one piece of Triple-S work). But they recognise that policy change results from and is indicated by changes in discourse, perceptions, agendas, networks, political contexts, and institutions. And that a multitude of stakeholders are involved in those changes, including journalists, NGO workers, researchers, finance ministers, and even people who post on Twitter. They recognise that certain events (such as a change in government) can greatly accelerate or completely block policy changes.
They also recognise that the right evidence and information, delivered to the right people at the right time, could make a difference. So the Triple-S approach is built on the assumption that changing policy doesn’t entail following a formula, but instead recognising and responding to opportunities and trigger points.
At the rural community level, Triple-S is trying to ensure that the rural water sector takes into account a variety of factors so that water services are provided to everyone indefinitely. This means looking at life-cycle costs, mechanisms for transparency and accountability, possible alternative service providers, accounting for the multiple uses of water, and so on.
But does viewing these issues with a systemic lens mean that we become paralysed by the complexity? Danny Burns pointed out yesterday that the key is to focus on action rather than on consensus: to focus on the actions that different actors can take that can change the system. Or as Bob Williams explains in his Ottawa Charter approach (doc), it will be a ‘strategically selected jigsaw of people and organisations doing what they are most effective at’ that will create lasting change, rather than Triple-S trying to change the sector on its own.
Triple-S isn’t trying to get consensus around a specific approach to achieving sustainable rural water supply, but is instead trying to get everyone on board with a basic set of principles for sustainable services, while providing a range of resources and tools and building capacities (look out for new trainings in the near future) to put those principles into action. They are leveraging existing institutions and structures, and working closely with individuals and organisations to facilitate ownership.
But getting people to wrap their heads around the concept of changing their principles is a big obstacle. People want tools and approaches that they can go put into action, and while Triple-S is providing a range of these, success starts with viewing rural water supply completely differently: it isn’t ‘the Service Delivery Approach’ but ‘a Service Delivery Approach’.
Another obstacle Triple-S is facing relates to the way in which evidence is perceived. So there are people who say, ‘this is all fine and good in theory, but is it really possible? Can we really achieve both sustainability and scale? Where is the evidence?’ Evidence is a strong word. Today, it usually refers to a call for a ‘rigorous’ approach like a randomised control trial. But if you want to find out if services are provided forever, then how long do you have to wait for the RCT results? And here is where I cannot resist but refer to the brilliant example of the limitations of RCTs – would you doubt that a parachute would make jumping out of a plane safer just because an RCT has not proven it? I think this highlights the need for the development community to reflect on what we consider to be evidence.
But I don’t think that these obstacles are insurmountable, especially given that Triple-S’ approach enables it to recognise and respond to opportunities and challenges while remaining focused. One of the Triple-S pillars is for the rest of the rural water sector to have ‘a strong learning and adaptive capacity’. I see this as a pre-requisite for success in the other two pillars, and in the rural water sector in general. But achieving this is... well, complex.
Monday, 19 March 2012
(Still) Seeking a cure for Portal Proliferation Syndrome
By Susie Page
As co-editor to a forthcoming issue of the IDS Bulletin (autumn 2012) which will be focusing on ‘research communications’ (all facets of it!), I am currently reading about some very exciting work around this.
During our Call for Submissions, Geoff Barnard, former Head of the Information Department here at IDS, and now Head of Knowledge Management at the Climate and Development Knowledge Network (CDKN), sent me a link to his blog on Seeking a Cure for Portal Proliferation Syndrome.
Geoff aptly captures a dilemma that anyone working in research communication and knowledge brokering will be familiar with – the temptation to solve some of the challenges around research communication and uptake in development policymaking and practice by gathering all the relevant research into one super, sophisticated website. The underlying assumption being that if only people could access the research (at the click of a button), then the rest will follow.
He obviously hit a nerve, as there was a stream of responses to his blog, including one from Catherine Fisher who also contributes to this blog, highlighting her work “Ten Portal Pitfalls” – I would urge you to read Geoff's blog and contribute to the debate.
Can we 'scientifically' test for what works when it comes to research uptake?
Coming from a medical background, I found the words “cure” and “syndrome” immediately resonant: in the medical world, we are acutely aware of persistent diseases and syndromes, with millions of pounds spent on testing for cures (although, take it from me, the health sector struggles equally with the business of research uptake! See Lomas on this, for example).
But does similar testing occur in this sector – the one that wants to get good research results out of the lab and into development policy and practice? Is it even feasible to conceive of a scientific test for something as amorphous as “knowledge” and “evidence”?
Going back to 'the cure', we should perhaps be asking whether portals are a syndrome or a symptom. If they are a symptom, the problem could be that we think research is not being used in policymaking and practice because people don’t have access to it. Yet surely the very proliferation of portals highlights that this isn’t the problem – after all, how will one more portal succeed where others have failed? What do we know about the successes and failures of portals? What do we actually know about the relationship between portals and research uptake?
With people still racking their brains over measuring the impact of research, there is room for some robust ‘scientific’ testing of what is and isn’t effective for supporting research uptake, and of the place (or otherwise) of portals within this. We recently teamed up with 3ie to carry out an experiment on the effectiveness of the ubiquitous ‘policy brief’ (even more ubiquitous than portals, I would argue). The results are just beginning to come through. Watch this space for more on this – we will of course be sharing our findings with you!
Thursday, 8 March 2012
Convening research excellence and beating the budget squeeze: 15 top-tips on managing expert e-discussions
Guest post: Adrian Bannister, Eldis Communities Co-ordinator, draws on experience from IDS Knowledge Services to argue that online discussion events can successfully connect global thinkers without costing the earth
We all know that a demanding funding environment, where delivering more for less is the norm, can often conflict with personal and institutional commitments to the environment and to diversity agendas.
Digital technologies have provided development institutions with many opportunities for being more effective and efficient (in a broad sense). But when it comes to the efforts to substantively engage others in co-critiquing, re-constructing and advocating for research excellence, it seems that we still rely on face-to-face gatherings.
Why aren't we making more use of the Internet for debate and discussion?
As an approach for supporting policy influence to instigate real change, online discussions seem to offer benefits that real-world events can't.
For starters, they enable us:
- to instantly connect disparate individuals from around the globe
- to enable participants to engage around their existing commitments
- to provide a safe(r) and empowering space for private discussions
- to instantaneously document the event for retrospective viewing
- to avoid the substantial time, cost and environmental overheads of bringing individuals together
Unfortunately, generating online discussion is much harder than it might seem. Despite the plethora of new tools for commenting and contributing to the web, stimulating user generated content remains a real challenge.
It is worth remembering that while the 'like button' sets the standard for easy / ubiquitous interaction across the web, in October 2011 Facebook chose to quietly remove its discussion functions completely.
A quick Google search shows that genuinely great e-discussions are rare – instead the web is littered with numerous poorly received examples with few if any comments.
What factors help make for 'great' online debates? What can we learn from each other?
I’d like to share some lessons that IDS Knowledge Services and partners have learned in our recent experience convening experts' e-discussions with prominent researchers / actors / practitioners as participants.
During 2011 and into 2012 we have used the Eldis Communities platform to host several such events on a range of topics, including: climate change, food security, philanthropy, social movements and gender mainstreaming. The events have been commissioned by the likes of Irish Aid, Oxfam and The Rockefeller Foundation.
While each is unique, they all share several commonalities. They are co-produced with partners, are funded by / contribute to wider programmes of work and are held behind closed doors.
One event, focusing on Gender and Food Security*, stands out in particular for teaching us how to stimulate participants effectively. In that event, which lasted just 48 hours, a group of around 30 individuals collectively generated nearly 100 substantive contributions across 3 threads.
Top-tips for successful experts' discussions:
Here are some 'top-tips' that we learnt from this and other discussions (PDF). They highlight the wide range of things an e-discussion project team can and should do to maximise the chances of success.
If you don't have time to read all 15 points, here are my personal top three:
- Plan it with the same attention to detail that you'd give real-world events – e-discussions, like all social events, can buzz with energy or descend into an awkward silence. The time-pressure during them is intense and attempts to revive a flagging situation can look clumsy. So set things up well in advance and you'll be less likely to need to 'get someone talking' during the event itself.
- Scrimping on the budget is a false economy. While a successful online discussion will cost a fraction of a real-world event, it pays to think carefully about the particular roles required and to recruit a project team as appropriate. Fund them generously (in staff time / direct costs), give them plenty of lead-in time and reserve a slot after the event for reflection.
- Ensure your VIPs (Very Important Participants) feel special. As they are almost certainly super busy, persuade them to give up some of their precious time by making your invitations personal (not just the greetings!), mention those who recommended them, and highlight other eminent participants who will be involved.
** Thanks to Susanne Turrall (convenor) and Carl Jackson (facilitator) who played critical roles in this event and helped to pull together our learning from it.**
Tuesday, 3 January 2012
Buzzing about brokers: knowledge brokers reach across silos
By Catherine Fisher
Early in December I found myself in the unusual situation of being in a room full of people talking about knowledge brokering at a conference entitled "Bridging the gap between research, policy and practice: the importance of intermediaries (knowledge brokers) in producing research impact" * organised by the ESRC Policy and Research Genomics forum .
The event brought together people from UK universities, NGOs and public bodies ranging from health to education, plus a sprinkling of upbeat Canadians. The development sector was well represented, with DFID the best represented of UK government departments, perhaps reflecting the emphasis placed on evidence-based policy and research impact by DFID itself and within the development sector more broadly.
It was the first time I had attended a conference of this kind in the UK so I was unsure what to expect. We know that knowledge about knowledge brokering seems to be silo-ed, not crossing between sectors. There are also differences in terms used to describe this kind of work. So as a presenter I was nervous I would be stating the obvious to a crowd who knew far more than I did. As conversation and coffee flowed, my fears were allayed: I had a lot to learn but, as I reflect below, the debates in the development sector I have been involved in are not miles away from debates elsewhere and in fact have something to add.
I presented as part of a panel exploring Knowledge Brokering in Development Contexts, alongside Kirsty Newman from INASP, Ajoy Datta from ODI and Matthew Harvey from DFID (all presentations are available on the conference webpage; our session was 3E).
Here I share 5 of my reflections from the event:
The term "knowledge brokering" encompasses a wide range of action
I was not the only person to reflect that the term "knowledge brokering" was being used differently by different people. Many were using it to describe what I understand to be “research communication”: trying to ensure a piece of research is effectively communicated so that it has impact. This contrasts with how I understand knowledge brokering, which I see as helping to ensure that people are able to access research when they need it and that decision-making processes are informed by a wide range of research. Put simply, it's the difference between seeking to change a policy or practice to reflect the findings of a piece of research (research impact) and seeking to change the behaviours of those in policy processes so that they draw on a wide range of research (evidence-informed policy). There are of course grey areas between these extremes; for example, knowledge brokers within universities who seek to ensure that the knowledge of that university is mobilised for the community in which it is located. The Knowledge Mobilisation Unit at York University in Canada is a great example of practice that effectively sits between the extremes I have described.
Why we need labels (even if we hate talking about them)
Which brings me to my next point! People resent the term "knowledge brokering" as much as they resent talking about labels: for an interesting debate about the value of a label see the KMBeing blog. Personally, I feel that without a term to describe this kind of work we would be unable to come together to discuss it (what would you call the conference or network?!). Conversely, if we use the same term to discuss totally different things we risk confusing rather than clarifying our work. The summary of the Knowledge Brokers Forum discussion about terms and concepts is a good attempt to clarify and understand terms. I still feel that language is the main tool we have to communicate our ideas, and that it matters!
Consideration of power and politics: development sector has something to add
I was a little nervous that the debate about knowledge brokering would be very advanced, and that the insights I shared in my presentation would be stating the obvious. Yet this did not seem to be the case: many of the issues raised during plenary and earlier sessions were familiar (e.g. the pros and cons of the policy brief as a communications tool, how to motivate researchers to communicate their work, etc.). The presentations from the development sector raised two areas in particular that did not appear in other presentations I attended. Firstly, an attempt to understand politics with a big and small “p”: looking at the contexts and motivations around decision-making. Secondly, a consideration of power and equity within knowledge brokering, asking “whose knowledge counts?”
What is a good knowledge broker? A fleet-footed, cheerleading, creative therapist!
A highlight for me was the presentation by David Phipps (York Uni) and Sarah Morton (Centre for Research on Family and Relationships) exploring the qualities of a good knowledge broker (pdf). From their experience it is someone who is fleet-footed, a cheerleader, creative, and a therapist. That is they have soft skills or competencies rather than specific technical capacities (although they will need these too!) plus a passion for the area, tact, negotiation and commitment. Like David and Sarah, I think the soft skills of knowledge brokers are key; a paper I wrote last year entitled Five Characteristics of Effective Intermediary Organisations (PDF) explored how these soft skills can be supported and enabled at an organisational level.
Why don’t knowledge brokers practice what they preach?
As part of a devastating critique of the ESRC “Pathways to Impact” toolkit, Dr Simon Pardoe pointed out how little reference it made to evidence from social science that is relevant to the art and science of effective knowledge brokering. This observation – that knowledge brokering somehow has no need to be evidence-based itself – has emerged a number of times, for example in the summary of the Knowledge Brokers Forum discussion, which recognised the need for “greater linking of theory and practice”. I wonder whether the hybrid nature of the role means there are so many potential bodies of knowledge to draw on that people don’t draw on any! Sarah Morton and David Phipps talked of their practical ways of addressing this, through a “practice what you preach” Community of Practice and “learning days” respectively. They have a forthcoming paper to watch out for.
Any of these areas could be a blog posting, a paper or indeed a PhD themselves – I have just skimmed the surface of a great day. I hope the enthusiasm generated and connections formed will build towards greater understanding of the theory and practice of knowledge brokering.
Links:
Archive of tweets posted from the conference : contains some interesting thoughts and links to resources.
* The long titles of these events reflect the difficulty of describing them and the lack of shared language – check out the conference I organised in collaboration with HSRC in 2008, which laboured under the title “Locating the Power of In-between: how research brokers and intermediaries support evidence-based pro-poor policy and practice"
Wednesday, 2 November 2011
Exploring the black box together: evaluating the impact of knowledge brokers
[Cartoon by Sidney Harris (2007)]
I love this cartoon!
It seems to capture the idea of the "black box" that lies between the activities knowledge brokers and intermediaries undertake and the outcomes and impacts they seek to achieve. That’s not to say that they don’t achieve outcomes in the real world, rather that the pathways by which their work brings about change are difficult to unpack and evaluate.
The Knowledge Broker’s Forum (KBF) has started exploring this "black box" of how to evaluate the impact of knowledge brokers and intermediaries in an e-discussion running from 31 October until 9 November. I am (lightly) facilitating this discussion, along with Yaso Kunaratnam from IDS Knowledge Services.
If you would like to participate, you can sign up on the forum's website; it's open to anyone with an interest in this area.
Challenges in evaluating impact
We know there are a lot of challenges to evaluating the impact of knowledge brokering. Some stem from the processes (psychological, social and political) through which knowledge and information bring about change, the contested nature of the relationship between research and better development results, and the difficulty of identifying contribution to any changes in real-world contexts. This is particularly challenging for actors that seek to convene, facilitate and connect rather than persuade or influence.
As well as these quite high level challenges, there are the very practical issues around lack of time and resources to dedicate to effectively understanding impact. These challenges are explored in a background paper (PDF) I prepared as food for thought for those taking part in the e-discussion.
With an e-discussion amongst 400+ knowledge brokers from all over the world, I am not yet sure where discussions will go, but I am hoping that it will shed some light on the following areas:
Breadth and depth of impact and outcomes
How far do people go to identify the ultimate outcomes of knowledge brokering work? I feel we can certainly go beyond immediate impact (e.g. personal learning) to push towards what that resulted in; however, I wonder whether it is meaningful to start looking at human development and wellbeing indicators. It will be interesting to see how far others are going.
Understanding behaviour change
If knowledge brokering is about behaviour changes that ensure greater engagement with research evidence, how are people defining those behaviour changes and how are they measuring them? Are we too easily impressed with stories of information use when these could in fact hide some very poor decision-making behaviours?
Opportunities for standardisation of approaches and data collection
If people have come up with ways of doing this, is there any appetite for standardising approaches to enable greater comparison of data between different knowledge brokering initiatives? This would help us build a greater understanding of the contribution of knowledge brokers beyond the scope of any one broker’s evaluation.
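To make the idea of standardised data collection concrete, here is a purely illustrative sketch (not any existing initiative's format) of what a minimal shared outcome record might look like; the field names and categories are assumptions for illustration only, but a common structure like this is what would let different brokering initiatives pool and compare their evaluation data.

```python
from dataclasses import dataclass

# Hypothetical shared record format that two knowledge brokering
# initiatives could both emit, making their outcome data comparable.
@dataclass
class OutcomeRecord:
    initiative: str          # which brokering initiative logged this
    activity: str            # e.g. "e-discussion", "policy brief", "portal"
    outcome_type: str        # e.g. "awareness", "capacity", "behaviour_change"
    evidence: str            # free-text description of the observed change
    confidence: str = "low"  # self-assessed strength of attribution

def summarise(records):
    """Count outcomes by type across initiatives - the kind of simple
    cross-initiative comparison a shared format would make possible."""
    counts = {}
    for r in records:
        counts[r.outcome_type] = counts.get(r.outcome_type, 0) + 1
    return counts

records = [
    OutcomeRecord("Eldis e-discussion", "e-discussion", "awareness",
                  "Participant cited thread in a later workshop"),
    OutcomeRecord("Policy brief trial", "policy brief", "behaviour_change",
                  "Ministry requested a follow-up evidence summary"),
    OutcomeRecord("Eldis e-discussion", "e-discussion", "awareness",
                  "New cross-sector contact formed"),
]
print(summarise(records))  # {'awareness': 2, 'behaviour_change': 1}
```

The point is not the code but the design choice it embodies: once outcome types and confidence ratings are drawn from an agreed vocabulary, aggregation across very different initiatives becomes a trivial operation rather than a bespoke research exercise.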
I’ll also be interested to explore and challenge some of my assumptions – in particular that building some kind of theory or map of change is an important starting point for defining and then seeking to evaluate impact. This has been discussed previously on this blog and is a hot topic at the moment.
Our discussion will face challenges – not least that the huge variety of types of knowledge brokering, and of contexts in which it is undertaken, may mean there is not enough common interest. But I am sure there is a lot of experience in the group that can be brought to bear on these questions and that, in 10 days' time, we will have a better idea of what is known, who is keen to explore this further and, hopefully, how we could move forward to develop our understanding in this area.