Thursday 27 February 2014

Influencing and engagement: Why let research programmes have all the fun?

By James Georgalakis

Is it time for us to start applying some of the strategies for bridging research to policy used so effectively at a project level to our institutions as a whole? Could a more coherent approach to policy engagement and research communications applied cross-organisationally lead to greater impact?

There is a chasm to be crossed in many research organisations and think tanks. It is the deep gap that exists between high level institutional strategies and project based impact plans. Many project funders pile pressure onto researchers to develop theories of change and detailed research communication strategies, but there is little incentive for research organisations to look at the policy and knowledge landscapes they operate in at an institutional level. Is it really only campaigning organisations, like international NGOs with their advocacy frameworks, that need to take a holistic approach to policy engagement?

The challenge for many research-producing organisations, made up of multiple centres, projects and programmes, is how to be greater than the sum of their parts. Even academic institutions and think tanks with a clearly articulated mission of actively engaging in policy discourse risk entirely vacating key policy debates, or abandoning prime influencing opportunities, when certain projects come to an end.

Research programme vs institutional priorities

I recently co-facilitated a series of workshops in Nepal as part of the Think Tanks Initiative Policy Engagement and Communications South Asia Programme, with an inspiring group of researchers and communications professionals from fifteen South Asian think tanks. They were all interested in the development of institutional level engagement strategies and were simply not willing to restrict their planning to a specific project. Or as one participant put it, “Why should the research programmes have all the fun?”

They each developed a clear policy engagement goal, or set of goals, that reflected their vision of change. For some, these were softer changes in the nature of the policy discourse; for others, quite specific changes in the direction of policy in a particular area. They then mapped their context and gained a real understanding of how change happens in relation to their hoped-for long-term impact. The lack of a specific set of project or research goals did not seem to dilute the richness of their discussions. But it did lead to a different set of answers. They each looked at their emerging institutional-level impact strategies in relation to an earlier exercise that had assessed their capacity for policy engagement and communications. Areas they needed to invest in at an institutional level, whether social media, publications or knowledge management skills, quickly emerged, as did key relationships, networks and knowledge of policy processes they needed to grow.

See the Center for Study of Science, Technology and Policy from India take a dizzying six second journey around this strategic planning process: https://vine.co/v/MZzY3xhLrz6

Breaking down barriers

This experience also helped me to reflect on IDS’ approach to institutional level strategic planning. We too have been on this journey in trying to identify a wider set of engagement priorities. Take the post-2015 debate, for instance. Here is a prime example of something that cuts across projects, programmes and research centres. By actively prioritising it in a cross-institutional strategy and mapping out our strengths and weaknesses and the key areas of potential engagement, whether in the media, UN processes or the UK Government and Parliament, we have been able to add real value to the work of our project teams and their partners. Some of these groups are explicitly focused on this debate, such as Participate. Others find this framing essential, using it to push their research up the agendas of key policy audiences. We have been able to create a more enabling environment for their work by actively identifying key influencing and engagement opportunities (and challenges), building relevant networks and alliances, and prioritising the timely profiling and intelligent framing of their outputs.

This process has also led to a great deal of cross-organisational collaboration, breaking down the barriers between research teams, projects and multi-sited research centres. So, whilst all our engagement and communications activities remain entirely based on our research (there is no retrofitting of evidence to advocacy objectives here), we are not wholly driven by the ubiquitous project logframe, which cannot always facilitate the type of policy entrepreneurship needed to engage effectively at a national or international level.

There is a wealth of academic papers, blogs, donor guides and other materials on effective research communications and the incorporation of impact strategies into projects. However, there is far less about cross-institutional approaches. Some commentators claim that cross-institutional strategies focused on policy outcomes are simply too broad, but is it time to challenge this? I would love to hear from those who have experience in this area. We need to share our learning and explore ways that researchers and communications professionals can work together to build a strategic framework at an institutional level to support those committed to making sure their research makes a difference.

James Georgalakis is Head of Communications at IDS
Follow James @ www.twitter.com/bloggs74

Other posts by James Georgalakis on research communications:

The Guardian
Has Twitter killed the media star?
Marketing: still the dirty word of development?

On Think Tanks
Is it wrong to herald the death of the institutional website?
How can we make research communications stickier? 

Impact and Learning 
Digital repositories – reaching the parts other websites cannot reach

Tuesday 18 February 2014

Open knowledge spells murky waters for M&E

By Ruth Goodman

In mid-January I ran a session on monitoring and evaluation at the Eldis Open Knowledge Hub Partnerships Meeting. The meeting brought together a group of individuals united by a concern with opening up access to research evidence and, in particular, increasing the visibility of research from developing countries.

The partnerships meeting was undertaken as part of the Global Open Knowledge Hub (GOKH) – a three-year, DFID-funded project. The vision for GOKH is that IDS and its partners will build on their existing information services to create an open data architecture for exchanging and sharing research evidence – the so-called Hub. For insight into the issues that need to be addressed in trying to set up an open knowledge hub, see Radhika Menon’s recent blog The Global Open Knowledge Hub: building a dream machine-readable world.

Our hope is that through the open data approach the partners and third-party users of the Hub will be in a position to extract and re-purpose information about research evidence that is relevant and contextual to their audiences. This in turn will contribute to research content being more visible, thereby enabling otherwise unheard voices to contribute to global debate and decision making. My session on M&E, then, was concerned with how we can know whether this is being achieved.

M&E is great. It allows you to track and evidence what works and what doesn’t, so that you can learn, improve and grow. To reach this end, though, you need to know how to evaluate your work. When it comes to approaching M&E for the Hub, the waters are murky.
Open data approaches are still relatively new, and the body of evidence for M&E when working with open data, let alone the specifics of evaluating and learning from this sort of Hub model, is sparse. The traditional technical methods of tracking information on the internet break down when you make the data open. By making data open you give up most, if not all, of the control over how your data is used, implemented and displayed. There are ways to implement tracking, but these are easily circumvented, so the statistics you can obtain do not reliably represent the whole picture. So, depending on how the content is implemented, if organisation A is consuming data from the Hub that organisation B has contributed, the ‘hits’ may register on organisation A’s web statistics rather than organisation B’s.

Even if and when we identify the most suitable metric for measuring impact in open knowledge, as we discussed at the workshop, numbers aren’t really enough. Web metrics are unreliable at the best of times, and their value lies in spotting trends in behaviour, not in demonstrating impact. To engage with quantitative data, people need to be clear on what that data is telling them. If open knowledge data is not the most exciting thing in the world for you, or is something you don’t quite understand, then numbers are likely to do little to inspire understanding of, or perceived value in, open data initiatives such as the Hub. However, if you can tell a story about what the Hub has allowed users to do, then people have something real to engage with. Not only will they have a better understanding of the nature and value of your work, they are more likely to be motivated to care. At the workshop we discussed collating stories of use as one approach to M&E that might allow us to translate the value and challenges of open knowledge work to a wider audience.
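To make the attribution problem concrete, here is a minimal sketch in Python. The log format and field names are invented for illustration; it simply shows how logging on the Hub side could credit ‘hits’ to the partner who contributed the content, even when the page view itself happens on the consuming organisation’s website.

```python
from collections import Counter

# Hypothetical example: suppose each time the Hub serves a record it logs
# which partner supplied the content ("provider") and which organisation
# requested it ("consumer"). These field names are illustrative only.
access_log = [
    {"provider": "org_b", "consumer": "org_a", "record_id": "doc-101"},
    {"provider": "org_b", "consumer": "org_c", "record_id": "doc-101"},
    {"provider": "org_d", "consumer": "org_a", "record_id": "doc-202"},
]

# Aggregating by provider credits the hits to the contributing partner,
# even though the page views happen on the consumers' websites.
hits_by_provider = Counter(entry["provider"] for entry in access_log)
print(hits_by_provider)  # Counter({'org_b': 2, 'org_d': 1})
```

Note that this only counts fetches from the Hub, not onward reuse of cached or re-published data, so even hub-side logging understates the whole picture.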

Other possibilities we discussed were around helping and supporting each other. If partner organisation A is featuring content from organisation B, delivered by the Hub, then potentially A could tell B how many hits that content is getting. If doing some M&E of their own, could partner A even add a couple of questions to their user survey about partner B’s data? And what about the experiences and perceptions of those partners using the Hub? Partner organisations’ own reflections and observations are as important as those of users in gaining a full understanding of the value and potential of the initiative.

Moving forward, our aim is to convene an M&E working group which, among other things, could serve as a community of good practice where we can be open with each other about our evaluation efforts. By sharing our experiences of different M&E approaches and their challenges, we can work towards a position where we know the influence of this work, can translate it for others in a comprehensible way, and can start to identify what we need to do to realise the potential of this exciting new arena.


Ruth Goodman is Monitoring, Evaluation and Learning Officer at the Institute of Development Studies

Thursday 6 February 2014

The Global Open Knowledge Hub: building a dream machine-readable world

by Radhika Menon

The word ‘open’ has long been bandied about in development circles. We have benefited in recent years from advocacy to increase open access to research articles, and from open data shared by researchers and organisations. But open systems that enable websites to talk to each other (e.g. via an open application programming interface, or API) have been harder to bring into wider use, simply because they are not built for non-technical users.

The International Initiative for Impact Evaluation (3ie) recently joined eight other partners in the new, DFID-funded, Institute of Development Studies-led Global Open Knowledge Hub project to discuss several issues related to open systems. It was no surprise that all the partners spent quite a bit of time coming to their own understanding of an ‘open system’ and an ‘open hub’.

Put simply, the Global Open Knowledge Hub project will build an open system for sharing data between participating partners and with the wider world. As each of the participating partners offers knowledge services, there are thousands of research documents, articles and abstracts on our websites. To facilitate the sharing of these knowledge products, an open, web-based architecture will be built so that we can all go to one place – the Hub – and find high-quality, diverse and relevant content on any chosen topic available from the partners.

To understand how the sharing works, step out of the human-readable world and step into the machine-readable world. If a machine can be programmed to search and read through the data, then the amount of data that can be processed starts to boggle the mind. The Hub is a place where huge amounts of data in machine-readable formats can be queried, accessed, used and combined with other data. If you are interested in climate change, one of the topics on which the Hub project will focus, a huge amount of the research that exists on climate change, spread across continents, disciplines and sectors, can be accessed in a matter of a few seconds. The sheer scale of it is awe inspiring. Think tons and tons of data, woven together in a kind of semantic web. This is what the web 3.0 world will look like.
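As a rough illustration of what ‘machine-readable’ means in practice, here is a minimal Python sketch of the kind of query a programme might send to the Hub. The endpoint, parameters and response fields are all hypothetical; the real Hub API may look quite different.

```python
import requests

# Hypothetical Hub endpoint; the real URL and query parameters would be
# defined by the Hub's actual API documentation.
HUB_URL = "https://example.org/openapi/search/documents"

response = requests.get(
    HUB_URL,
    params={"q": "climate change", "format": "json", "num_results": 10},
    timeout=30,
)
response.raise_for_status()

# Each record carries enough metadata for a machine to filter, combine
# and re-purpose it alongside data contributed by other partners.
for doc in response.json().get("results", []):
    print(doc.get("title"), "-", doc.get("publisher"))
```

A human reads one abstract at a time; a script like this can sweep across every partner’s contribution in one request, which is where the scale comes from.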

All of this might sound like a grand vision. And as partners involved in pioneering work, we are aware that we need to get several things right for this vision to be realised:

Understand demand

As Edwards and Davies say in this paper, the current understanding of open data is primarily from the supply-side perspective. It’s not enough to just put out large quantities of data; we also need to get a better sense of the demand for the data. Who are our potential users? What kind of data would they need? What will they use it for? These are questions that need some serious investigation.

The IDS Knowledge Services Open Application Programming Interface is an example of a successful open system in the development sector. The Open Application Programming Interface (API) provides open access to tens of thousands of development research documents in its repository. According to Duncan Edwards, IT Innovations Manager at IDS, there is good demand for the IDS open API from both Northern and Southern development organisations.

But the data has been accessed primarily via several applications: a mobile web application, a regional document visualisation application and a tag cloud generator. These have been built to make the data accessible to non-technical users. We need more of this to happen to make the data in the Hub more user-friendly and to spur demand.
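To illustrate the sort of thing these applications do, here is a small Python sketch of the idea behind a tag cloud generator. The record structure is invented, standing in for real API responses.

```python
from collections import Counter

# Invented records standing in for documents fetched from an open API.
records = [
    {"title": "Drought and livelihoods", "themes": ["climate change", "agriculture"]},
    {"title": "Flood early warning", "themes": ["climate change", "disasters"]},
    {"title": "Crop insurance schemes", "themes": ["agriculture"]},
]

# Count how often each theme appears, then scale counts into font sizes:
# the essence of a tag cloud.
counts = Counter(theme for record in records for theme in record["themes"])
biggest = max(counts.values())

for theme, count in counts.most_common():
    size = 12 + int(18 * count / biggest)  # font size between 12pt and 30pt
    print(f"{theme}: {count} document(s), rendered at {size}pt")
```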

Get IT and content providers to work together 

These open systems are not made for the ‘non-techie’, average user. When I first looked at the open API of a website, the code that came up on the screen did not make any sense to me. But there is clearly a lot the system can offer for generating useful content, for those with the technical skills to use it. For this to happen, researchers and communicators will have to work alongside a technical team and play a more active role in the curation of data. That is the only way the potential of the system can be fully explored.
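A toy example makes the point: the same response that baffles a non-technical user can be made perfectly readable by a few lines of presentation code written once by a technical team. The response below is invented for illustration.

```python
import json

# What a non-technical user sees when they open an API response directly:
raw = '{"results": [{"title": "Adaptation finance", "author": "A. Researcher"}]}'
print(raw)  # opaque, machine-readable text

# What a thin presentation layer can turn it into for everyone else:
for doc in json.loads(raw)["results"]:
    print(f"{doc['title']} by {doc['author']}")
```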

Map taxonomies

Research is often labelled according to the needs and interests of its users. So the same piece of research may be tagged as agriculture development, rural development or farmers. In the machine-readable world, this becomes a crucial difference that prevents data on the same themes from linking together. The taxonomies we use to describe information change depending on the organisation, sector and country.

So for the Hub we need a system for classifying data that maps these different vocabularies, to ensure that data on the same theme and topic can find each other and hang together.
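In code, such a mapping can start out as something as simple as a look-up table. The sketch below, in Python, uses invented vocabularies to show how one partner’s local tags might be translated into shared Hub themes.

```python
# Illustrative mapping from partners' local tags to shared canonical
# themes, so records tagged differently can still "find each other".
TAXONOMY_MAP = {
    "agriculture development": "agriculture",
    "rural development": "agriculture",
    "farmers": "agriculture",
    "climate adaptation": "climate change",
    "global warming": "climate change",
}

def canonical_themes(local_tags):
    """Translate a partner's local tags into the Hub's shared themes.

    Tags with no mapping are passed through unchanged, so nothing is lost.
    """
    return sorted({TAXONOMY_MAP.get(tag.lower(), tag.lower()) for tag in local_tags})

print(canonical_themes(["Rural development", "Farmers"]))  # ['agriculture']
```

A real system would need richer structures (hierarchies, multilingual labels, standard vocabularies), but the principle is the same: one shared theme behind many local names.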

Work out branding, attribution, licensing and copyright

How open can we be about sharing content? When our content gets used in some way, e.g. featured on another website, will credit be given to the knowledge producer? If the knowledge service involves producing summaries and abstracts of research articles, then it is important to clarify with the original research producer how they license others to re-use their content (e.g. under a Creative Commons licence).

Since research producers, knowledge service providers and funders often use web analytics as a measure of success, organisations worry that if their content is ‘open’, users may never visit their website, denying them these important metrics. We therefore need to explore new ways of tracking how ‘open’ content is used beyond our own websites, or agree to share only enough data that users are directed to the originator’s website for the full information.
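One possible approach, sketched below in Python with made-up field names, is for every record the Hub shares to carry its licence, a credit line and a canonical link back to the originator, so that re-publishers can attribute correctly and users are referred back to the source for the full text.

```python
# Hypothetical record structure: the Hub shares only a teaser plus the
# metadata a re-publisher needs for correct attribution and referral.
record = {
    "title": "Adaptation finance in South Asia",
    "abstract": "A short teaser only; the full text lives with the originator.",
    "licence": "CC-BY-4.0",
    "attribution": "Produced by Example Institute",
    "source_url": "https://example.org/research/adaptation-finance",
}

def render_credit_line(rec):
    """Build the credit line a consuming website should display."""
    return f'{rec["attribution"]} ({rec["licence"]}). Full text: {rec["source_url"]}'

print(render_credit_line(record))
```

Sharing a teaser plus a canonical link is one way to square openness with analytics: the content travels, but readers still arrive at the originator’s website for the full document.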

The partners contributing to the Global Open Knowledge Hub are working through these issues. All the partners believe that development research has a crucial contribution to make to poverty reduction, but only if it is easily available and quickly accessible to users. So, what we are building together needs to become the prototype of what open systems should look like.

Radhika Menon is Senior Communication Officer at 3ie.