Tuesday, 18 February 2014

Open knowledge spells murky waters for M & E

By Ruth Goodman

In mid-January I ran a session on monitoring and evaluation at the Eldis Open Knowledge Hub Partnerships Meeting. The meeting brought together a group of people united by a concern with opening up access to research evidence and, particularly, with increasing the visibility of research from developing countries.

The partnerships meeting was held as part of the Global Open Knowledge Hub (GOKH), a three-year, DFID-funded project. The vision for GOKH is that IDS and its partners will build on their existing information services to create an open data architecture for exchanging and sharing research evidence: the Hub itself. For insight into the issues that need to be addressed in setting up an open knowledge hub, see Radhika Menon's recent blog The Global Open Knowledge Hub: building a dream machine-readable world.

Our hope is that, through this open data approach, partners and third-party users of the Hub will be able to extract and re-purpose research evidence that is relevant and contextual to their audiences. This in turn will make research content more visible, enabling otherwise unheard voices to contribute to global debate and decision making. My session on M & E, then, was concerned with how we can know whether this is being achieved.

M & E is great. It allows you to track and evidence what works and what doesn't so that you can learn, improve and grow. To reach this end, though, you need to know how to evaluate your work. When it comes to approaching M & E for the Hub, the waters are murky.
Open data approaches are still relatively new, and the body of evidence on M & E for open data work, let alone the specifics of evaluating and learning from this sort of Hub model, is sparse. The traditional technical methods of tracking information on the internet fall over when you make the data open. By making data open you give up most, if not all, of the control over how your data is used, implemented and displayed. There are ways to implement tracking, but these are easily circumvented, so the statistics you can obtain do not reliably represent the whole picture. If organisation A is consuming data from the Hub that organisation B has contributed, then, depending on how A implements the content, the 'hits' may register on organisation A's web statistics rather than organisation B's (a sketch at the end of this section shows how this can happen).

Even if and when we do identify the most suitable metric for measuring impact in open knowledge, as we discussed at the workshop, numbers aren't really enough. Web metrics are unreliable at the best of times, and their value lies in spotting trends in behaviour, not in demonstrating impact. To engage with quantitative data, people need to be clear on what that data is telling them. If open knowledge data is not the most exciting thing in the world for you, or is something you don't quite understand, then numbers are likely to do little to inspire understanding of, or perceived value in, open data initiatives such as the Hub. However, if you can tell a story about what the Hub has allowed users to do, then people have something real to engage with. Not only will they better understand the nature and value of the work, they are more likely to be motivated to care. At the workshop we discussed collating stories of use as one approach to M & E that might allow us to translate the value and challenges of open knowledge work to a wider audience.
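Returning to the attribution example above: the sketch below, which uses an entirely hypothetical Hub endpoint and field names rather than anything that actually exists, illustrates how a partner site might consume Hub metadata server-side and re-publish it. The Hub, and the contributing organisation, only ever see a single server-to-server request, while every visitor page view lands in the consuming site's own analytics.

```python
# Minimal sketch (hypothetical endpoint and field names) of a partner site
# consuming Hub metadata and re-publishing it on its own pages.
import requests

HUB_API = "https://example.org/hub/api/documents"  # hypothetical URL

def build_page(query):
    # One server-to-server request: this is all the Hub (or the contributing
    # partner) ever "sees", however many visitors later view the page.
    records = requests.get(HUB_API, params={"q": query}).json()

    # The records are re-rendered as local HTML, so every subsequent page
    # view is logged by the consuming site's analytics, not the contributor's.
    items = "".join(
        f"<li><a href='{r['url']}'>{r['title']}</a></li>" for r in records
    )
    return f"<ul>{items}</ul>"
```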

Other possibilities we discussed were around helping and supporting each other. If partner organisation A is featuring content from organisation B, delivered by the Hub, then A could potentially tell B how many hits B's content is getting (the sketch below illustrates one way this might work). If doing some M & E of their own, could partner A even add a couple of questions about partner B's data to their user survey? And what about the experiences and perceptions of the partners using the Hub? Partner organisations' own reflections and observations are as important as those of users in gaining a full understanding of the value and potential of the initiative.
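As a rough illustration of that kind of mutual support, and assuming (purely for the sake of the example) that each item a partner displays is tagged with the organisation that contributed it, partner A could aggregate its own page-view log by contributor and share the counts back:

```python
# Illustrative sketch only: aggregating partner A's own page-view log by the
# (assumed) contributing organisation recorded against each displayed item.
from collections import Counter

# Example entries, one per page view recorded by partner A's own analytics.
page_views = [
    {"item_id": "doc-101", "source_org": "Organisation B"},
    {"item_id": "doc-102", "source_org": "Organisation B"},
    {"item_id": "doc-205", "source_org": "Organisation C"},
]

hits_by_contributor = Counter(view["source_org"] for view in page_views)
for org, hits in hits_by_contributor.items():
    print(f"{org}: {hits} views of their content on our site")
```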

Moving forward, our aim is to convene an M & E working group which, among other things, could serve as a community of good practice where we can be open with each other about our evaluation efforts. By sharing our experiences of different M & E approaches, and the challenges they bring, we can work toward a position where we understand the influence of this work, can translate it to others in a comprehensible way, and can start to identify what we need to do to realise the potential of this exciting new arena.


Ruth Goodman is Monitoring, Evaluation and Learning Officer at the Institute of Development Studies